The Linkielist

Linking ideas with the world

T-Mobile Is Selling Your App and Web History to Advertisers, allowing extremely fine personal targeting (they say)

In yet another example of T-Mobile being The Worst with its customers’ data, the company announced a new money-making scheme this week: selling its customers’ app download data and web browsing history to advertisers.

The package of data is part of the company’s new “App Insights” adtech product that was in beta for the last year but formally rolled out this week. According to AdExchanger, which first reported news of the announcement from the Cannes Festival, the new product will let marketers track and target T-Mobile customers based on the apps they’ve downloaded and their “engagement patterns”—meaning when and how often those apps are used.

These same “patterns” also include the types of domains a person visits in their mobile web browser. All of this data gets bundled up into what the company calls “personas,” which let marketers microtarget someone by their phone habits. One example that T-Mobile’s head of ad products, Jess Zhu, gave AdExchanger was that a person with a human resources app on their phone who also tends to visit, say, Expedia’s website might be grouped as a “business traveler.” The company noted that there are no personas built on “gender or cultural identity”—so a person who visits a lot of, say, Christian websites and has a Bible app or two installed won’t be profiled based on that.

“App Insights transforms this data into actionable insights. Marketers can see app usage, growth, and retention and compare activity between brands and product categories,” a T-Mobile statement read.

T-Mobile (and Sprint, by association) certainly aren’t the only carriers pawning off this data; as Ars Technica first noted last year, Verizon overrode customers’ privacy preferences to sell off their browsing and app-usage data. And while AT&T had initially planned to sell access to similar data nearly a decade ago, the company currently claims that it exclusively uses “non-sensitive information” like your age range and zip code to serve up targeted ads.

But T-Mobile also won’t stop marketers from taking things into their own hands. One ad agency exec who spoke with AdExchanger said that one of the “most exciting” things about this new ad product is the ability to microtarget members of the LGBTQ community. Sure, that’s not one of the prebuilt personas offered in the App Insights product, “but a marketer could target phones with Grindr installed, for example, or use those audiences for analytics,” the original interview notes.

[…]

Source: T-Mobile Is Hawking Your App and Web History to Advertisers

Valorant will start listening in to and recording your voice chat in July

Riot Games will begin background evaluation of recorded in-game voice communications on July 13th in North America, in English. In a brief statement, Riot said that the purpose of the recording is ultimately to “collect clear evidence that could verify any violations of behavioral policies.”

For now, however, recordings will be used to develop the evaluation system that may eventually be implemented. That means training some kind of language model using the recordings, says Riot, to “get the tech in a good enough place for a beta launch later this year.”

Riot also makes clear that voice evaluation from this test will not be used for reports. “We know that before we can even think of expanding this tool, we’ll have to be confident it’s effective, and if mistakes happen, we have systems in place to make sure we can correct any false positives (or negatives for that matter),” said Riot.

Source: Valorant will start listening to your voice chat in July | PC Gamer

Oh, not used for reports. That’s ok then. No problem invading your privacy there then.

Coinbase Is Selling Data on Crypto and ‘Geotracking’ to ICE

Coinbase Tracer, the analytics arm of the cryptocurrency exchange Coinbase, has signed a contract with U.S. Immigration and Customs Enforcement that would allow the agency access to a variety of features and data caches, including “historical geo tracking data.”

Coinbase Tracer, according to the website, is for governments, crypto businesses, and financial institutions. It gives these clients the ability to trace transactions on the blockchain. It is also used to “investigate illicit activities including money laundering and terrorist financing” and “screen risky crypto transactions to ensure regulatory compliance.”

The deal was originally signed in September 2021, but the contract was only now obtained by watchdog group Tech Inquiry. The deal was made for a maximum amount of $1.37 million, and we knew at the time that it was a three-year contract for Coinbase’s analytics software. The now-revealed contract allows us to look more closely into what this deal entails.

This deal will allow ICE to track transactions made through twelve different currencies, including Ethereum, Tether, and Bitcoin. Other features include “Transaction demixing and shielded transaction analysis,” which appears to be aimed at catching users who launder funds or hide transactions. Another is “Multi-hop link analysis for incoming and outgoing funds,” which would give ICE insight into the transfer of the currencies. The most mysterious one is access to “historical geo tracking data,” and ICE gave a little insight into how this tool may be used.

[…]

Source: Coinbase Is Selling Data on Crypto and ‘Geotracking’ to ICE

New Firefox privacy feature strips URLs of tracking parameters

Numerous companies, including Facebook, Marketo, Olytics, and HubSpot, utilize custom URL query parameters to track clicks on links.

For example, Facebook appends a fbclid query parameter to outbound links to track clicks, with an example of one of these URLs shown below.

https://www.example.com/?fbclid=IwAR4HesRZLT-fxhhh3nZ7WKsOpaiFzsg4nH0K4WLRHw1h467GdRjaLilWbLs

With the release of Firefox 102, Mozilla has added the new ‘Query Parameter Stripping’ feature that automatically strips various query parameters used for tracking from URLs when you open them, whether that be by clicking on a link or simply pasting the URL into the address bar.

Once enabled, Firefox will strip the following tracking parameters from URLs when you click on links or paste a URL into the address bar:

  • Olytics: oly_enc_id=, oly_anon_id=
  • Drip: __s=
  • Vero: vero_id=
  • HubSpot: _hsenc=
  • Marketo: mkt_tok=
  • Facebook: fbclid=
  • Mailchimp: mc_eid=
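Conceptually, the stripping is simple: parse the URL, drop any query parameter whose name is on the blocklist, and reassemble. A minimal Python sketch of the idea, using only the parameter names listed above (Firefox’s actual blocklist and matching logic live inside the browser and may differ):

```python
from urllib.parse import parse_qsl, urlencode, urlsplit, urlunsplit

# Parameter names from the list above; Firefox's real blocklist may differ.
TRACKING_PARAMS = {"oly_enc_id", "oly_anon_id", "__s", "vero_id",
                   "_hsenc", "mkt_tok", "fbclid", "mc_eid"}

def strip_tracking(url: str) -> str:
    """Drop known tracking parameters, keeping all other query parameters."""
    scheme, netloc, path, query, fragment = urlsplit(url)
    kept = [(k, v) for k, v in parse_qsl(query, keep_blank_values=True)
            if k not in TRACKING_PARAMS]
    return urlunsplit((scheme, netloc, path, urlencode(kept), fragment))

print(strip_tracking("https://www.example.com/?fbclid=abc123&page=2"))
# → https://www.example.com/?page=2
```

Non-tracking parameters survive untouched, which is why the feature rarely breaks links even in Strict mode.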

[…]

To enable Query Parameter Stripping, go into the Firefox Settings, click on Privacy & Security, and then change ‘Enhanced Tracking Protection’ to ‘Strict.’


However, these tracking parameters will not be stripped in Private Mode even with Strict mode enabled.

To also enable the feature in Private Mode, enter about:config in the address bar, search for strip, and set the ‘privacy.query_stripping.enabled.pbmode’ option to true.


It should be noted that setting Enhanced Tracking Protection to Strict could cause issues when using particular sites.

If you enable this feature and find that sites are not working correctly, just set it back to Standard (which disables this feature) or to the Custom setting, which will require some tweaking.

Source: New Firefox privacy feature strips URLs of tracking parameters

Spain, Austria not convinced location data is personal

[…]

EU privacy group NOYB (None of your business), set up by privacy warrior Max “Angry Austrian” Schrems, said on Tuesday it appealed a decision of the Spanish Data Protection Authority (AEPD) to support Virgin Telco’s refusal to provide the location data it has stored about a customer.

In Spain, according to NOYB, the government still requires telcos to record the metadata of phone calls, text messages, and cell tower connections, despite Court of Justice of the European Union (CJEU) decisions that prohibit blanket data retention.

A Spanish customer demanded that Virgin reveal his personal data, as allowed under the GDPR. Article 15 of the GDPR guarantees individuals the right to obtain their personal data from companies that process and store it.

[…]

Virgin, however, refused to provide the customer’s location data, arguing that only law enforcement authorities may demand that information, so a complaint was filed in December 2021. And the AEPD sided with the company.

NOYB says that Virgin Telco failed to explain why Article 15 should not apply since the law contains no such limitation.

“The fundamental right to access is comprehensive and clear: users are entitled to know what data a company collects and processes about them – including location data,” argued Felix Mikolasch, a data protection attorney at NOYB, in a statement. “This is independent from the right of authorities to access such data. In this case, there is no relevant exception from the right to access.”

[…]

The group said it filed a similar appeal last November in Austria, where that country’s data protection authority similarly supported Austrian mobile provider A1’s refusal to turn over customer location data. In that case, A1’s argument was that location data should not be considered personal data because someone else could have used the subscriber phone that generated it.

[…]

Location data is potentially worth billions. According to Fortune Business Insights, the location analytics market is expected to bring in $15.76 billion in 2022 and $43.97 billion by 2029.

Outside the EU, the problem is the availability of location data, rather than lack of access. In the US, where there’s no federal data protection framework, the government is a major buyer of location data – it’s more convenient than getting a warrant.

And companies that can obtain location data, often through mobile app SDKs, appear keen to monetize it.

In 2020, the FCC fined the four largest wireless carriers in the US for failing to protect customer location data in accordance with a 2018 commitment to do so.

Source: Spain, Austria not convinced location data is personal • The Register

Facebook and Anti-Abortion Clinics Are Collecting Highly Sensitive Info on Would-Be Patients

Facebook is collecting ultra-sensitive personal data about abortion seekers and enabling anti-abortion organizations to use that data as a tool to target and influence people online, in violation of its own policies and promises.

In the wake of a leaked Supreme Court opinion signaling the likely end of nationwide abortion protections, privacy experts are sounding alarms about all the ways people’s data trails could be used against them if some states criminalize abortion.

A joint investigation by Reveal from The Center for Investigative Reporting and The Markup found that the world’s largest social media platform is already collecting data about people who visit the websites of hundreds of crisis pregnancy centers, which are quasi-health clinics, mostly run by religiously aligned organizations whose mission is to persuade people to choose an option other than abortion.

[…]

Reveal and The Markup have found Facebook’s code on the websites of hundreds of anti-abortion clinics. Using Blacklight, a Markup tool that detects cookies, keyloggers and other types of user-tracking technology on websites, Reveal analyzed the sites of nearly 2,500 crisis pregnancy centers – with data provided by the University of Georgia – and found that at least 294 shared visitor information with Facebook. In many cases, the information was extremely sensitive – for example, whether a person was considering abortion or looking to get a pregnancy test or emergency contraceptives.

[…]

Source: Facebook and Anti-Abortion Clinics Are Collecting Highly Sensitive Info on Would-Be Patients – Reveal

Testing firm Cignpost can profit from sale of Covid swabs with customer DNA

A large Covid-19 testing provider is being investigated by the UK’s data privacy watchdog over its plans to sell swabs containing customers’ DNA for medical research.

Source: Testing firm can profit from sale of Covid swabs | News | The Sunday Times

Find you: an airtag which Apple can’t find in unwanted tracking

[…]

In one stalking case, a fashion and fitness model discovered an AirTag in her coat pocket after receiving a tracking warning notification from her iPhone. In other cases, AirTags were placed in expensive cars or motorbikes to track them from parking spots to their owners’ homes, where they were then stolen.

On February 10, Apple addressed this by publishing a news statement titled “An update on AirTag and unwanted tracking” in which they describe the way they are currently trying to prevent AirTags and the Find My network from being misused and what they have planned for the future.

[…]

Apple needs to incorporate non-genuine AirTags into its threat model and implement security and anti-stalking features in the Find My protocol and ecosystem rather than in the AirTag itself, since a tracker can run modified firmware or not be an AirTag at all (Apple devices currently have no way to distinguish genuine AirTags from clones via Bluetooth).

The source code used for the experiment can be found here.

Edit: I have been made aware of a research paper titled “Who Tracks the Trackers?” (from November 2021) that also discusses this idea and includes more experiments. Make sure to check it out as well if you’re interested in the topic!

[…]

Now Amazon to put creepy AI cameras in UK delivery vans

Amazon is installing AI-powered cameras in delivery vans to keep tabs on its drivers in the UK.

The technology was first deployed in the US, where malfunctions reportedly denied drivers their bonuses. Last year, the internet giant produced a corporate video detailing how the cameras monitor driving behavior for safety reasons. The same system is now being rolled out to vehicles in the UK.

Multiple cameras are placed under the rearview mirror. One is directed at the person behind the wheel, one faces the road, and two are located on either side to provide a wider view. The cameras do not record video constantly, and are monitored by software built by Netradyne, a computer-vision startup focused on driver safety. This code uses machine-learning algorithms to figure out what’s going on in and around the vehicle. Delivery drivers can also activate the cameras to record footage if they want to, such as if someone’s trying to rob them or run them off the road. There is no microphone, for what it’s worth.

Audio alerts are triggered by some behaviors, such as if a driver fails to brake at a stop sign or is driving too fast. Other actions are silently logged, such as if the driver doesn’t wear a seat-belt or if a camera’s view is blocked. Amazon, reportedly in the US at least, records workers and calculates from their activities a score that affects their pay; drivers have previously complained of having bonuses unfairly deducted for behavior the computer system wrongly classified as reckless.

[…]

Source: Now Amazon to put ‘creepy’ AI cameras in UK delivery vans • The Register

Twitter fined $150 million after selling 2FA phone numbers and email addresses to advertisers for targeting

Twitter has agreed to pay a $150 million fine after federal law enforcement officials accused the social media company of illegally using people’s personal data over six years to help sell targeted advertisements.

In court documents made public on Wednesday, the Federal Trade Commission and the Department of Justice say Twitter violated a 2011 agreement with regulators in which the company vowed to not use information gathered for security purposes, like users’ phone numbers and email addresses, to help advertisers target people with ads.

Federal investigators say Twitter broke that promise.

“As the complaint notes, Twitter obtained data from users on the pretext of harnessing it for security purposes but then ended up also using the data to target users with ads,” said FTC Chair Lina Khan.

Twitter requires users to provide a telephone number and email address to authenticate accounts. That information also helps people reset their passwords and unlock their accounts when the company blocks logging in due to suspicious activity.

But until at least September 2019, Twitter was also using that information to boost its advertising business by allowing advertisers access to users’ phone numbers and email addresses. That ran afoul of the agreement the company had with regulators.

[…]

Source: Twitter will pay a $150 million fine over accusations it improperly sold user data : NPR

Clearview AI Ordered to Purge U.K. Face Scans, Pay GBP 7.5m Fine

The United Kingdom has had it with creepy facial recognition firm Clearview AI. Under a new enforcement rule from the U.K.’s Information Commissioner’s Office, Clearview must cease the collection and use of publicly available U.K. data and delete all data of U.K. residents from its database. The order, which will also require the company to pay a £7,552,800 ($9,507,276) fine, effectively calls on Clearview to purge U.K. residents from its massive face database, reportedly consisting of over 20 billion images scraped from publicly available social media sites.

The ICO ruling, which determined Clearview violated U.K. privacy laws, comes on the heels of a multi-year joint investigation with the Australian Information Commissioner. According to the ruling, Clearview failed to use U.K. residents’ data in a way that was fair and transparent and failed to provide a lawful reason for collecting the data in the first place. Clearview also failed, the ICO notes, to put in place measures to stop U.K. residents’ data from being retained indefinitely, and supposedly didn’t meet the higher data protection standards outlined in the EU’s General Data Protection Regulation.

[…]

Source: Clearview AI Ordered to Purge U.K. Face Scans, Pay Fine

Your data’s auctioned off up to 987 times a day, NGO reports

The average American has their personal information shared in an online ad bidding war 747 times a day. For the average EU citizen, that number is 376 times a day. In one year, 178 trillion instances of the same bidding war happen online in the US and EU.

That’s according to data shared by the Irish Council for Civil Liberties in a report detailing the extent of real-time bidding (RTB), the technology that drives almost all online advertising and which it said relies on sharing of personal information without user consent.

The RTB industry was worth more than $117 billion last year, the ICCL report said. As with all things in its study, those numbers only apply to the US and Europe, which means the actual value of the market is likely much higher.

Real-time bidding involves the sharing of information about internet users, and it happens whenever a user lands on a website that serves ads. Information shared with advertisers can include nearly anything that would help them better target ads, and those advertisers bid on the ad space based on the information the ad network provides.

That data can be practically anything based on the Interactive Advertising Bureau’s (IAB) audience taxonomy. The basics, of course, like age, sex, location, income and the like are included, but it doesn’t stop there. All sorts of websites fingerprint their visitors – even charities treating mental health conditions – and those fingerprints can later be used to target ads on unrelated websites.
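To make the mechanism concrete, here is a simplified, illustrative bid request in the style of the IAB’s OpenRTB spec, the protocol most RTB exchanges speak. The top-level field names (`imp`, `site`, `device`, `user`, `geo`) come from OpenRTB 2.x; every value is invented, and real requests carry far more fields:

```python
import json

# Illustrative OpenRTB-2.x-style bid request (all values invented).
# Something like this is broadcast to bidders every time an ad slot loads.
bid_request = {
    "id": "auction-0001",
    "imp": [{"id": "1", "banner": {"w": 300, "h": 250}}],  # the ad slot for sale
    "site": {"domain": "news.example.com",
             "page": "https://news.example.com/story"},    # what you're reading
    "device": {
        "ua": "Mozilla/5.0 (Linux; Android 12)",  # fingerprinting input
        "ip": "203.0.113.7",                      # coarse location
        "geo": {"lat": 53.35, "lon": -6.26},
    },
    "user": {
        "id": "a1b2c3d4",   # pseudonymous cross-site identifier
        "yob": 1987,        # year of birth
        "gender": "F",
    },
}

# Serialized and sent to every bidder participating in the auction --
# whether or not they end up buying the impression.
payload = json.dumps(bid_request)
```

The key point the ICCL makes is the last comment: every bidder receives this profile, not just the auction winner, which is why a single page load can share your data with thousands of companies.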

Google owns the largest ad network that was included in the ICCL’s report, and it alone offers RTB data to 4,698 companies in just the US. Other large advertising networks include Xandr, owned by Microsoft since late 2021, Verizon, PubMatic and more.

Not included in ICCL’s report are Amazon’s or Facebook’s RTB networks, as the industry figures it used for its report don’t include their ad networks. Along with only surveying part of the world, that likely means the scope of the RTB industry is, again, much larger.

Also, it’s probably illegal

The ICCL describes RTB as “the biggest data breach ever recorded,” but even that may be giving advertisers too much credit: Calling freely-broadcast RTB data a breach implies action was taken to bypass defenses, of which there aren’t any.

So, is RTB violating any laws at all? Yes, claims Gartner Privacy Research VP Nader Henein. He told The Register that the adtech industry justifies its use of RTB under the “legitimate interest” provision of the EU’s General Data Protection Regulation (GDPR).

“Multiple regulators have rejected that assessment, so the answer would be ‘yes,’ it is a violation [of the GDPR],” Henein opined.

As far back as 2019, Google and other adtech giants were accused by the UK’s data watchdog of knowingly breaking the law by using RTB, a case it continues to investigate. Earlier this year, the Belgian data protection authority ruled that RTB practices violated the GDPR and required organizations working with the IAB to delete all the data collected through the use of TC Strings, the coded consent strings used in the RTB process.

[…]

Source: Privacy. Ad bidders haven’t heard of it, report reveals

New EU rules would require chat apps to scan private messages for child abuse

The European Commission has proposed controversial new regulation that would require chat apps like WhatsApp and Facebook Messenger to selectively scan users’ private messages for child sexual abuse material (CSAM) and “grooming” behavior. The proposal is similar to plans mooted by Apple last year but, say critics, much more invasive.

After a draft of the regulation leaked earlier this week, privacy experts condemned it in the strongest terms. “This document is the most terrifying thing I’ve ever seen,” tweeted cryptography professor Matthew Green. “It describes the most sophisticated mass surveillance machinery ever deployed outside of China and the USSR. Not an exaggeration.”

Jan Penfrat of digital advocacy group European Digital Rights (EDRi) echoed the concern, saying, “This looks like a shameful general #surveillance law entirely unfitting for any free democracy.” (A comparison of the PDFs shows differences between the leaked draft and final proposal are cosmetic only.)

The regulation would establish a number of new obligations for “online service providers” — a broad category that includes app stores, hosting companies, and any provider of “interpersonal communications service.”

The most extreme obligations would apply to communications services like WhatsApp, Signal, and Facebook Messenger. If a company in this group receives a “detection order” from the EU, it would be required to scan select users’ messages to look for known child sexual abuse material as well as previously unseen CSAM and any messages that may constitute “grooming” or the “solicitation of children.” These last two categories of content would require the use of machine vision tools and AI systems to analyze the context of pictures and text messages.

[…]

“The proposal creates the possibility for [the orders] to be targeted but doesn’t require it,” Ella Jakubowska, a policy advisor at EDRi, told The Verge. “It completely leaves the door open for much more generalized surveillance.”

[…]

Source: New EU rules would require chat apps to scan private messages for child abuse – The Verge

Web ad firms scrape email addresses before you press the submit button

Tracking, marketing, and analytics firms have been exfiltrating the email addresses of internet users from web forms prior to submission and without user consent, according to security researchers.

Some of these firms are said to have also inadvertently grabbed passwords from these forms.

In a research paper scheduled to appear at the Usenix ’22 security conference later this year, authors Asuman Senol (imec-COSIC, KU Leuven), Gunes Acar (Radboud University), Mathias Humbert (University of Lausanne) and Frederik Zuiderveen Borgesius (Radboud University) describe how they measured data handling in web forms on the top 100,000 websites, as ranked by research site Tranco.

The boffins created their own software to measure email and password data gathering from web forms – structured web input boxes through which site visitors can enter data and submit it to a local or remote application.

Providing information through a web form by pressing the submit button generally indicates the user has consented to provide that information for a specific purpose. But web pages, because they run JavaScript code, can be programmed to respond to events prior to a user pressing a form’s submit button.

And many companies involved in data gathering and advertising appear to believe that they’re entitled to grab the information website visitors enter into forms with scripts before the submit button has been pressed.

[…]

“Furthermore, we find incidental password collection on 52 websites by third-party session replay scripts,” the researchers say.

Replay scripts are designed to record keystrokes, mouse movements, scrolling behavior, other forms of interaction, and webpage contents in order to send that data to marketing firms for analysis. In an adversarial context, they’d be called keyloggers or malware; but in the context of advertising, somehow it’s just session-replay scripts.

[…]

Source: Web ad firms scrape email addresses before you know it • The Register

Indian Government Now Wants VPNs To Collect And Turn Over Personal Data On Users

The government of India still claims to be a democracy, but its decade-long assault on the internet and the rights of its citizens suggests it would rather be an autocracy.

The country is already host to one of the largest biometric databases in the world, housing information collected from nearly every one of its 1.2 billion citizens. And it’s going to be expanded, adding even more biometric markers from people arrested and detained.

The government has passed laws shifting liability for third-party content to service providers, as well as requiring them to provide 24/7 assistance to the Indian government for the purpose of removing “illegal” content. Then there are mandates on compelled access — something that would require broken/backdoored encryption. (The Indian government — like others demanding encryption backdoors — refuses to acknowledge this is what it’s seeking.)

In the name of cybersecurity, the Indian government is now seeking to further undermine the privacy of its citizens.

[…]

The new directions issued by CERT-In also require virtual asset, exchange, and custodian wallet providers to maintain records on KYC (know-your-customer) checks and financial transactions for a period of five years. Companies providing cloud and virtual private network (VPN) services will also have to register validated names, emails, and IP addresses of subscribers.

Taking the “P” out of “VPN:” that’s the way forward for the Indian government, which has apparently decided to emulate China’s strict control of internet use. And it’s yet another way the Indian government is stripping citizens of their privacy and anonymity. The government of India wants to know everything about its constituents while remaining vague and opaque about its own actions and goals.

Source: Indian Government Now Wants VPNs To Collect And Turn Over Personal Data On Users | Techdirt

Hackers are reportedly using emergency data requests to extort women and minors

In response to fraudulent legal requests, companies like Apple, Google, Meta and Twitter have been tricked into sharing sensitive personal information about some of their customers. We knew that was happening as recently as last month when Bloomberg published a report on hackers using fake emergency data requests to carry out financial fraud. But according to a newly published report from the outlet, some malicious individuals are also using the same tactics to target women and minors with the intent of extorting them into sharing sexually explicit images and videos of themselves.

It’s unclear how many fake data requests the tech giants have fielded since they appear to come from legitimate law enforcement agencies. But what makes the requests particularly effective as an extortion tactic is that the victims have no way of protecting themselves other than by not using the services offered by those companies.

[…]

Part of what has allowed the fake requests to slip through is that they abuse how the industry typically handles emergency appeals. Among most tech companies, it’s standard practice to share a limited amount of information with law enforcement in response to “good faith” requests related to situations involving imminent danger.

Typically, the information shared in those instances includes the name of the individual, their IP, email and physical address. That might not seem like much, but it’s usually enough for bad actors to harass, dox or SWAT their target. According to Bloomberg, there have been “multiple instances” of police showing up at the homes and schools of underage women.

[…]

Source: Hackers are reportedly using emergency data requests to extort women and minors | Engadget

Brave’s De-AMP feature bypasses harmful Google AMP pages

Brave announced a new feature for its browser on Tuesday: De-AMP, which automatically jumps past any page rendered with Google’s Accelerated Mobile Pages framework and instead takes users straight to the original website. “Where possible, De-AMP will rewrite links and URLs to prevent users from visiting AMP pages altogether,” Brave said in a blog post. “And in cases where that is not possible, Brave will watch as pages are being fetched and redirect users away from AMP pages before the page is even rendered, preventing AMP / Google code from being loaded and executed.”
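Brave hasn’t published pseudocode for the rewrite, but the common case is mechanical: Google’s AMP viewer serves pages at URLs like https://www.google.com/amp/s/<host>/<path>, where the s/ segment marks an https origin. A minimal Python sketch of that one pattern (an assumption for illustration; the real De-AMP also handles other AMP caches such as *.cdn.ampproject.org and pages’ own canonical-link markup):

```python
from urllib.parse import urlsplit

def deamp(url: str) -> str:
    """Rewrite a google.com/amp/... viewer URL back to the original page.
    Returns the URL unchanged if it doesn't match this one pattern."""
    parts = urlsplit(url)
    if parts.netloc in ("google.com", "www.google.com") and parts.path.startswith("/amp/"):
        rest = parts.path[len("/amp/"):]
        if rest.startswith("s/"):               # '/amp/s/' = https origin
            return "https://" + rest[len("s/"):]
        return "http://" + rest                 # bare '/amp/' = http origin
    return url

print(deamp("https://www.google.com/amp/s/example.com/story"))
# → https://example.com/story
```

The sketch drops any AMP-viewer query string for simplicity; the point is that the original publisher’s URL is recoverable from the cache URL itself, which is what makes automatic redirection possible.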

Brave framed De-AMP as a privacy feature and didn’t mince words about its stance toward Google’s version of the web. “In practice, AMP is harmful to users and to the Web at large,” Brave’s blog post said, before explaining that AMP gives Google even more knowledge of users’ browsing habits, confuses users, and can often be slower than normal web pages. And it warned that the next version of AMP — so far just called AMP 2.0 — will be even worse.

Brave’s stance is a particularly strong one, but the tide has turned hard against AMP over the last couple of years. Google originally created the framework in order to simplify and speed up mobile websites, and AMP is now managed by a group of open-source contributors. It was controversial from the very beginning and smelled to some like Google trying to exert even more control over the web. Over time, more companies and users grew concerned about that control and chafed at the idea that Google would prioritize AMP pages in search results. Plus, the rest of the internet eventually figured out how to make good mobile sites, which made AMP — and similar projects like Facebook Instant Articles — less important.

A number of popular apps and browser extensions make it easy for users to skip over AMP pages, and in recent years, publishers (including The Verge’s parent company Vox Media) have moved away from using it altogether. AMP has even become part of the antitrust fight against Google: a lawsuit alleged that AMP helped centralize Google’s power as an ad exchange and that Google made non-AMP ads load slower.

[…]

Source: Brave’s De-AMP feature bypasses ‘harmful’ Google AMP pages – The Verge

Cisco’s Webex phoned home audio telemetry even when muted

Boffins at two US universities have found that muting popular native video-conferencing apps fails to disable device microphones – and that these apps can access audio data while muted, and in at least one case actually do.

The research is described in a paper titled, “Are You Really Muted?: A Privacy Analysis of Mute Buttons in Video Conferencing Apps,” [PDF] by Yucheng Yang (University of Wisconsin-Madison), Jack West (Loyola University Chicago), George K. Thiruvathukal (Loyola University Chicago), Neil Klingensmith (Loyola University Chicago), and Kassem Fawaz (University of Wisconsin-Madison).

The paper is scheduled to be presented at the Privacy Enhancing Technologies Symposium in July.

[…]

Among the apps studied – Zoom (Enterprise), Slack, Microsoft Teams/Skype, Cisco Webex, Google Meet, BlueJeans, WhereBy, GoToMeeting, Jitsi Meet, and Discord – most presented only limited or theoretical privacy concerns.

The researchers found that all of these apps had the ability to capture audio when the mic is muted but most did not take advantage of this capability. One, however, was found to be taking measurements from audio signals even when the mic was supposedly off.

“We discovered that all of the apps in our study could actively query (i.e., retrieve raw audio) the microphone when the user is muted,” the paper says. “Interestingly, in both Windows and macOS, we found that Cisco Webex queries the microphone regardless of the status of the mute button.”

They found that Webex, every minute or so, sends network packets “containing audio-derived telemetry data to its servers, even when the microphone was muted.”
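A once-a-minute cadence like that stands out in a packet capture. The heuristic below is a hypothetical sketch of how such periodic telemetry could be flagged from packet timestamps alone; it is not the paper's methodology, which combined runtime instrumentation of the apps with network traffic analysis.

```python
from statistics import mean, pstdev

def looks_periodic(timestamps, expected=60.0, tol=5.0):
    """Flag a packet stream as periodic telemetry if the gaps between
    packets (in seconds) cluster tightly around an expected interval.

    Hypothetical heuristic for illustration; thresholds are arbitrary.
    """
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    if len(gaps) < 3:
        return False  # too few samples to call it periodic
    return abs(mean(gaps) - expected) <= tol and pstdev(gaps) <= tol

# A muted-Webex-like trace: one packet roughly every minute
trace = [0.0, 59.8, 120.5, 180.1, 240.7]
```

Here `looks_periodic(trace)` is true, while an irregular burst of packets would not be flagged.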

[…]

Worse still from a security standpoint, while other apps encrypted their outgoing data stream before sending it to the operating system’s socket interface, Webex did not.

“Only in Webex were we able to intercept plaintext immediately before it is passed to the Windows network socket API,” the paper says, noting that the app’s monitoring behavior is inconsistent with the Webex privacy policy.

The app’s privacy policy states Cisco Webex Meetings does not “monitor or interfere with you your [sic] meeting traffic or content.”

[…]

Source: Cisco’s Webex phoned home audio telemetry even when muted • The Register

Mega-Popular Muslim Prayer Apps Were Secretly Harvesting Phone Numbers

Google recently booted over a dozen apps from its Play Store—among them Muslim prayer apps with 10 million-plus downloads, a barcode scanner, and a clock—after researchers discovered secret data-harvesting code hidden within them. Creepier still, the clandestine code was engineered by a company linked to a Virginia defense contractor, which paid developers to incorporate its code into their apps to pilfer users’ data.

While auditing apps, the researchers came upon a piece of code that had been implanted in multiple apps and was being used to siphon off personal identifiers and other data from devices. The code, a software development kit, or SDK, could “without a doubt be described as malware,” one researcher said.

For the most part, the apps in question appear to have served basic, repetitive functions—the sort that a person might download and then promptly forget about. However, once implanted onto the user’s phone, the SDK-laced programs harvested important data points about the device and its users like phone numbers and email addresses, researchers revealed.

The Wall Street Journal originally reported that the invasive code was discovered by a pair of researchers, Serge Egelman and Joel Reardon, who co-founded an organization called AppCensus, which audits mobile apps for user privacy and security. In a blog post on their findings, Reardon writes that AppCensus initially reached out to Google about the issue in October 2021. However, the apps weren’t expunged from the Play Store until March 25, after Google had investigated, the Journal reports.

[…]

Source: Mega-Popular Muslim Prayer Apps Were Secretly Harvesting Phone Numbers

EU, US strike preliminary deal to unlock transatlantic data flows – yup, the EU will let the US spy on its citizens freely again

Negotiators have been working on an agreement — which allows Europeans’ personal data to flow to the United States — since the EU’s top court struck down the Privacy Shield agreement in July 2020 because of fears that the data was not safe from access by American agencies once transferred across the Atlantic.

The EU chief’s comments Friday show both sides have reached a political breakthrough, coinciding with U.S. President Joe Biden’s visit to Brussels this week.

“I am pleased that we found an agreement in principle on a new framework for transatlantic data flows. This will enable predictable and trustworthy data flows between the EU and U.S., safeguarding privacy and civil liberties,” she said.

Biden said the framework would allow the EU “to once again authorize transatlantic data flows that help facilitate $7.1 trillion in economic relationships.”

Friday’s announcement will come as a relief to the hundreds of companies that had faced mounting legal uncertainty over how to shuttle everything from payroll information to social media post data to the U.S.

Officials on both sides of the Atlantic had been struggling to bridge an impasse over what it means to give Europeans effective legal redress against surveillance by U.S. authorities. Not all of those issues have been resolved, though von der Leyen’s comments Friday suggest technical solutions are within reach.

Despite the ripples of relief Friday’s announcement will send through the business community, any deal is likely to be challenged in the courts by privacy campaigners.

Source: EU, US strike preliminary deal to unlock transatlantic data flows – POLITICO

Messages, Dialer apps sent text, call info to Google

Google’s Messages and Dialer apps for Android devices have been collecting and sending data to Google without specific notice and consent, and without offering the opportunity to opt-out, potentially in violation of Europe’s data protection law.

According to a research paper, “What Data Do The Google Dialer and Messages Apps On Android Send to Google?” [PDF], by Trinity College Dublin computer science professor Douglas Leith, Google Messages (for text messaging) and Google Dialer (for phone calls) have been sending data about user communications to the Google Play Services Clearcut logger service and to Google’s Firebase Analytics service.

“The data sent by Google Messages includes a hash of the message text, allowing linking of sender and receiver in a message exchange,” the paper says. “The data sent by Google Dialer includes the call time and duration, again allowing linking of the two handsets engaged in a phone call. Phone numbers are also sent to Google.”

The timing and duration of other user interactions with these apps have also been transmitted to Google. And Google offers no way to opt out of this data collection.

[…]

From the Messages app, Google takes the message content and a timestamp, generates a SHA256 hash – the output of an algorithm that maps the human-readable content to a fixed-length digest – and then transmits a truncated 128-bit portion of that hash to Google’s Clearcut logger and Firebase Analytics.

Hashes are designed to be difficult to reverse, but in the case of short messages, Leith said he believes some of these could be undone to recover some of the message content.

“I’m told by colleagues that yes, in principle this is likely to be possible,” Leith said in an email to The Register today. “The hash includes an hourly timestamp, so it would involve generating hashes for all combinations of timestamps and target messages and comparing these against the observed hash for a match – feasible I think for short messages given modern compute power.”
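Leith’s reasoning can be sketched concretely. Assuming, hypothetically, that the logged value is a 128-bit truncation of SHA256 over the message combined with an hourly timestamp – the exact input encoding Google uses is not public, so the `truncated_hash` helper below is purely illustrative – recovering a short message reduces to enumerating candidate (message, hour) pairs:

```python
import hashlib

def truncated_hash(message: str, hour_ts: int) -> bytes:
    """Hypothetical stand-in for the value Messages reportedly logs:
    SHA256 over the message plus an hourly timestamp, truncated to
    128 bits. The real input encoding is not public."""
    return hashlib.sha256(f"{hour_ts}:{message}".encode()).digest()[:16]

def recover_short_message(observed: bytes, candidates, hours):
    """Brute force: hash every (candidate message, hour) pair and
    compare against the observed truncated hash."""
    for hour in hours:
        for msg in candidates:
            if truncated_hash(msg, hour) == observed:
                return msg, hour
    return None
```

With a dictionary of common short messages and a bounded window of hourly timestamps, the search space is small enough that modern hardware can exhaust it – which is exactly why truncation and hashing alone don’t anonymize short texts.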

The Dialer app likewise logs incoming and outgoing calls, along with the time and the call duration.

[…]

The paper describes nine recommendations made by Leith and six changes Google has already made or plans to make to address the concerns raised in the paper. The changes Google has agreed to include:

  • Revising the app onboarding flow so that users are notified they’re using a Google app and are presented with a link to Google’s consumer privacy policy.
  • Halting the collection of the sender phone number by the CARRIER_SERVICES log source, of the SIM ICCID, and of a hash of sent/received message text by Google Messages.
  • Halting the logging of call-related events in Firebase Analytics from both Google Dialer and Messages.
  • Shifting more telemetry data collection to use the least long-lived identifier available where possible, rather than linking it to a user’s persistent Android ID.
  • Making it clear when caller ID and spam protection are turned on and how they can be disabled, while also looking at ways to use less information or fuzzed information for safety functions.

[…]

Leith said there are two larger matters related to Google Play Services, which is installed on almost all Android phones outside of China.

“The first is that the logging data sent by Google Play Services is tagged with the Google Android ID which can often be linked to a person’s real identity – so the data is not anonymous,” he said. “The second is that we know very little about what data is being sent by Google Play Services, and for what purpose(s). This study is the first to cast some light on that, but it’s very much just the tip of the iceberg.”

Source: Messages, Dialer apps sent text, call info to Google • The Register

HBO hit with class action lawsuit for allegedly sharing subscriber data with Facebook

HBO is facing a class action lawsuit over allegations that it gave subscribers’ viewing history to Facebook without proper permission, Variety has reported. The suit accuses HBO of providing Facebook with customer lists, allowing the social network to match viewing habits with their profiles.

It further alleges that HBO knows Facebook can combine the data because HBO is a major Facebook advertiser — and Facebook can then use that information to retarget ads to its subscribers. Since HBO never received proper customer consent to do this, it allegedly violated the 1988 Video Privacy Protection Act (VPPA), according to the lawsuit.

HBO, like other sites, discloses to users that it (and partners) use cookies to deliver personalized ads. However, the VPPA requires separate consent from users to share their video viewing history. “A standard privacy policy will not suffice,” according to the suit.

Other streaming providers have been hit with similar claims, and TikTok recently agreed to pay a $92 million settlement for (in part) violating the VPPA. In another case, however, a judge ruled in 2015 that Hulu didn’t knowingly share data with Facebook that could establish an individual’s viewing history. The law firm involved in the HBO suit previously won a $50 million settlement with Hearst after alleging that it violated Michigan privacy laws by selling subscriber data.

Source: HBO hit with class action lawsuit for allegedly sharing subscriber data with Facebook | Engadget

Italy slaps creepy webscraping facial recognition firm Clearview AI with €20 million fine

Italy’s data privacy watchdog said it will fine the controversial facial recognition firm Clearview AI for breaching EU law. An investigation by Garante, Italy’s data protection authority, found that the company’s database of 10 billion images of faces includes those of Italians and residents in Italy. The New York City-based firm is being fined €20 million, and will also have to delete any facial biometrics it holds of Italian nationals.

This isn’t the first time that the beleaguered facial recognition tech company has faced legal consequences. The UK data protection authority last November fined the company £17 million after finding its practices—which include collecting images of people without their consent, from sources ranging from social media to security camera footage and mugshots—violate the nation’s data protection laws. The company has also been banned in Sweden, France and Australia.

The accumulated fines will be a considerable blow for the now five-year-old company, completely wiping away the $30 million it raised in its last funding round. But Clearview AI appears to be just getting started. The company is on track to patent its biometric database, which scans faces across public internet data and has been used by law enforcement agencies around the world, including police departments in the United States and a number of federal agencies. A number of Democrats have urged federal agencies to drop their contracts with Clearview AI, claiming that the tool is a severe threat to the privacy of everyday citizens. In a letter to the Department of Homeland Security, Sens. Ed Markey and Jeff Merkley and Reps. Pramila Jayapal and Ayanna Pressley urged regulators to discontinue their use of the tool.

“Clearview AI reportedly scrapes billions of photos from social media sites without permission from or notice to the pictured individuals. In conjunction with the company’s facial recognition capabilities, this trove of personal information is capable of fundamentally dismantling Americans’ expectation that they can move, assemble, or simply appear in public without being identified,” wrote the authors of the letter.

Despite losing troves of facial recognition data from entire countries, Clearview AI has a plan to rapidly expand this year. The company told investors that it is on track to have 100 billion photos of faces in its database within a year, reported The Washington Post. In its pitch deck, the company said it hopes to secure an additional $50 million from investors to build even more facial recognition tools and ramp up its lobbying efforts.

Source: Italy slaps facial recognition firm Clearview AI with €20 million fine | Engadget

UK Online Safety Bill to require more data to use social media – eg send them your passport

The country’s forthcoming Online Safety Bill will require citizens to hand over even more personal data to largely foreign-headquartered social media platforms, government minister Nadine Dorries has declared.

“The vast majority of social networks used in the UK do not require people to share any personal details about themselves – they are able to identify themselves by a nickname, alias or other term not linked to a legal identity,” said Dorries, Secretary of State for Digital, Culture, Media and Sport (DCMS).

Another legal duty to be imposed on social media platforms will be a requirement to give users a “block” button, something that has been part of most of today’s platforms since their launch.

“When it comes to verifying identities,” said DCMS in a statement, “some platforms may choose to provide users with an option to verify their profile picture to ensure it is a true likeness. Or they could use two-factor authentication where a platform sends a prompt to a user’s mobile number for them to verify.”

“Alternatively,” continued the statement, “verification could include people using a government-issued ID such as a passport to create or update an account.”

Two-factor authentication is a login technology to prevent account hijacking by malicious people, not a method of verifying a user’s government-approved identity.
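The distinction matters: a TOTP-style second factor only proves that whoever is logging in holds a shared secret, not who that person legally is. A minimal RFC 6238 sketch makes this concrete – nothing in the computation touches identity at all:

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, t=None, step=30, digits=6) -> str:
    """Minimal RFC 6238 TOTP sketch. The resulting code proves
    possession of `secret` at time `t` - it says nothing about the
    holder's legal identity, which is the distinction the article
    draws against government-ID verification."""
    counter = int((time.time() if t is None else t) // step)
    msg = struct.pack(">Q", counter)               # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

Against the RFC 6238 test vectors, `totp(b"12345678901234567890", t=59, digits=8)` produces `94287082` – a valid second factor for anyone holding that secret, whoever they are.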

“People will now have more control over who can contact them and be able to stop the tidal wave of hate served up to them by rogue algorithms,” said Dorries.

Social networks offering services to Britons don’t currently require lots of personal data to register as a user. Most people see this as a benefit; the government seems to see it as a negative.

Today’s statement has led to widespread concerns that DCMS will place UK residents at greater risk of online identity theft or of falling victim to a data breach.

The Online Safety Bill was renamed from the Online Harms Bill shortly before its formal introduction to Parliament. Widely regarded by the technically literate as a disaster in the making, the bill, critics have said, risks creating an “algorithm-driven censorship future” through new regulations that would make it legally risky for platforms not to proactively censor users’ posts.

It is also closely linked to strong rhetoric discouraging end-to-end encryption rollouts for the sake of “minors”, and its requirements would mean that tech platforms attempting to comply would have to weaken security measures.

Parliamentary efforts at properly scrutinising the draft bill then led to the “scrutineers” instead publishing a manifesto asking for even stronger legal weapons to be included.

[…]

Source: Online Safety Bill to require more data to use social media

EU Data Watchdog Calls for Total Ban of Pegasus Spyware

Israeli authorities say it should be probed and U.S. authorities are calling for it to be sanctioned, but EU officials have a different idea for how to handle Pegasus spyware: just ban that shit entirely.

That’s the main takeaway from a new memo released Tuesday by the EDPS, the EU’s dedicated data watchdog, noting that a full-on ban across the entire region is the only appropriate response to the “unprecedented risks” the tech poses—not only to people’s devices but “to democracy and the rule of law.”

“As the specific technical characteristics of spyware tools like Pegasus make control over their use very difficult, we have to rethink the entire existing system of safeguards established to protect our fundamental rights and freedoms,” the report reads. “Pegasus constitutes a paradigm shift in terms of access to private communications and devices. This fact makes its use incompatible with our democratic values.”

A “paradigm shift” is a good way to describe the tool, which has been used against a mounting number of civic actors, activists, and political figures from around the globe, including some notable figures from inside the EU. This past summer, local outlets reported that French president Emmanuel Macron surfaced on a list of potential targets that foreign actors had planned to surveil with the software, and later reports revealed traces of the tech appearing on phones belonging to Macron’s current staffers. Officials from other EU member states like Hungary and Spain have also reported the tech on their devices, and Poland became the latest member to join the list last month when a team of researchers found the spyware being used to surveil three outspoken critics of the Polish government.

[…]

Source: EU Data Watchdog Calls for Total Ban of Pegasus Spyware