Samsung adds ‘repair mode’ to smartphone

When activated, repair mode prevents a range of behaviors – from casual snooping to outright lifting of personal data – by blocking access to photos, messages, and account information.

The mode provides technicians with the access they need to make a fix, including to the apps a user has installed. But repairers won’t see user data within those apps, so content like photos, texts, and emails remains secure.

When users enable repair mode, their device reboots. To exit, the user reboots again, logs in as usual, and turns the setting off.

Samsung said it is rolling out repair mode via software update, initially on the Galaxy S21 series within South Korea, with more models, and perhaps locations, getting the functionality over time.

Samsung has not explained how the feature works. Android devices already offer the chance to establish accounts for different users, so perhaps Samsung has created a role for repair technicians and made that easier to access.

Most repair technicians won’t want to view or steal a customer’s personal data – but it does happen.

Apple was forced to pay millions last year after two iPhone repair contractors allegedly stole and posted a woman’s nudes to the internet. That fiasco was in no way an isolated incident. In 2019 a Genius Bar employee allegedly texted himself explicit images taken from an iPhone he repaired and was subsequently fired.

[…]

Source: Samsung adds ‘repair mode’ to South Korean smartphone • The Register

Twitter warns of ‘record highs’ in account data requests

Twitter has published its 20th transparency report, and the details still aren’t reassuring to those concerned about abuses of personal info. The social network saw “record highs” in the number of account data requests during the July-December 2021 reporting period, with 47,572 legal demands on 198,931 accounts. The media in particular faced much more pressure. Government demands for data from verified news outlets and journalists surged 103 percent compared to the last report, with 349 accounts under scrutiny.

The largest slice of requests targeting the news industry came from India (114), followed by Turkey (78) and Russia (55). Governments succeeded in withholding 17 tweets.

As in the past, US demands represented a disproportionately large chunk of the overall volume. The country accounted for 20 percent of all worldwide account info requests, and those requests covered 39 percent of all specified accounts. Russia is still the second-largest requester with 18 percent of volume, even if its demands dipped 20 percent during the six-month timeframe.

The company said it was still denying or limiting access to info when possible. It denied 31 percent of US data requests, and either narrowed or shut down 60 percent of global demands. Twitter also opposed 29 civil attempts to identify anonymous US users, citing First Amendment reasons. It sued in two of those cases, and has so far had success with one of those suits. There hasn’t been much success in reporting on national security-related requests in the US, however, and Twitter is still hoping to win an appeal that would let it share more details.

[…]

Source: Twitter warns of ‘record highs’ in account data requests | Engadget

Records reveal the scale of Homeland Security’s phone location data purchases

Investigators raised alarm bells when they learned Homeland Security bureaus were buying phone location data to effectively bypass the Fourth Amendment requirement for a search warrant, and now it’s clearer just how extensive those purchases were. TechCrunch notes the American Civil Liberties Union has obtained records linking Customs and Border Protection, Immigration and Customs Enforcement and other DHS divisions to purchases of roughly 336,000 phone location points from the data broker Venntel. The info represents just a “small subset” of raw data from the southwestern US, and includes a burst of 113,654 points collected over just three days in 2018.

The dataset, delivered through a Freedom of Information Act request, also outlines the agencies’ attempts to justify the bulk data purchases. Officials maintained that users voluntarily offered the data, and that it included no personally identifying information. As TechCrunch explains, though, that’s not necessarily accurate. Phone owners aren’t necessarily aware they opted in to location sharing, and likely didn’t realize the government was buying that data. Moreover, the data was still tied to specific devices — it wouldn’t have been difficult for agents to link positions to individuals.

Some Homeland Security workers expressed internal concerns about the location data. One senior director warned that the Office of Science and Technology bought Venntel info without getting a necessary Privacy Threshold Assessment. At one point, the department even halted all projects using Venntel data after learning that key legal and privacy questions had gone unanswered.

More details could be forthcoming, as Homeland Security is still expected to provide more documents in response to the FOIA request. We’ve asked Homeland Security and Venntel for comment. However, the ACLU report might fuel legislative efforts to ban these kinds of data purchases, including the Senate’s bipartisan Fourth Amendment is Not For Sale Act as well as the more recently introduced Health and Location Data Protection Act.

Source: Records reveal the scale of Homeland Security’s phone location data purchases | Engadget

Amazon Ring Tells Sen. Markey It Won’t Enhance Doorbell Privacy, will listen in to long-range conversations

Ring is rejecting the request of a U.S. senator to introduce privacy-enhancing changes to its flagship doorbell video camera after product testing showed the device capable of recording conversations well beyond the doorsteps of its many millions of customers. Security and privacy experts expressed alarm at the quality of the distant recordings, raising concerns about the potential for blackmail, stalking, and other forms of invasion.

In a letter to the company last month, Sen. Ed Markey, a Democrat of Massachusetts, said Ring was capturing “significant amounts of audio on private and public property adjacent to dwellings with Ring doorbells,” putting the right to “assemble, move, and converse without being tracked” at risk.

Markey did not ask the company to adjust the range of the device, but to change the doorbell’s settings so audio wouldn’t be recorded by default. Ring, which was acquired by retail giant Amazon in 2018, rejected the idea, arguing that doing so would be a “negative experience” for customers, who might easily get confused by the settings “in an emergency situation.” What’s more, Ring appeared to reject a request never to link the devices to voice recognition software, offering only that it hasn’t done so thus far.

Experts such as Matthew Guariglia, a policy analyst at the Electronic Frontier Foundation, have said the device is particularly harmful to the privacy of individuals who live in close quarters — think apartment buildings and condos — where they may be unknowingly recorded the moment they open their doors.

[…]

Source: Amazon Ring Tells Sen. Markey It Won’t Enhance Doorbell Privacy

Amazon’s Ring gave a record amount of doorbell footage to the US government in 2021

Ring, the maker of internet-connected video doorbells and security cameras, said in its latest transparency report that it turned over a record amount of doorbell footage and other information to U.S. authorities last year.

The Amazon-owned company said in two biannual reports covering 2021 that it received 3,147 legal demands, an increase of about 65% over the roughly 1,900 legal demands it received in 2020.

More than 85% of the legal demands processed were by way of court-issued search warrants, allowing Ring to turn over both information about a Ring user and video footage from those accounts. Ring said it turned over user content in response to about four out of 10 demands it received during the year.

Transparency reports allow U.S. companies to disclose the number of legal orders they receive over a particular period, often six months or a year. But Ring has been criticized for having unusually cozy relationships with about 2,200 police departments around the United States, according to the latest figures, which allow police to request video doorbell camera footage from homeowners.

Ring said it also notified 648 users during the year that their user information had been requested by law enforcement. According to its law enforcement guidelines, Ring notifies users before disclosing their user information, such as name, address, email address and billing information, unless it is prohibited by way of a secrecy order.

In a new breakout, Ring also revealed it received 2,774 preservation orders, which allow police departments and law enforcement agencies to ask (not demand) that Amazon preserve a user’s account for up to six months while the requesting agency gathers enough information to obtain a court-issued order, such as a search warrant.

Amazon executive Brian Huseman told lawmakers in a letter published Wednesday that Ring shared doorbell footage at least 11 times with U.S. authorities so far in 2022 without the consent of the device’s owner, reports Politico. According to the letter, Amazon said it “made a good-faith determination that there was an imminent danger of death or serious physical injury to a person requiring disclosure of information without delay.” Under emergency disclosure orders, companies can respond with data when a requesting agency doesn’t have the time to obtain a court order.

Ring has not yet revealed how many times it has disclosed user data under emergency circumstances in previous years, including its most recent transparency report.

Source: Amazon’s Ring gave a record amount of doorbell footage to the government in 2021 | TechCrunch

China’s cyberspace regulator details data export rules

[…]

The Cyberspace Administration of China’s (CAC) policy was first floated in October 2021 and requires businesses that transfer data offshore to conduct a security review. The requirements kick in when an organization transfers data describing more than 100,000 individuals, or information about critical infrastructure – including that related to communications, finance and transportation. Sensitive data such as fingerprints also trigger the requirement, at a threshold of 10,000 sets of prints.

A Thursday announcement added a detail to the policy: the cutoff date after which the CAC will start counting towards the 100,000 and 10,000 thresholds. Oddly, that date is January 1 … of 2021.
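The thresholds described above can be sketched as a simple check. This is a hypothetical illustration only; the CAC’s actual assessment criteria are broader than three booleans.

```python
def needs_security_review(individuals: int, fingerprint_sets: int,
                          is_critical_infrastructure: bool) -> bool:
    """Rough sketch of the CAC's data-export review triggers as described
    above: data on more than 100,000 individuals, more than 10,000 sets of
    sensitive data such as fingerprints, or any critical-infrastructure
    data. Counts accrue from January 1, 2021."""
    return (individuals > 100_000
            or fingerprint_sets > 10_000
            or is_critical_infrastructure)
```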

A state official explained in Chinese state-owned media on Thursday that the efforts were necessary due to the digital economy expanding cross-border data activities, and that differences in international legal systems have increased data export security risks, thereby affecting national security and social interest.

The official detailed that the security review should occur prior to signing a contract that includes exporting data overseas. Any approved data export will be valid for two years, at which point the entity must apply again.

[…]

Source: China’s cyberspace regulator details data export rules • The Register

UK + 3 EU countries sign US border deal to share police biometric database

[…]

LIBE committee member and Pirate Party MEP Patrick Breyer said that during the meeting last week, the committee discovered that the UK – and three EU member states, though their identities were not revealed – had already signed up to reintroduce US visa requirements which grant access to police biometric databases.

In the UK, the Home Office declined the opportunity to deny it was signing up for the scheme. A spokesperson said: “The UK has a long-standing and close partnership with the USA which includes sharing data for specific purposes. We are in regular discussion with them on new proposals or initiatives to improve public safety and enable legitimate travel.”

Under UK law the police can retain an individual’s DNA profile and fingerprint record for up to three years from the date the samples were taken, even if the individual was arrested but not charged, provided the Biometrics Commissioner agrees. Police can also apply for a two-year extension. The same applies to those charged, but not convicted.

According to reports, the US Enhanced Border Security Partnership (EBSP) initiative will be voluntary initially but is set to become mandatory under the US Visa Waiver Program (VWP), which allows visa-free entry into the United States for up to 90 days, by 2027.

MEP Breyer said that when asked exactly what data the US wanted to tap into, the answer was as much as possible. When asked what would happen at US borders if a traveler was known to the police in participating states, it was said that this would be decided by the US immigration officer on a case-by-case basis.

[…]

“If necessary, the visa waiver program must be terminated by Europe as well. Millions of innocent Europeans are listed in police databases and could be exposed to completely disproportionate reactions in the USA.

“The US lacks adequate data and fundamental rights protection. Providing personal data to the US exposes our citizens… to the risk of arbitrary detention and false suspicion, with possible dire consequences, in the course of the US ‘war on terror’. We must protect our citizens from these practices,” Breyer said.

Source: UK signs US border deal to share police biometric database • The Register

T-Mobile Is Selling Your App and Web History to Advertisers, allowing extremely fine personal targeting (they say)

In yet another example of T-Mobile being The Worst with its customers’ data, the company announced a new money-making scheme this week: selling its customers’ app download data and web browsing history to advertisers.

The package of data is part of the company’s new “App Insights” adtech product that was in beta for the last year but formally rolled out this week. According to AdExchanger, which first reported news of the announcement from the Cannes Festival, the new product will let marketers track and target T-Mobile customers based on the apps they’ve downloaded and their “engagement patterns” – meaning when and how they use those apps.

These same “patterns” also include the types of domains a person visits in their mobile web browser. All of this data gets bundled up into what the company calls “personas,” which let marketers microtarget someone by their phone habits. One example T-Mobile’s head of ad products, Jess Zhu, gave AdExchanger: a person with a human resources app on their phone who also tends to visit, say, Expedia’s website might be grouped as a “business traveler.” The company noted that there are no personas built on “gender or cultural identity” – so a person who visits a lot of, say, Christian websites and has a Bible app or two installed won’t be profiled based on that.

“App Insights transforms this data into actionable insights. Marketers can see app usage, growth, and retention and compare activity between brands and product categories,” a T-Mobile statement read.

T-Mobile (and Sprint, by association) certainly aren’t the only carriers pawning off this data; as Ars Technica first noted last year, Verizon overrode customers’ privacy preferences to sell off their browsing and app-usage data. And while AT&T had initially planned to sell access to similar data nearly a decade ago, the company currently claims that it exclusively uses “non-sensitive information” like your age range and zip code to serve up targeted ads.

But T-Mobile also won’t stop marketers from taking things into their own hands. One ad agency exec that spoke with AdExchanger said that one of the “most exciting” things about this new ad product is the ability to microtarget members of the LGBTQ community. Sure, that’s not one of the prebuilt personas offered in the App Insights product, “but a marketer could target phones with Grindr installed, for example, or use those audiences for analytics,” the original interview notes.

[…]

Source: T-Mobile Is Hawking Your App and Web History to Advertisers

Valorant will start listening in to and recording your voice chat in July

Riot Games will begin background evaluation of recorded in-game voice communications on July 13th in North America, in English. In a brief statement, Riot said that the purpose of the recording is ultimately to “collect clear evidence that could verify any violations of behavioral policies.”

For now, however, recordings will be used to develop the evaluation system that may eventually be implemented. That means training some kind of language model using the recordings, says Riot, to “get the tech in a good enough place for a beta launch later this year.”

Riot also makes clear that voice evaluation from this test will not be used for reports. “We know that before we can even think of expanding this tool, we’ll have to be confident it’s effective, and if mistakes happen, we have systems in place to make sure we can correct any false positives (or negatives for that matter),” said Riot.

Source: Valorant will start listening to your voice chat in July | PC Gamer

Oh, not used for reports. That’s ok then. No problem invading your privacy there then.

Coinbase Is Selling Data on Crypto and ‘Geotracking’ to ICE

Coinbase Tracer, the analytics arm of the cryptocurrency exchange Coinbase, has signed a contract with U.S. Immigration and Customs Enforcement that would allow the agency access to a variety of features and data caches, including “historical geo tracking data.”

Coinbase Tracer, according to the website, is for governments, crypto businesses, and financial institutions. It allows these clients the ability to trace transactions within the blockchain. It is also used to “investigate illicit activities including money laundering and terrorist financing” and “screen risky crypto transactions to ensure regulatory compliance.”

The deal was originally signed in September 2021, but the contract was only now obtained by watchdog group Tech Inquiry. The deal was made for a maximum of $1.37 million, and we knew at the time that this was a three-year contract for Coinbase’s analytics software. The newly revealed contract lets us look more closely into what this deal entails.

This deal will allow ICE to track transactions made through twelve different currencies, including Ethereum, Tether, and Bitcoin. Other features include “Transaction demixing and shielded transaction analysis,” which appears to be aimed at preventing users from laundering funds or hiding transactions. Another is “Multi-hop link analysis for incoming and outgoing funds,” which would give ICE insight into the transfer of the currencies. The most mysterious is access to “historical geo tracking data,” and ICE gave a little insight into how this tool may be used.

[…]

Source: Coinbase Is Selling Data on Crypto and ‘Geotracking’ to ICE

New Firefox privacy feature strips URLs of tracking parameters

Numerous companies, including Facebook, Marketo, Olytics, and HubSpot, utilize custom URL query parameters to track clicks on links.

For example, Facebook appends a fbclid query parameter to outbound links to track clicks, with an example of one of these URLs shown below.

https://www.example.com/?fbclid=IwAR4HesRZLT-fxhhh3nZ7WKsOpaiFzsg4nH0K4WLRHw1h467GdRjaLilWbLs

With the release of Firefox 102, Mozilla has added the new ‘Query Parameter Stripping’ feature that automatically strips various query parameters used for tracking from URLs when you open them, whether that be by clicking on a link or simply pasting the URL into the address bar.

Once enabled, Mozilla Firefox will now strip the following tracking parameters from URLs when you click on links or paste a URL into the address bar:

  • Olytics: oly_enc_id=, oly_anon_id=
  • Drip: __s=
  • Vero: vero_id=
  • HubSpot: _hsenc=
  • Marketo: mkt_tok=
  • Facebook: fbclid=, mc_eid=
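For illustration, the list above can be applied with a few lines of Python (a sketch only; Firefox’s built-in strip list lives in its own preferences and may differ from what’s shown here):

```python
from urllib.parse import urlparse, parse_qsl, urlencode, urlunparse

# The tracking parameters listed above
TRACKING_PARAMS = {
    "oly_enc_id", "oly_anon_id",  # Olytics
    "__s",                        # Drip
    "vero_id",                    # Vero
    "_hsenc",                     # HubSpot
    "mkt_tok",                    # Marketo
    "fbclid", "mc_eid",           # Facebook
}

def strip_tracking_params(url: str) -> str:
    """Remove known tracking query parameters, keeping all other parameters."""
    parts = urlparse(url)
    kept = [(k, v) for k, v in parse_qsl(parts.query, keep_blank_values=True)
            if k not in TRACKING_PARAMS]
    return urlunparse(parts._replace(query=urlencode(kept)))
```

Applied to the fbclid example URL earlier, this drops the tracking parameter and leaves `https://www.example.com/`; non-tracking parameters such as `?page=2` pass through untouched.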

[…]

To enable Query Parameter Stripping, go into the Firefox Settings, click on Privacy & Security, and then change ‘Enhanced Tracking Protection’ to ‘Strict.’

[Image: Mozilla Firefox’s Enhanced Tracking Protection set to Strict. Source: BleepingComputer]

However, these tracking parameters will not be stripped in Private Mode even with Strict mode enabled.

To also enable the feature in Private Mode, enter about:config in the address bar, search for strip, and set the ‘privacy.query_stripping.enabled.pbmode’ option to true, as shown below.

[Image: Enabling the privacy.query_stripping.enabled.pbmode setting. Source: BleepingComputer]

It should be noted that setting Enhanced Tracking Protection to Strict could cause issues when using particular sites.

If you enable this feature and find that sites are not working correctly, just set it back to Standard (disables this feature) or the Custom setting, which will require some tweaking.

Source: New Firefox privacy feature strips URLs of tracking parameters

Spain, Austria not convinced location data is personal

[…]

EU privacy group NOYB (None of your business), set up by privacy warrior Max “Angry Austrian” Schrems, said on Tuesday it appealed a decision of the Spanish Data Protection Authority (AEPD) to support Virgin Telco’s refusal to provide the location data it has stored about a customer.

In Spain, according to NOYB, the government still requires telcos to record the metadata of phone calls, text messages, and cell tower connections, despite Court of Justice of the EU (CJEU) decisions that prohibit such data retention.

A Spanish customer demanded that Virgin reveal his personal data, as allowed under the GDPR. Article 15 of the GDPR guarantees individuals the right to obtain their personal data from companies that process and store it.

[…]

Virgin, however, refused to provide the customer’s location data, arguing that only law enforcement authorities may demand that information, and a complaint was filed in December 2021. The AEPD sided with the company.

NOYB says that Virgin Telco failed to explain why Article 15 should not apply since the law contains no such limitation.

“The fundamental right to access is comprehensive and clear: users are entitled to know what data a company collects and processes about them – including location data,” argued Felix Mikolasch, a data protection attorney at NOYB, in a statement. “This is independent from the right of authorities to access such data. In this case, there is no relevant exception from the right to access.”

[…]

The group said it filed a similar appeal last November in Austria, where that country’s data protection authority similarly supported Austrian mobile provider A1’s refusal to turn over customer location data. In that case, A1’s argument was that location data should not be considered personal data because someone else could have used the subscriber phone that generated it.

[…]

Location data is potentially worth billions. According to Fortune Business Insights, the location analytics market is expected to bring in $15.76 billion in 2022 and $43.97 billion by 2029.

Outside the EU, the problem is the availability of location data, rather than lack of access. In the US, where there’s no federal data protection framework, the government is a major buyer of location data – it’s more convenient than getting a warrant.

And companies that can obtain location data, often through mobile app SDKs, appear keen to monetize it.

In 2020, the FCC fined the four largest wireless carriers in the US for failing to protect customer location data in accordance with a 2018 commitment to do so.

Source: Spain, Austria not convinced location data is personal • The Register

Facebook and Anti-Abortion Clinics Are Collecting Highly Sensitive Info on Would-Be Patients

Facebook is collecting ultra-sensitive personal data about abortion seekers and enabling anti-abortion organizations to use that data as a tool to target and influence people online, in violation of its own policies and promises.

In the wake of a leaked Supreme Court opinion signaling the likely end of nationwide abortion protections, privacy experts are sounding alarms about all the ways people’s data trails could be used against them if some states criminalize abortion.

A joint investigation by Reveal from The Center for Investigative Reporting and The Markup found that the world’s largest social media platform is already collecting data about people who visit the websites of hundreds of crisis pregnancy centers, which are quasi-health clinics, mostly run by religiously aligned organizations whose mission is to persuade people to choose an option other than abortion.

[…]

Reveal and The Markup have found Facebook’s code on the websites of hundreds of anti-abortion clinics. Using Blacklight, a Markup tool that detects cookies, keyloggers and other types of user-tracking technology on websites, Reveal analyzed the sites of nearly 2,500 crisis pregnancy centers – with data provided by the University of Georgia – and found that at least 294 shared visitor information with Facebook. In many cases, the information was extremely sensitive – for example, whether a person was considering abortion or looking to get a pregnancy test or emergency contraceptives.
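A crude version of the kind of check a tool like Blacklight performs can be sketched by scanning a page’s HTML for known Meta Pixel signatures. The patterns below are illustrative assumptions; Blacklight’s actual detection instruments a real browser and is far more thorough.

```python
import re

# Signatures commonly associated with the Meta/Facebook Pixel
# (illustrative only -- a real scanner checks network traffic too)
PIXEL_SIGNATURES = [
    r"connect\.facebook\.net/.+?/fbevents\.js",  # pixel loader script URL
    r"facebook\.com/tr\?",                        # tracking beacon endpoint
    r"\bfbq\(\s*['\"]init['\"]",                  # JS pixel init call
]

def page_has_facebook_pixel(html: str) -> bool:
    """Return True if the page source matches any known pixel signature."""
    return any(re.search(sig, html) for sig in PIXEL_SIGNATURES)
```

A scan along these lines, run against a clinic site, would flag the pixel code that forwards visitor events back to Facebook.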

[…]

Source: Facebook and Anti-Abortion Clinics Are Collecting Highly Sensitive Info on Would-Be Patients – Reveal

Testing firm Cignpost can profit from sale of Covid swabs with customer DNA

A large Covid-19 testing provider is being investigated by the UK’s data privacy watchdog over its plans to sell swabs containing customers’ DNA for medical research.

Source: Testing firm can profit from sale of Covid swabs | News | The Sunday Times

Find You: an AirTag that Apple’s unwanted-tracking protections can’t find

[…]

In one exemplary stalking case, a fashion and fitness model discovered an AirTag in her coat pocket after having received a tracking warning notification from her iPhone. Other times, AirTags were placed in expensive cars or motorbikes to track them from parking spots to their owner’s home, where they were then stolen.

On February 10, Apple addressed this by publishing a news statement titled “An update on AirTag and unwanted tracking” in which they describe the way they are currently trying to prevent AirTags and the Find My network from being misused and what they have planned for the future.

[…]

Apple needs to incorporate non-genuine AirTags into their threat model, thus implementing security and anti-stalking features into the Find My protocol and ecosystem instead of in the AirTag itself, which can run modified firmware or not be an AirTag at all (Apple devices currently have no way to distinguish genuine AirTags from clones via Bluetooth).

The source code used for the experiment can be found here.

Edit: I have been made aware of a research paper titled “Who Tracks the Trackers?” (from November 2021) that also discusses this idea and includes more experiments. Make sure to check it out as well if you’re interested in the topic!

[…]

Now Amazon to put creepy AI cameras in UK delivery vans

Amazon is installing AI-powered cameras in delivery vans to keep tabs on its drivers in the UK.

The technology was first deployed in the US, where numerous malfunctions reportedly cost drivers their bonuses. Last year, the internet giant produced a corporate video detailing how the cameras monitor drivers’ driving behavior for safety reasons. The same system is now being rolled out to vehicles in the UK.

Multiple cameras are placed under the front mirror. One is directed at the person behind the wheel, one faces the road, and two are located on either side to provide a wider view. The cameras do not record constant video, and are monitored by software built by Netradyne, a computer-vision startup focused on driver safety. This code uses machine-learning algorithms to figure out what’s going on in and around the vehicle. Delivery drivers can also activate the cameras to record footage if they want to, such as if someone’s trying to rob them or run them off the road. There is no microphone, for what it’s worth.

Audio alerts are triggered by some behaviors, such as if a driver fails to brake at a stop sign or is driving too fast. Other actions are silently logged, such as if the driver doesn’t wear a seat-belt or if a camera’s view is blocked. Amazon, reportedly in the US at least, records workers and calculates from their activities a score that affects their pay; drivers have previously complained of having bonuses unfairly deducted for behavior the computer system wrongly classified as reckless.

[…]

Source: Now Amazon to put ‘creepy’ AI cameras in UK delivery vans • The Register

Twitter fined $150 million after selling 2FA phone numbers and email addresses to advertisers for targeting

Twitter has agreed to pay a $150 million fine after federal law enforcement officials accused the social media company of illegally using peoples’ personal data over six years to help sell targeted advertisements.

In court documents made public on Wednesday, the Federal Trade Commission and the Department of Justice say Twitter violated a 2011 agreement with regulators in which the company vowed to not use information gathered for security purposes, like users’ phone numbers and email addresses, to help advertisers target people with ads.

Federal investigators say Twitter broke that promise.

“As the complaint notes, Twitter obtained data from users on the pretext of harnessing it for security purposes but then ended up also using the data to target users with ads,” said FTC Chair Lina Khan.

Twitter requires users to provide a telephone number and email address to authenticate accounts. That information also helps people reset their passwords and unlock their accounts when the company blocks logging in due to suspicious activity.

But until at least September 2019, Twitter was also using that information to boost its advertising business by allowing advertisers access to users’ phone numbers and email addresses. That ran afoul of the agreement the company had with regulators.

[…]

Source: Twitter will pay a $150 million fine over accusations it improperly sold user data : NPR

Clearview AI Ordered to Purge U.K. Face Scans, Pay GBP 7.5m Fine

The United Kingdom has had it with creepy facial recognition firm Clearview AI. Under a new enforcement rule from the U.K.’s Information Commissioner’s Office, Clearview must cease the collection and use of publicly available U.K. data and delete all data of U.K. residents from its database. The order, which also requires the company to pay a £7,552,800 ($9,507,276) fine, effectively calls on Clearview to purge U.K. residents from its massive face database, reportedly consisting of over 20 billion images scraped from publicly available social media sites.

The ICO ruling, which determined Clearview violated U.K. privacy laws, comes on the heels of a multi-year joint investigation with the Australian Information Commissioner. According to the ruling, Clearview failed to use U.K. residents’ data in a way that was fair and transparent and failed to provide a lawful reason for collecting the data in the first place. Clearview also failed, the ICO notes, to put measures in place to stop U.K. residents’ data from being retained indefinitely, and supposedly didn’t meet the higher data protection standards outlined in the EU’s General Data Protection Regulation.

[…]

Source: Clearview AI Ordered to Purge U.K. Face Scans, Pay Fine

Your data’s auctioned off up to 987 times a day, NGO reports

The average American has their personal information shared in an online ad bidding war 747 times a day. For the average EU citizen, that number is 376 times a day. In one year, 178 trillion instances of the same bidding war happen online in the US and EU.

That’s according to data shared by the Irish Council on Civil Liberties in a report detailing the extent of real-time bidding (RTB), the technology that drives almost all online advertising and which it said relies on sharing of personal information without user consent.

The RTB industry was worth more than $117 billion last year, the ICCL report said. As with all things in its study, those numbers only apply to the US and Europe, which means the actual value of the market is likely much higher.

Real-time bidding involves the sharing of information about internet users, and it happens whenever a user lands on a website that serves ads. Information shared with advertisers can include nearly anything that would help them better target ads, and those advertisers bid on the ad space based on the information the ad network provides.

That data can be practically anything based on the Interactive Advertising Bureau’s (IAB) audience taxonomy. The basics, of course, like age, sex, location, income and the like are included, but it doesn’t stop there. All sorts of websites fingerprint their visitors – even charities treating mental health conditions – and those fingerprints can later be used to target ads on unrelated websites.
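The bid request broadcast in each of those auctions can be sketched as a simplified, OpenRTB-style payload. The field names below follow the public OpenRTB convention, but the broker name, segment IDs, and all values are invented for illustration:

```javascript
// Illustrative only: a simplified bid request loosely modeled on the
// OpenRTB format. Field names follow the OpenRTB convention, but the
// data-provider name and audience segment IDs are invented.
function buildBidRequest(user, page) {
  return {
    id: "req-" + Math.random().toString(36).slice(2), // unique auction ID
    site: { page: page },                      // URL the user is visiting
    device: {
      ip: user.ip,                             // network address
      geo: { lat: user.lat, lon: user.lon },   // location
      ua: user.userAgent,                      // fingerprinting input
    },
    user: {
      id: user.cookieId,                       // cross-site identifier
      data: [{
        name: "example-dmp.invalid",           // hypothetical data broker
        segment: user.segments.map((s) => ({ id: s })), // IAB-style audience segments
      }],
    },
  };
}

const request = buildBidRequest(
  { ip: "203.0.113.7", lat: 52.1, lon: 4.3, userAgent: "Mozilla/5.0",
    cookieId: "abc123", segments: ["age-25-34", "income-high"] },
  "https://example.com/article"
);
console.log(JSON.stringify(request.user.data[0].segment));
```

Every bidder in the auction receives a payload like this whether or not it wins the ad slot, which is why the ICCL counts each broadcast as a separate share of personal data.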

Google owns the largest ad network that was included in the ICCL’s report, and it alone offers RTB data to 4,698 companies in just the US. Other large advertising networks include Xandr, owned by Microsoft since late 2021, Verizon, PubMatic and more.

Not included in the ICCL’s report are Amazon’s or Facebook’s RTB networks, as the industry figures it used don’t cover their ad networks. Along with only surveying part of the world, that likely means the scope of the RTB industry is, again, much larger.

Also, it’s probably illegal

The ICCL describes RTB as “the biggest data breach ever recorded,” but even that may be giving advertisers too much credit: Calling freely-broadcast RTB data a breach implies action was taken to bypass defenses, of which there aren’t any.

So, is RTB violating any laws at all? Yes, claims Gartner Privacy Research VP Nader Henein. He told The Register that the adtech industry justifies its use of RTB under the “legitimate interest” provision of the EU’s General Data Protection Regulation (GDPR).

“Multiple regulators have rejected that assessment, so the answer would be ‘yes,’ it is a violation [of the GDPR],” Henein opined.

As far back as 2019, Google and other adtech giants were accused by the UK’s data watchdog of knowingly breaking the law by using RTB, a case it continues to investigate. Earlier this year, the Belgian data protection authority ruled that RTB practices violated the GDPR and required organizations working with the IAB to delete all data collected through the use of TC strings, the coded consent signals passed around in the RTB process.

[…]

Source: Privacy. Ad bidders haven’t heard of it, report reveals

New EU rules would require chat apps to scan private messages for child abuse

The European Commission has proposed controversial new regulation that would require chat apps like WhatsApp and Facebook Messenger to selectively scan users’ private messages for child sexual abuse material (CSAM) and “grooming” behavior. The proposal is similar to plans mooted by Apple last year but, say critics, much more invasive.

After a draft of the regulation leaked earlier this week, privacy experts condemned it in the strongest terms. “This document is the most terrifying thing I’ve ever seen,” tweeted cryptography professor Matthew Green. “It describes the most sophisticated mass surveillance machinery ever deployed outside of China and the USSR. Not an exaggeration.”

Jan Penfrat of digital advocacy group European Digital Rights (EDRi) echoed the concern, saying, “This looks like a shameful general #surveillance law entirely unfitting for any free democracy.” (A comparison of the PDFs shows differences between the leaked draft and final proposal are cosmetic only.)

The regulation would establish a number of new obligations for “online service providers” — a broad category that includes app stores, hosting companies, and any provider of “interpersonal communications service.”

The most extreme obligations would apply to communications services like WhatsApp, Signal, and Facebook Messenger. If a company in this group receives a “detection order” from the EU they would be required to scan select users’ messages to look for known child sexual abuse material as well as previously unseen CSAM and any messages that may constitute “grooming” or the “solicitation of children.” These last two categories of content would require the use of machine vision tools and AI systems to analyze the context of pictures and text messages.

[…]

“The proposal creates the possibility for [the orders] to be targeted but doesn’t require it,” Ella Jakubowska, a policy advisor at EDRi, told The Verge. “It completely leaves the door open for much more generalized surveillance.”

[…]

Source: New EU rules would require chat apps to scan private messages for child abuse – The Verge

Web ad firms scrape email addresses before you press the submit button

Tracking, marketing, and analytics firms have been exfiltrating the email addresses of internet users from web forms prior to submission and without user consent, according to security researchers.

Some of these firms are said to have also inadvertently grabbed passwords from these forms.

In a research paper scheduled to appear at the Usenix ’22 security conference later this year, authors Asuman Senol (imec-COSIC, KU Leuven), Gunes Acar (Radboud University), Mathias Humbert (University of Lausanne) and Frederik Zuiderveen Borgesius (Radboud University) describe how they measured data handling in web forms on the top 100,000 websites, as ranked by research site Tranco.

The boffins created their own software to measure email and password data gathering from web forms – structured web input boxes through which site visitors can enter data and submit it to a local or remote application.

Providing information through a web form by pressing the submit button generally indicates the user has consented to provide that information for a specific purpose. But web pages, because they run JavaScript code, can be programmed to respond to events prior to a user pressing a form’s submit button.

And many companies involved in data gathering and advertising appear to believe that they’re entitled to grab the information website visitors enter into forms with scripts before the submit button has been pressed.
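The mechanism is simple: a third-party script listens for events such as “blur” and reads field values before any submit happens. In the sketch below, the tiny FakeField class stands in for a DOM input element so the example runs outside a browser; in a real page the script would call input.addEventListener("blur", …) instead. This is an illustration of the general technique, not any specific tracker’s code.

```javascript
// Regex deciding whether a captured value looks like an email address.
const EMAIL_RE = /^[^\s@]+@[^\s@]+\.[^\s@]+$/;

// Minimal stand-in for a DOM <input>, so the sketch runs without a browser.
class FakeField {
  constructor() { this.listeners = []; this.value = ""; }
  addEventListener(type, fn) { if (type === "blur") this.listeners.push(fn); }
  type(text) { this.value = text; }                    // user typing
  blur() { this.listeners.forEach((fn) => fn(this)); } // focus leaves field
}

const exfiltrated = [];
function attachTracker(field) {
  field.addEventListener("blur", (f) => {
    // Grab anything that looks like an email address: before submit, and
    // regardless of whether the form is ever submitted at all.
    if (EMAIL_RE.test(f.value)) exfiltrated.push(f.value);
  });
}

const field = new FakeField();
attachTracker(field);
field.type("user@example.com");
field.blur();             // user tabs to the next field; no submit yet
console.log(exfiltrated); // [ 'user@example.com' ]
```

The same listener pattern is what session-replay scripts generalize: instead of one regex on blur, they record every keystroke and mouse movement.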

[…]

“Furthermore, we find incidental password collection on 52 websites by third-party session replay scripts,” the researchers say.

Replay scripts are designed to record keystrokes, mouse movements, scrolling behavior, other forms of interaction, and webpage contents in order to send that data to marketing firms for analysis. In an adversarial context, they’d be called keyloggers or malware; but in the context of advertising, somehow it’s just session-replay scripts.

[…]

Source: Web ad firms scrape email addresses before you know it • The Register

Indian Government Now Wants VPNs To Collect And Turn Over Personal Data On Users

The government of India still claims to be a democracy, but its decade-long assault on the internet and the rights of its citizens suggests it would rather be an autocracy.

The country is already host to one of the largest biometric databases in the world, housing information collected from nearly every one of its 1.2 billion citizens. And it’s going to be expanded, adding even more biometric markers from people arrested and detained.

The government has passed laws shifting liability for third-party content to service providers, as well as requiring them to provide 24/7 assistance to the Indian government for the purpose of removing “illegal” content. Then there are mandates on compelled access — something that would require broken/backdoored encryption. (The Indian government — like others demanding encryption backdoors — refuses to acknowledge this is what it’s seeking.)

In the name of cybersecurity, the Indian government is now seeking to further undermine the privacy of its citizens.

[…]

The new directions issued by CERT-In also require virtual asset, virtual asset exchange, and custodian wallet providers to maintain KYC records and records of financial transactions for a period of five years. Companies providing cloud services and virtual private networks (VPNs) will also have to register the validated names, email addresses, and IP addresses of subscribers.

Taking the “P” out of “VPN”: that’s the way forward for the Indian government, which has apparently decided to emulate China’s strict control of internet use. And it’s yet another way the Indian government is stripping citizens of their privacy and anonymity. The government of India wants to know everything about its constituents while remaining vague and opaque about its own actions and goals.

Source: Indian Government Now Wants VPNs To Collect And Turn Over Personal Data On Users | Techdirt

Hackers are reportedly using emergency data requests to extort women and minors

In response to fraudulent legal requests, companies like Apple, Google, Meta and Twitter have been tricked into sharing sensitive personal information about some of their customers. We knew that was happening as recently as last month when Bloomberg published a report on hackers using fake emergency data requests to carry out financial fraud. But according to a newly published report from the outlet, some malicious individuals are also using the same tactics to target women and minors with the intent of extorting them into sharing sexually explicit images and videos of themselves.

It’s unclear how many fake data requests the tech giants have fielded, since the requests appear to come from legitimate law enforcement agencies. But what makes the requests particularly effective as an extortion tactic is that the victims have no way of protecting themselves other than by not using the services offered by those companies.

[…]

Part of what has allowed the fake requests to slip through is that they abuse how the industry typically handles emergency appeals. Among most tech companies, it’s standard practice to share a limited amount of information with law enforcement in response to “good faith” requests related to situations involving imminent danger.

Typically, the information shared in those instances includes the name of the individual, their IP, email and physical address. That might not seem like much, but it’s usually enough for bad actors to harass, dox or SWAT their target. According to Bloomberg, there have been “multiple instances” of police showing up at the homes and schools of underage women.

[…]

Source: Hackers are reportedly using emergency data requests to extort women and minors | Engadget

Brave’s De-AMP feature bypasses harmful Google AMP pages

Brave announced a new feature for its browser on Tuesday: De-AMP, which automatically jumps past any page rendered with Google’s Accelerated Mobile Pages framework and instead takes users straight to the original website. “Where possible, De-AMP will rewrite links and URLs to prevent users from visiting AMP pages altogether,” Brave said in a blog post. “And in cases where that is not possible, Brave will watch as pages are being fetched and redirect users away from AMP pages before the page is even rendered, preventing AMP / Google code from being loaded and executed.”

Brave framed De-AMP as a privacy feature and didn’t mince words about its stance toward Google’s version of the web. “In practice, AMP is harmful to users and to the Web at large,” Brave’s blog post said, before explaining that AMP gives Google even more knowledge of users’ browsing habits, confuses users, and can often be slower than normal web pages. And it warned that the next version of AMP — so far just called AMP 2.0 — will be even worse.

Brave’s stance is a particularly strong one, but the tide has turned hard against AMP over the last couple of years. Google originally created the framework in order to simplify and speed up mobile websites, and AMP is now managed by a group of open-source contributors. It was controversial from the very beginning and smelled to some like Google trying to exert even more control over the web. Over time, more companies and users grew concerned about that control and chafed at the idea that Google would prioritize AMP pages in search results. Plus, the rest of the internet eventually figured out how to make good mobile sites, which made AMP — and similar projects like Facebook Instant Articles — less important.

A number of popular apps and browser extensions make it easy for users to skip over AMP pages, and in recent years, publishers (including The Verge’s parent company Vox Media) have moved away from using it altogether. AMP has even become part of the antitrust fight against Google: a lawsuit alleged that AMP helped centralize Google’s power as an ad exchange and that Google made non-AMP ads load slower.

[…]

Source: Brave’s De-AMP feature bypasses ‘harmful’ Google AMP pages – The Verge

Cisco’s Webex phoned home audio telemetry even when muted

Boffins at two US universities have found that muting popular native video-conferencing apps fails to disable device microphones – and that these apps can access audio data while muted, and in at least one case actually do so.

The research is described in a paper titled, “Are You Really Muted?: A Privacy Analysis of Mute Buttons in Video Conferencing Apps,” [PDF] by Yucheng Yang (University of Wisconsin-Madison), Jack West (Loyola University Chicago), George K. Thiruvathukal (Loyola University Chicago), Neil Klingensmith (Loyola University Chicago), and Kassem Fawaz (University of Wisconsin-Madison).

The paper is scheduled to be presented at the Privacy Enhancing Technologies Symposium in July.

[…]

Among the apps studied – Zoom (Enterprise), Slack, Microsoft Teams/Skype, Cisco Webex, Google Meet, BlueJeans, WhereBy, GoToMeeting, Jitsi Meet, and Discord – most presented only limited or theoretical privacy concerns.

The researchers found that all of these apps had the ability to capture audio when the mic is muted but most did not take advantage of this capability. One, however, was found to be taking measurements from audio signals even when the mic was supposedly off.

“We discovered that all of the apps in our study could actively query (i.e., retrieve raw audio) the microphone when the user is muted,” the paper says. “Interestingly, in both Windows and macOS, we found that Cisco Webex queries the microphone regardless of the status of the mute button.”

They found that Webex, every minute or so, sends network packets “containing audio-derived telemetry data to its servers, even when the microphone was muted.”
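The article doesn’t say what Webex’s audio-derived telemetry actually contains, so the following is a hypothetical illustration of the kind of statistic that can be computed from a muted microphone: per-frame RMS energy, enough to infer whether someone is speaking without transmitting raw audio.

```javascript
// Hypothetical example of "audio-derived telemetry": reduce raw samples
// to one number per frame (root-mean-square energy). This is not Webex's
// documented behavior, only an illustration of what "derived" can mean.
function rmsEnergy(samples) {
  const sumSquares = samples.reduce((acc, s) => acc + s * s, 0);
  return Math.sqrt(sumSquares / samples.length);
}

// Split a capture buffer into fixed-size frames and reduce each one.
function audioTelemetry(samples, frameSize) {
  const frames = [];
  for (let i = 0; i + frameSize <= samples.length; i += frameSize) {
    frames.push(rmsEnergy(samples.slice(i, i + frameSize)));
  }
  return frames;
}

const silence = new Array(8).fill(0);
const speech = [0.5, -0.5, 0.5, -0.5, 0.5, -0.5, 0.5, -0.5];
console.log(audioTelemetry(silence.concat(speech), 8)); // [ 0, 0.5 ]
```

Even a stream of numbers this small reveals when a “muted” participant is talking, which is why the researchers treat it as a privacy issue rather than a harmless diagnostic.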

[…]

Worse still from a security standpoint, while other apps encrypted their outgoing data stream before sending it to the operating system’s socket interface, Webex did not.

“Only in Webex were we able to intercept plaintext immediately before it is passed to the Windows network socket API,” the paper says, noting that the app’s monitoring behavior is inconsistent with the Webex privacy policy.

The app’s privacy policy states Cisco Webex Meetings does not “monitor or interfere with you your [sic] meeting traffic or content.”

[…]

Source: Cisco’s Webex phoned home audio telemetry even when muted • The Register