A Dad Took Photos of His Naked Toddler for the Doctor. Google Flagged Him as a Criminal, destroying his digital life with no recourse

It was a Friday night in February 2021. His wife called an advice nurse at their health care provider to schedule an emergency consultation for the next morning, by video because it was a Saturday and there was a pandemic going on. The nurse said to send photos so the doctor could review them in advance.

Mark’s wife grabbed her husband’s phone and texted a few high-quality close-ups of their son’s groin area to her iPhone so she could upload them to the health care provider’s messaging system. In one, Mark’s hand was visible, helping to better display the swelling. Mark and his wife gave no thought to the tech giants that made this quick capture and exchange of digital data possible, or what those giants might think of the images.

[…]

the episode left Mark with a much larger problem, one that would cost him more than a decade of contacts, emails and photos, and make him the target of a police investigation. Mark, who asked to be identified only by his first name for fear of potential reputational harm, had been caught in an algorithmic net designed to snare people exchanging child sexual abuse material.

[…]

“There could be tens, hundreds, thousands more of these,” he said.

Given the toxic nature of the accusations, Callas speculated that most people wrongfully flagged would not publicize what had happened.

“I knew that these companies were watching and that privacy is not what we would hope it to be,” Mark said. “But I haven’t done anything wrong.”

Police agreed. Google did not.

[…]

Two days after taking the photos of his son, Mark’s phone made a blooping notification noise: His account had been disabled because of “harmful content” that was “a severe violation of Google’s policies and might be illegal.” A “learn more” link led to a list of possible reasons, including “child sexual abuse and exploitation.”

Mark was confused at first but then remembered his son’s infection. “Oh, God, Google probably thinks that was child porn,” he thought.

[…]

He filled out a form requesting a review of Google’s decision, explaining his son’s infection. At the same time, he discovered the domino effect of Google’s rejection. Not only did he lose emails, contact information for friends and former colleagues, and documentation of his son’s first years of life; his Google Fi account also shut down, meaning he had to get a new phone number with another carrier. Without access to his old phone number and email address, he couldn’t get the security codes he needed to sign in to other internet accounts, locking him out of much of his digital life.

[…]

A few days after Mark filed the appeal, Google responded that it would not reinstate the account, with no further explanation.

Mark didn’t know it, but Google’s review team had also flagged a video he made and the San Francisco Police Department had already started to investigate him.

[…]

Cassio was in the middle of buying a house, and signing countless digital documents, when his Gmail account was disabled. He asked his mortgage broker to switch his email address, which made the broker suspicious until Cassio’s real estate agent vouched for him.

[…]

In December, Mark received a manila envelope in the mail from the San Francisco Police Department. It contained a letter informing him that he had been investigated as well as copies of the search warrants served on Google and his internet service provider. An investigator, whose contact information was provided, had asked for everything in Mark’s Google account: his internet searches, his location history, his messages and any document, photo and video he’d stored with the company.

The search, related to “child exploitation videos,” had taken place in February, within a week of his taking the photos of his son.

Mark called the investigator, Nicholas Hillard, who said the case was closed. Hillard had tried to get in touch with Mark, but his phone number and email address hadn’t worked.

“I determined that the incident did not meet the elements of a crime and that no crime occurred,” Hillard wrote in his report. Police had access to all the information Google had on Mark and decided it did not constitute child abuse or exploitation.

Mark asked if Hillard could tell Google that he was innocent so he could get his account back.

“You have to talk to Google,” Hillard said, according to Mark. “There’s nothing I can do.”

Mark appealed his case to Google again, providing the police report, but to no avail. After getting a notice two months ago that his account was being permanently deleted, Mark spoke with a lawyer about suing Google and how much it might cost.

“I decided it was probably not worth $7,000,” he said.

[…]

False positives, when people are erroneously flagged, are inevitable given the billions of images being scanned. While most people would probably consider that trade-off worthwhile, given the benefit of identifying abused children, Klonick said companies need a “robust process” for clearing and reinstating innocent people who are mistakenly flagged.

“This would be problematic if it were just a case of content moderation and censorship,” Klonick said. “But this is doubly dangerous in that it also results in someone being reported to law enforcement.”

It could have been worse, she said, with a parent potentially losing custody of a child. “You could imagine how this might escalate,” Klonick said.

Cassio was also investigated by police. A detective from the Houston Police Department called this past fall, asking him to come into the station.

After Cassio showed the detective his communications with the pediatrician, he was quickly cleared. But he, too, was unable to get his decade-old Google account back, despite being a paying user of Google’s web services.

[…]

Source: A Dad Took Photos of His Naked Toddler for the Doctor. Google Flagged Him as a Criminal.

Oracle facing class action over ‘brokering’ personal data of 5 billion people

Oracle is the subject of a class-action suit alleging the software giant created a network containing personal information of hundreds of millions of people and sold the data to third parties.

The case [PDF] is being brought by Johnny Ryan, formerly a policy officer at Brave, maker of the privacy-centric browser, and now part of the Irish Council for Civil Liberties (ICCL), who was behind several challenges to Google, Amazon, and Microsoft’s online advertising businesses.

The ICCL claims Oracle has amassed detailed dossiers on 5 billion people, generating $42.4 billion in annual revenue.

The allegations appear to be based, in part, on an Oracle presentation from 2016 in which Oracle CTO and founder Larry Ellison described how data was collected so businesses could predict purchasing patterns among consumers.

Ellison said at the time [1:15 onward]: “It is a combination of real-time looking at all of their social activity, real-time looking at where they are, including micro-locations – and this is scaring the lawyers [who] are shaking their heads and putting their hands over their eyes – knowing how much time you spend in a specific aisle of a specific store and what is in that aisle of a store. As we collect information about consumers and you combine that with their demographic profile, and their past purchasing behavior, we can do a pretty good job of predicting what they’re going to buy next.”

The ICCL claims Oracle’s dossiers about people include names, home addresses, emails, purchases online and in the real world, physical movements in the real world, income, interests and political views, and a detailed account of online activity.

[…]


Source: Oracle facing class action over ‘brokering’ personal data • The Register

Meta fined $402 million in EU over Instagram’s privacy settings for children

Meta has been fined €405 million ($402 million) by the Irish Data Protection Commission for its handling of children’s privacy settings on Instagram, which violated Europe’s General Data Protection Regulation (GDPR). As Politico reports, it’s the second-largest fine to come out of Europe’s GDPR laws, and the third (and largest) fine levied against Meta by the regulator.

A spokesperson for the DPC confirmed the fine, and said additional details about the decision would be available next week. The fine stems from the photo sharing app’s privacy settings on accounts run by children. The DPC had been investigating Instagram over children’s use of business accounts, which made personal data like email addresses and phone numbers publicly visible. The investigation also covered Instagram’s policy of defaulting all new accounts, including those of teens, to be publicly viewable.

[…]

Source: Meta faces $402 million EU fine over Instagram’s privacy settings for children | Engadget

Major VPN services shut down in India over anti-privacy law

[…]

New rules from India’s Computer Emergency Response Team

India’s Computer Emergency Response Team (CERT) has said that new rules will apply to VPN providers from September 25. These will require services to collect customer names, email addresses, and IP addresses. The data must be retained for at least five years, and handed over to CERT on demand.

This would breach the privacy standards of major VPN services, and be physically impossible for services like NordVPN, which keep no logs as a matter of policy. The company is registered in Panama specifically because there are no data-retention laws there, and no international intelligence sharing.

Major VPN services shut down Indian servers

The Wall Street Journal reports that major VPN services have shut down their Indian servers.

Major global providers of virtual private networks, which let internet users shield their identities online, are shutting down their servers in India to protest new government rules they say threaten their customers’ privacy […]

Such rules are “typically introduced by authoritarian governments in order to gain more control over their citizens,” said a spokeswoman for Nord Security, provider of NordVPN, which has stopped operating its servers in India. “If democracies follow the same path, it has the potential to affect people’s privacy as well as their freedom of speech,” she said […]

Other VPN services that have stopped operating servers in India in recent months are some of the world’s best known. They include U.S.-based Private Internet Access and IPVanish, Canada-based TunnelBear, British Virgin Islands-based ExpressVPN, and Lithuania-based Surfshark.

ExpressVPN said it “refuses to participate in the Indian government’s attempts to limit internet freedom.”

The government’s move “severely undermines the online privacy of Indian residents,” Private Internet Access said.

Customers in India will be able to connect to VPN servers in other countries. This is the same approach taken in Russia and China, where operating servers within those countries would require VPN companies to comply with similar legislation.

[…]

Source: Major VPN services shut down in India over anti-privacy law

FTC Sues Broker Kochava Over Geolocation Data Sales, giving away the data for free for 61m devices

[…] Commissioners voted 4-1 this week to bring a suit against Kochava, Inc., which calls itself the “industry leader for mobile app attribution” and sells mobile geo-location data on hundreds of millions of people. The suit accuses the company of violating the FTC Act, and the agency warns that the company’s business practices could easily be used to unmask the locations of vulnerable individuals—including visitors to reproductive health clinics, homeless and domestic violence shelters, places of worship, and addiction recovery centers.

Kochava, which is based in Idaho, sells “customized data feeds” that can be used to identify and track specific phone users, the FTC said in the suit. Kochava collects this data through a variety of means, then repackages it in large datasets to sell to marketers. The datasets include Mobile Advertising IDs, or MAIDs—the unique identifiers for mobile devices used in targeted advertising—as well as timestamped latitude and longitude coordinates for each device (i.e., the approximate location of the user). The data is ostensibly anonymized, but there are well-known ways to de-anonymize it. The suit claims that Kochava is aware of this, as it has allegedly suggested using its data “to map individual devices to households.”
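
To illustrate why timestamped coordinates tied to a persistent ad ID are hard to treat as anonymous, here is a minimal Python sketch of the kind of “map devices to households” inference the complaint alludes to. The field names, grid size, and night-hours heuristic are assumptions for illustration only, not Kochava’s actual feed format.

# Illustrative sketch only: how timestamped lat/long tied to a persistent
# device ID (MAID) can be reduced to a likely home location. Field names and
# the night-time heuristic are assumptions, not Kochava's actual schema.
from collections import Counter
from datetime import datetime

def likely_home(pings, night_hours=range(0, 6)):
    """pings: iterable of (maid, iso_timestamp, lat, lon) tuples."""
    night_cells = Counter()
    for maid, ts, lat, lon in pings:
        hour = datetime.fromisoformat(ts).hour
        if hour in night_hours:
            # Round to ~100 m grid cells so repeated visits cluster together.
            night_cells[(maid, round(lat, 3), round(lon, 3))] += 1
    homes = {}
    for (maid, lat, lon), count in night_cells.items():
        if count > homes.get(maid, (None, 0))[1]:
            homes[maid] = ((lat, lon), count)
    return {maid: cell for maid, (cell, _) in homes.items()}

sample = [
    ("maid-123", "2022-06-01T02:14:00", 43.4917, -112.0340),
    ("maid-123", "2022-06-02T03:40:00", 43.4918, -112.0341),
    ("maid-123", "2022-06-02T13:05:00", 43.5263, -112.0630),  # daytime, ignored
]
print(likely_home(sample))  # {'maid-123': (43.492, -112.034)}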

Subscribing to Kochava’s feeds typically requires a hefty fee, but the FTC says that, until at least June, Kochava also granted interested users free access to a sample of the data. This “free sample” apparently included the location data of about 61 million mobile devices. Authorities say that there were “only minimal steps and no restrictions on usage” of this freely offered information.

[…]

Source: FTC Sues Broker Kochava Over Geolocation Data Sales

Australia fines Google $42.5 million over misleading location settings

Google is being ordered to pay A$60 million ($42.5 million) in penalties to Australia’s competition and national consumer law regulator regarding the collection and use of location data on Android phones.

The financial slap on the wrist relates to a period between January 2017 and December 2018 and follows court action by the Australian Competition and Consumer Commission (ACCC).

According to the regulators, Google misled consumers through the “Location History” setting. Some users were told, according to the ACCC, that the setting “was the only Google account setting that affected whether Google collected, kept and used personally identifiable data about their location.”

It was not. Another setting titled “Web & App Activity” also permitted data to be collected by Google. And it allowed the collection of “personally identifiable location data when it was turned on, and that setting was turned on by default,” the ACCC said.

The “misleading representations,” according to the ACCC, breach Australian consumer law and could have been viewed by the users of 1.3 million Google accounts in Australia. The figure is, however, a best estimate. We’re sure Google doesn’t collect telemetry showing where Android users navigate to either.

Privacy issues aside, the data could also be used by Google to target ads to consumers who thought they’d said no to collection.

Google “took remedial steps” and addressed the issues by December 20, 2018, but the damage was done and the ACCC instituted proceedings in October 2019. In April 2021, the Federal Court found that Google LLC (the US entity) and Google Australia Pty Ltd had breached Australian consumer law.

[…]

Google has come under fire from other quarters regarding the obtaining of customer location data without proper consent. A group of US states sued the search giant earlier this year over “dark patterns” in the user interface to get hold of location information. Then there was the whole creepy Street View Wi-Fi harvesting debacle.

[…]

Source: Australia fines Google over misleading location settings • The Register

Ring surveillance camera footage exploited for “funny clip” show

[…]Ring Nation, a new twist on the popular clip show genre, from MGM Television, Live PD producer Big Fish Entertainment and Ring.

The series, which will launch on September 26, will feature viral videos shared by people from their video doorbells and smart home cameras.

It’s a television take on a genre that has been increasingly going viral on social media.

The series will feature clips such as neighbors saving neighbors, marriage proposals, military reunions and silly animals.

[…]

Source: Wanda Sykes To Host Syndicated Viral Video Show Featuring Ring – Deadline

How this is not a really scary way to try to normalise the constant, low-visibility surveillance enacted by these cameras is a puzzle to me. It turns being spied on from doorways along the street into a joke.

e-HallPass, Which Monitors How Long Kids Are in the Bathroom, Is Now in 1,000 American Schools, normalises surveillance

e-HallPass, a digital system that students have to use to request to leave their classroom and which takes note of how long they’ve been away, including to visit the bathroom, has spread into at least a thousand schools around the United States.

The system has some resemblance to the sort of worker monitoring carried out by Amazon, which tracks how long its staff go to the toilet for, and is used to penalize workers for “time off task.” It also highlights how automated tools have led to increased surveillance of students in schools, and employees in places of work.

“This product is just the latest in a growing number of student surveillance tools—designed to allow school administrators to monitor and control student behavior at scale, on and off campus,”

[…]

increased scrutiny offered by surveillance tools “has been shown to be disproportionately targeted against minorities, recent immigrants, LGBTQ kids,” and other marginalized groups.

[…]

Eduspire, the company that makes e-HallPass, told trade publication EdSurge in March that 1,000 schools use the system. Brian Tvenstrup, president of Eduspire, told the outlet that the company’s biggest obstacle to selling the product “is when a school isn’t culturally ready to make these kinds of changes yet.”

[…]

Admins can then access data collected through the software, and view a live dashboard showing details on all passes. e-HallPass can also stop meet-ups of certain students and limit the number of passes going to certain locations, the website adds, explicitly mentioning “vandalism and TikTok challenges.” Many of the schools Motherboard identified appear to use e-HallPass specifically on Chromebooks, according to student user guides and similar documents hosted on the schools’ websites, though it also advertises that it can be used to track students on their personal cell phones.

EdSurge reported that some people had taken to Change.org with a petition to remove the “creepy” system from a specific school. Motherboard found over a dozen similar petitions online, including one regarding Independence High School, signed nearly 700 times, which appears to have been written by a group of students.

[…]


Source: A Tool That Monitors How Long Kids Are in the Bathroom Is Now in 1,000 American Schools

Samsung adds ‘repair mode’ to smartphone

When activated, repair mode prevents a range of behaviors – from casual snooping to outright lifting of personal data – by blocking access to photos, messages, and account information.

The mode provides technicians with the access they require to make a fix, including the apps a user employs. But repairers won’t see user data in apps, so content like photos, texts and emails remains secure.

When users enable repair mode their device reboots. To exit, the user reboots again after logging in the normal way and turning the setting off.

Samsung said it is rolling out repair mode via software update, initially on the Galaxy S21 series within South Korea, with more models, and perhaps locations, getting the functionality over time.

Samsung has not explained how the feature works. Android devices already offer the chance to establish accounts for different users, so perhaps Samsung has created a role for repair technicians and made that easier to access.

Most repair technicians won’t want to view or steal a customer’s personal data – but it does happen.

Apple was forced to pay millions last year after two iPhone repair contractors allegedly stole and posted a woman’s nudes to the internet. That fiasco was in no way an isolated incident. In 2019 a Genius Bar employee allegedly texted himself explicit images taken from an iPhone he repaired and was subsequently fired.

[…]

Source: Samsung adds ‘repair mode’ to South Korean smartphone • The Register

Twitter warns of ‘record highs’ in account data requests

Twitter has published its 20th transparency report, and the details still aren’t reassuring to those concerned about abuses of personal info. The social network saw “record highs” in the number of account data requests during the July-December 2021 reporting period, with 47,572 legal demands on 198,931 accounts. The media in particular faced much more pressure. Government demands for data from verified news outlets and journalists surged 103 percent compared to the last report, with 349 accounts under scrutiny.

The largest slice of requests targeting the news industry came from India (114), followed by Turkey (78) and Russia (55). Governments succeeded in withholding 17 tweets.

As in the past, US demands represented a disproportionately large chunk of the overall volume. The country accounted for 20 percent of all worldwide account info requests, and those requests covered 39 percent of all specified accounts. Russia is still the second-largest requester with 18 percent of volume, even if its demands dipped 20 percent during the six-month timeframe.

The company said it was still denying or limiting access to info when possible. It denied 31 percent of US data requests, and either narrowed or shut down 60 percent of global demands. Twitter also opposed 29 civil attempts to identify anonymous US users, citing First Amendment reasons. It sued in two of those cases, and has so far had success with one of those suits. There hasn’t been much success in reporting on national security-related requests in the US, however, and Twitter is still hoping to win an appeal that would let it share more details.

[…]

Source: Twitter warns of ‘record highs’ in account data requests | Engadget

Records reveal the scale of Homeland Security’s phone location data purchases

Investigators raised alarm bells when they learned Homeland Security bureaus were buying phone location data to effectively bypass the Fourth Amendment requirement for a search warrant, and now it’s clearer just how extensive those purchases were. TechCrunch notes the American Civil Liberties Union has obtained records linking Customs and Border Protection, Immigration and Customs Enforcement and other DHS divisions to purchases of roughly 336,000 phone location points from the data broker Venntel. The info represents just a “small subset” of raw data from the southwestern US, and includes a burst of 113,654 points collected over just three days in 2018.

The dataset, delivered through a Freedom of Information Act request, also outlines the agencies’ attempts to justify the bulk data purchases. Officials maintained that users voluntarily offered the data, and that it included no personally identifying information. As TechCrunch explains, though, that’s not necessarily accurate. Phone owners aren’t necessarily aware they opted in to location sharing, and likely didn’t realize the government was buying that data. Moreover, the data was still tied to specific devices — it wouldn’t have been difficult for agents to link positions to individuals.

Some Homeland Security workers expressed internal concerns about the location data. One senior director warned that the Office of Science and Technology bought Venntel info without getting a necessary Privacy Threshold Assessment. At one point, the department even halted all projects using Venntel data after learning that key legal and privacy questions had gone unanswered.

More details could be forthcoming, as Homeland Security is still expected to provide more documents in response to the FOIA request. We’ve asked Homeland Security and Venntel for comment. However, the ACLU report might fuel legislative efforts to ban these kinds of data purchases, including the Senate’s bipartisan Fourth Amendment is Not For Sale Act as well as the more recently introduced Health and Location Data Protection Act.

Source: Records reveal the scale of Homeland Security’s phone location data purchases | Engadget

Amazon Ring Tells Sen. Markey It Won’t Enhance Doorbell Privacy, will listen in to long range conversations

Ring is rejecting the request of a U.S. senator to introduce privacy-enhancing changes to its flagship doorbell video camera after product testing showed the device capable of recording conversations well beyond the doorsteps of its many millions of customers. Security and privacy experts expressed alarm at the quality of the distant recordings, raising concerns about the potential for blackmail, stalking, and other forms of invasion

In a letter to the company last month, Sen. Ed Markey, a Democrat of Massachusetts, said Ring was capturing “significant amounts of audio on private and public property adjacent to dwellings with Ring doorbells,” putting the right to “assemble, move, and converse without being tracked” at risk.

Markey did not ask the company to adjust the range of the device, but to change the doorbell’s settings so audio wouldn’t be recorded by default. Ring, which was acquired by retail giant Amazon in 2018, rejected the idea, arguing that doing so would be a “negative experience” for customers, who might easily get confused by the settings “in an emergency situation.” What’s more, Ring appeared to reject a request never to link the devices to voice recognition software, offering only that it hasn’t done so thus far.

Experts such as Matthew Guariglia, a policy analyst at the Electronic Frontier Foundation, have said the device is particularly harmful to the privacy of individuals who live in close quarters — think apartment buildings and condos — where they may be unknowingly recorded the moment they open their doors.

[…]

Source: Amazon Ring Tells Sen. Markey It Won’t Enhance Doorbell Privacy

Amazon’s Ring gave a record amount of doorbell footage to the US government in 2021

Ring, the maker of internet-connected video doorbells and security cameras, said in its latest transparency report that it turned over a record amount of doorbell footage and other information to U.S. authorities last year.

The Amazon-owned company said in two biannual reports covering 2021 that it received 3,147 legal demands, an increase of about 65% on the year earlier, up from about 1,900 legal demands in 2020.

More than 85% of the legal demands processed were by way of court-issued search warrants, allowing Ring to turn over both information about a Ring user and video footage from those accounts. Ring said it turned over user content in response to about four out of 10 demands it received during the year.

Transparency reports allow U.S. companies to disclose the number of legal orders they are given over a particular time period, often six months or a year. But Ring has been criticized for having unusually cozy relationships with about 2,200 police departments around the United States, the latest figures show, allowing police to request video doorbell camera footage from homeowners.

Ring said it also notified 648 users during the year that their user information had been requested by law enforcement. According to its law enforcement guidelines, Ring notifies users before disclosing their user information, such as name, address, email address and billing information, unless it is prohibited by way of a secrecy order.

In a new breakout, Ring also revealed it received 2,774 preservation orders, which allow police departments and law enforcement agencies to ask Amazon — not demand — to preserve a user’s account for up to six months while the requesting agency gathers enough evidence to obtain a court-issued order, such as a search warrant.

Amazon executive Brian Huseman told lawmakers in a letter published Wednesday that Ring shared doorbell footage at least 11 times with U.S. authorities so far in 2022 without the consent of the device’s owner, reports Politico. According to the letter, Amazon said it “made a good-faith determination that there was an imminent danger of death or serious physical injury to a person requiring disclosure of information without delay.” Under emergency disclosure orders, companies can respond with data when a requesting agency doesn’t have the time to obtain a court order.

Ring has not yet revealed how many times it disclosed user data under emergency circumstances in previous years, including in its most recent transparency report.

Source: Amazon’s Ring gave a record amount of doorbell footage to the government in 2021 | TechCrunch

China’s cyberspace regulator details data export rules

[…]

The Cyberspace Administration of China’s (CAC) policy was first floated in October 2021 and requires businesses that transfer data offshore to conduct a security review. The requirements kick in when an organization transfers data describing more than 100,000 individuals, or information about critical infrastructure – including that related to communications, finance and transportation. Sensitive data such as fingerprints also trigger the requirement, at a threshold of 10,000 sets of prints.

A Thursday announcement added a detail to the policy: the cutoff date after which the CAC will start counting towards the 100,000 and 10,000 thresholds. Oddly, that date is January 1 … of 2021.

A state official explained in Chinese state-owned media on Thursday that the efforts were necessary due to the digital economy expanding cross-border data activities, and that differences in international legal systems have increased data export security risks, thereby affecting national security and social interest.

The official detailed that the security review should occur prior to signing a contract that includes exporting data overseas. Any approved data export will be valid for two years, at which point the entity must apply again.
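
As a worked example of the thresholds described here, a toy Python check follows. The counting rules, function name, and data are my reading of the article for illustration, not the CAC’s legal text.

# Toy reading of the thresholds above: counting starts from January 1, 2021,
# and a review is needed once more than 100,000 individuals or 10,000
# sensitive records (e.g. fingerprint sets) have been exported, or when
# critical-infrastructure data is involved. Names here are illustrative.
from datetime import date

CUTOFF = date(2021, 1, 1)

def review_required(exports, handles_critical_infrastructure=False):
    """exports: iterable of (export_date, individuals, sensitive_records)."""
    individuals = sum(i for d, i, _ in exports if d >= CUTOFF)
    sensitive = sum(s for d, _, s in exports if d >= CUTOFF)
    return (handles_critical_infrastructure
            or individuals > 100_000
            or sensitive > 10_000)

history = [
    (date(2020, 6, 1), 500_000, 0),      # before the cutoff, not counted
    (date(2021, 3, 1), 60_000, 12_000),  # 12,000 fingerprint sets since 2021
]
print(review_required(history))  # True: sensitive-record threshold exceeded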

[…]

Source: China’s cyberspace regulator details data export rules • The Register

UK + 3 EU countries sign US border deal to share police biometric database

[…]

LIBE committee member and Pirate Party MEP Patrick Breyer said that during the meeting last week, the committee discovered that the UK – and three EU member states, though their identities were not revealed – had already signed up to reintroduce US visa requirements which grant access to police biometric databases.

In the UK, the Home Office declined the opportunity to deny it was signing up for the scheme. A spokesperson said: “The UK has a long-standing and close partnership with the USA which includes sharing data for specific purposes. We are in regular discussion with them on new proposals or initiatives to improve public safety and enable legitimate travel.”

Under UK law the police can retain an individual’s DNA profile and fingerprint record for up to three years from the date the samples were taken, even if the individual was arrested but not charged, provided the Biometrics Commissioner agrees. Police can also apply for a two-year extension. The same applies to those charged, but not convicted.

According to reports, the US Enhanced Border Security Partnership (EBSP) initiative will be voluntary initially but is set to become mandatory under the US Visa Waiver Program (VWP), which allows visa-free entry into the United States for up to 90 days, by 2027.

MEP Breyer said that when asked exactly what data the US wanted to tap into, the answer was as much as possible. When asked what would happen at US borders if a traveler was known to the police in participating states, it was said that this would be decided by the US immigration officer on a case-by-case basis.

[…]

“If necessary, the visa waiver program must be terminated by Europe as well. Millions of innocent Europeans are listed in police databases and could be exposed to completely disproportionate reactions in the USA.

“The US lacks adequate data and fundamental rights protection. Providing personal data to the US exposes our citizens… to the risk of arbitrary detention and false suspicion, with possible dire consequences, in the course of the US ‘war on terror’. We must protect our citizens from these practices,” Breyer said.

Source: UK signs US border deal to share police biometric database • The Register

T-Mobile Is Selling Your App and Web History to Advertisers, allowing extremely fine personal targeting (they say)

In yet another example of T-Mobile being The Worst with its customers’ data, the company announced a new money-making scheme this week: selling its customers’ app download data and web browsing history to advertisers.

The package of data is part of the company’s new “App Insights” adtech product that was in beta for the last year but formally rolled out this week. According to AdExchanger, which first reported news of the announcement from the Cannes Festival, the new product will let marketers track and target T-Mobile customers based on the apps they’ve downloaded and their “engagement patterns”—meaning when or how […]

These same “patterns” also include the types of domains a person visits in their mobile web browser. All of this data gets bundled up into what the company calls “personas,” which let marketers microtarget someone by their phone habits. One example that T-Mobile’s head of ad products, Jess Zhu, gave AdExchanger: a person with a human resources app on their phone who also tends to visit, say, Expedia’s website might be grouped as a “business traveler.” The company noted that there are no personas built on “gender or cultural identity”—so a person who visits a lot of, say, Christian websites and has a Bible app or two installed won’t be profiled based on that.
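
For a sense of how app installs plus visited domains turn into “personas,” here is a toy, rule-based Python sketch. The rules, app names, and persona labels are invented for illustration and are not T-Mobile’s.

# Toy rule-based persona bucketing from installed apps and visited domains.
# Everything here (rules, names, labels) is illustrative, not T-Mobile's.
def assign_personas(installed_apps, visited_domains):
    personas = set()
    apps = {a.lower() for a in installed_apps}
    domains = {d.lower() for d in visited_domains}
    if apps & {"workday", "concur"} and domains & {"expedia.com", "kayak.com"}:
        personas.add("business traveler")
    if domains & {"zillow.com", "realtor.com"}:
        personas.add("house hunter")
    return personas

print(assign_personas(["Workday", "Spotify"], ["expedia.com", "news.example"]))
# {'business traveler'}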

“App Insights transforms this data into actionable insights. Marketers can see app usage, growth, and retention and compare activity between brands and product categories,” a T-Mobile statement read.

T-Mobile (and Sprint, by association) certainly aren’t the only carriers pawning off this data; as Ars Technica first noted last year, Verizon overrode customers’ privacy preferences to sell off their browsing and app-usage data. And while AT&T had initially planned to sell access to similar data nearly a decade ago, the company currently claims that it exclusively uses “non-sensitive information” like your age range and zip code to serve up targeted ads.

But T-Mobile also won’t stop marketers from taking things into their own hands. One ad agency exec that spoke with AdExchanger said that one of the “most exciting” things about this new ad product is the ability to microtarget members of the LGBTQ community. Sure, that’s not one of the prebuilt personas offered in the App Insights product, “but a marketer could target phones with Grindr installed, for example, or use those audiences for analytics,” the original interview notes.

[…]

Source: T-Mobile Is Hawking Your App and Web History to Advertisers

Valorant will start listening in to and recording your voice chat in July

Riot Games will begin background evaluation of recorded in-game voice communications on July 13th in North America, in English. In a brief statement, Riot said that the purpose of the recording is ultimately to “collect clear evidence that could verify any violations of behavioral policies.”

For now, however, recordings will be used to develop the evaluation system that may eventually be implemented. That means training some kind of language model using the recordings, says Riot, to “get the tech in a good enough place for a beta launch later this year.”

Riot also makes clear that voice evaluation from this test will not be used for reports. “We know that before we can even think of expanding this tool, we’ll have to be confident it’s effective, and if mistakes happen, we have systems in place to make sure we can correct any false positives (or negatives for that matter),” said Riot.

Source: Valorant will start listening to your voice chat in July | PC Gamer

Oh, not used for reports. That’s ok then. No problem invading your privacy there then.

Coinbase Is Selling Data on Crypto and ‘Geotracking’ to ICE

Coinbase Tracer, the analytics arm of the cryptocurrency exchange Coinbase, has signed a contract with U.S. Immigration and Customs Enforcement that would allow the agency access to a variety of features and data caches, including “historical geo tracking data.”

Coinbase Tracer, according to the website, is for governments, crypto businesses, and financial institutions. It allows these clients the ability to trace transactions within the blockchain. It is also used to “investigate illicit activities including money laundering and terrorist financing” and “screen risky crypto transactions to ensure regulatory compliance.”

The deal was originally signed in September 2021, but the contract was only now obtained by watchdog group Tech Inquiry. The deal was made for a maximum amount of $1.37 million, and we knew at the time that it was a three-year contract for Coinbase’s analytic software. The newly revealed contract offers a closer look at what the deal entails.

This deal will allow ICE to track transactions made through twelve different currencies, including Ethereum, Tether, and Bitcoin. Other features include “Transaction demixing and shielded transaction analysis,” which appears to be aimed at preventing users from laundering funds or hiding transactions. Another feature, “Multi-hop link analysis for incoming and outgoing funds,” would give ICE insight into the transfer of the currencies. The most mysterious one is access to “historical geo tracking data,” and ICE gave a little insight into how this tool may be used.

[…]

Source: Coinbase Is Selling Data on Crypto and ‘Geotracking’ to ICE

New Firefox privacy feature strips URLs of tracking parameters

Numerous companies, including Facebook, Marketo, Olytics, and HubSpot, utilize custom URL query parameters to track clicks on links.

For example, Facebook appends a fbclid query parameter to outbound links to track clicks, with an example of one of these URLs shown below.

https://www.example.com/?fbclid=IwAR4HesRZLT-fxhhh3nZ7WKsOpaiFzsg4nH0K4WLRHw1h467GdRjaLilWbLs

With the release of Firefox 102, Mozilla has added the new ‘Query Parameter Stripping’ feature that automatically strips various query parameters used for tracking from URLs when you open them, whether that be by clicking on a link or simply pasting the URL into the address bar.

Once enabled, Mozilla Firefox will strip the following tracking parameters from URLs when you click on links or paste a URL into the address bar:

  • Olytics: oly_enc_id=, oly_anon_id=
  • Drip: __s=
  • Vero: vero_id=
  • HubSpot: _hsenc=
  • Marketo: mkt_tok=
  • Facebook: fbclid=, mc_eid=
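
For illustration, a minimal Python sketch of what this stripping amounts to, using the parameter list above; this is an illustration, not Mozilla’s implementation.

# Strip known tracking parameters from a URL's query string (illustrative).
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

TRACKING_PARAMS = {
    "oly_enc_id", "oly_anon_id",   # Olytics
    "__s",                         # Drip
    "vero_id",                     # Vero
    "_hsenc",                      # HubSpot
    "mkt_tok",                     # Marketo
    "fbclid", "mc_eid",            # Facebook
}

def strip_tracking(url: str) -> str:
    scheme, netloc, path, query, fragment = urlsplit(url)
    kept = [(k, v) for k, v in parse_qsl(query, keep_blank_values=True)
            if k not in TRACKING_PARAMS]
    return urlunsplit((scheme, netloc, path, urlencode(kept), fragment))

print(strip_tracking("https://www.example.com/?fbclid=IwAR4Hes&id=42"))
# https://www.example.com/?id=42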

[…]

To enable Query Parameter Stripping, go into the Firefox Settings, click on Privacy & Security, and then change ‘Enhanced Tracking Protection’ to ‘Strict.’

Mozilla Firefox’s Enhanced Tracking Protection set to Strict (image source: BleepingComputer)

However, these tracking parameters will not be stripped in Private Mode even with Strict mode enabled.

To also enable the feature in Private Mode, enter about:config in the address bar, search for strip, and set the ‘privacy.query_stripping.enabled.pbmode‘ option to true, as shown below.

Enable the privacy.query_stripping.enabled.pbmode setting (image source: BleepingComputer)

It should be noted that setting Enhanced Tracking Protection to Strict could cause issues when using particular sites.

If you enable this feature and find that sites are not working correctly, just set it back to Standard (disables this feature) or the Custom setting, which will require some tweaking.

Source: New Firefox privacy feature strips URLs of tracking parameters

Spain, Austria not convinced location data is personal

[…]

EU privacy group NOYB (None of your business), set up by privacy warrior Max “Angry Austrian” Schrems, said on Tuesday it appealed a decision of the Spanish Data Protection Authority (AEPD) to support Virgin Telco’s refusal to provide the location data it has stored about a customer.

In Spain, according to NOYB, the government still requires telcos to record the metadata of phone calls, text messages, and cell tower connections, despite Court of Justice (CJEU) decisions that prohibit data retention.

A Spanish customer demanded that Virgin reveal his personal data, as allowed under the GDPR. Article 15 of the GDPR guarantees individuals the right to obtain their personal data from companies that process and store it.

[…]

Virgin, however, refused to provide the customer’s location data when a complaint was filed in December 2021, arguing that only law enforcement authorities may demand that information. And the AEPD sided with the company.

NOYB says that Virgin Telco failed to explain why Article 15 should not apply since the law contains no such limitation.

“The fundamental right to access is comprehensive and clear: users are entitled to know what data a company collects and processes about them – including location data,” argued Felix Mikolasch, a data protection attorney at NOYB, in a statement. “This is independent from the right of authorities to access such data. In this case, there is no relevant exception from the right to access.”

[…]

The group said it filed a similar appeal last November in Austria, where that country’s data protection authority similarly supported Austrian mobile provider A1’s refusal to turn over customer location data. In that case, A1’s argument was that location data should not be considered personal data because someone else could have used the subscriber phone that generated it.

[…]

Location data is potentially worth billions. According to Fortune Business Insights, the location analytics market is expected to bring in $15.76 billion in 2022 and $43.97 billion by 2029.

Outside the EU, the problem is the availability of location data, rather than lack of access. In the US, where there’s no federal data protection framework, the government is a major buyer of location data – it’s more convenient than getting a warrant.

And companies that can obtain location data, often through mobile app SDKs, appear keen to monetize it.

In 2020, the FCC fined the four largest wireless carriers in the US for failing to protect customer location data in accordance with a 2018 commitment to do so.

Source: Spain, Austria not convinced location data is personal • The Register

Facebook and Anti-Abortion Clinics Are Collecting Highly Sensitive Info on Would-Be Patients

Facebook is collecting ultra-sensitive personal data about abortion seekers and enabling anti-abortion organizations to use that data as a tool to target and influence people online, in violation of its own policies and promises.

In the wake of a leaked Supreme Court opinion signaling the likely end of nationwide abortion protections, privacy experts are sounding alarms about all the ways people’s data trails could be used against them if some states criminalize abortion.

A joint investigation by Reveal from The Center for Investigative Reporting and The Markup found that the world’s largest social media platform is already collecting data about people who visit the websites of hundreds of crisis pregnancy centers, which are quasi-health clinics, mostly run by religiously aligned organizations whose mission is to persuade people to choose an option other than abortion.

[…]

Reveal and The Markup have found Facebook’s code on the websites of hundreds of anti-abortion clinics. Using Blacklight, a Markup tool that detects cookies, keyloggers and other types of user-tracking technology on websites, Reveal analyzed the sites of nearly 2,500 crisis pregnancy centers – with data provided by the University of Georgia – and found that at least 294 shared visitor information with Facebook. In many cases, the information was extremely sensitive – for example, whether a person was considering abortion or looking to get a pregnancy test or emergency contraceptives.
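
As a rough illustration of the kind of check a tool like Blacklight automates, the sketch below fetches a page’s static HTML and looks for the Meta/Facebook Pixel loader. The signature strings are common pixel markers, the URL in the usage line is hypothetical, and Blacklight itself uses a headless browser and detects far more than this.

# Simplified check for the Facebook/Meta Pixel in a page's static HTML.
# A real scanner (like Blacklight) renders the page and inspects network
# requests, cookies, and scripts; this only looks at the raw HTML.
import urllib.request

PIXEL_SIGNATURES = (
    "connect.facebook.net/en_US/fbevents.js",  # pixel loader script
    "fbq('init'", 'fbq("init"',                # pixel init calls
)

def has_facebook_pixel(url: str) -> bool:
    with urllib.request.urlopen(url, timeout=10) as resp:
        html = resp.read().decode("utf-8", errors="replace")
    return any(sig in html for sig in PIXEL_SIGNATURES)

# print(has_facebook_pixel("https://example-clinic-site.example/"))  # hypothetical URL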

[…]

Source: Facebook and Anti-Abortion Clinics Are Collecting Highly Sensitive Info on Would-Be Patients – Reveal

Testing firm Cignpost can profit from sale of Covid swabs with customer DNA

A large Covid-19 testing provider is being investigated by the UK’s data privacy watchdog over its plans to sell swabs containing customers’ DNA for medical research.

Source: Testing firm can profit from sale of Covid swabs | News | The Sunday Times

Find You: an AirTag clone that Apple’s unwanted-tracking protections can’t find

[…]

In one exemplary stalking case, a fashion and fitness model discovered an AirTag in her coat pocket after having received a tracking warning notification from her iPhone. Other times, AirTags were placed in expensive cars or motorbikes to track them from parking spots to their owner’s home, where they were then stolen.

On February 10, Apple addressed this by publishing a news statement titled “An update on AirTag and unwanted tracking” in which they describe the way they are currently trying to prevent AirTags and the Find My network from being misused and what they have planned for the future.

[…]

Apple needs to incorporate non-genuine AirTags into their threat model, thus implementing security and anti-stalking features into the Find My protocol and ecosystem instead of in the AirTag itself, which can run modified firmware or not be an AirTag at all (Apple devices currently have no way to distinguish genuine AirTags from clones via Bluetooth).

The source code used for the experiment can be found here.

Edit: I have been made aware of a research paper titled “Who Tracks the Trackers?” (from November 2021) that also discusses this idea and includes more experiments. Make sure to check it out as well if you’re interested in the topic!

[…]

Now Amazon to put creepy AI cameras in UK delivery vans

Amazon is installing AI-powered cameras in delivery vans to keep tabs on its drivers in the UK.

The technology was first deployed in the US, where malfunctions reportedly led to drivers being denied bonuses. Last year, the internet giant produced a corporate video detailing how the cameras monitor drivers’ driving behavior for safety reasons. The same system is now being rolled out to vehicles in the UK.

Multiple cameras are placed under the front mirror. One is directed at the person behind the wheel, one faces the road, and two are located on either side to provide a wider view. The cameras do not record constant video, and are monitored by software built by Netradyne, a computer-vision startup focused on driver safety. This code uses machine-learning algorithms to figure out what’s going on in and around the vehicle. Delivery drivers can also activate the cameras to record footage if they want to, such as if someone’s trying to rob them or run them off the road. There is no microphone, for what it’s worth.

Audio alerts are triggered by some behaviors, such as if a driver fails to brake at a stop sign or is driving too fast. Other actions are silently logged, such as if the driver doesn’t wear a seat-belt or if a camera’s view is blocked. Amazon, reportedly in the US at least, records workers and calculates from their activities a score that affects their pay; drivers have previously complained of having bonuses unfairly deducted for behavior the computer system wrongly classified as reckless.

[…]

Source: Now Amazon to put ‘creepy’ AI cameras in UK delivery vans • The Register

Twitter fined $150 million after selling 2FA phone numbers + email addresses to advertisers for targeting

Twitter has agreed to pay a $150 million fine after federal law enforcement officials accused the social media company of illegally using peoples’ personal data over six years to help sell targeted advertisements.

In court documents made public on Wednesday, the Federal Trade Commission and the Department of Justice say Twitter violated a 2011 agreement with regulators in which the company vowed to not use information gathered for security purposes, like users’ phone numbers and email addresses, to help advertisers target people with ads.

Federal investigators say Twitter broke that promise.

“As the complaint notes, Twitter obtained data from users on the pretext of harnessing it for security purposes but then ended up also using the data to target users with ads,” said FTC Chair Lina Khan.

Twitter requires users to provide a telephone number and email address to authenticate accounts. That information also helps people reset their passwords and unlock their accounts when the company blocks logging in due to suspicious activity.

But until at least September 2019, Twitter was also using that information to boost its advertising business by allowing advertisers access to users’ phone numbers and email addresses. That ran afoul of the agreement the company had with regulators.

[…]

Source: Twitter will pay a $150 million fine over accusations it improperly sold user data : NPR