The Linkielist

Linking ideas with the world


Gravy Analytics sued over data breach exposing location data of millions of smartphones

Gravy Analytics has been sued yet again for allegedly failing to safeguard its vast stores of personal data, which are now feared stolen. And by personal data we mean information including the locations of tens of millions of smartphones, coordinates of which were ultimately harvested from installed apps.

A complaint [PDF], filed in federal court in northern California yesterday, is at least the fourth such lawsuit against Gravy since January, when an unidentified criminal posted screenshots to XSS, a Russian cybercrime forum, to support claims that 17 TB of records had been pilfered from the American analytics outfit’s AWS S3 storage buckets.

The suit this week alleges that the massive archive contains the geo-locations of people’s phones.

Gravy Analytics subsequently confirmed – in a non-compliance report [PDF] filed with the Norwegian Data Protection Authority and obtained by Norwegian broadcaster NRK – that it suffered some kind of data security breach, which was discovered on January 4, 2025.

Three earlier lawsuits – filed in New Jersey on January 14 and 30, and in Virginia on January 31 – make similar allegations.

Gravy Analytics and its subsidiary Venntel were banned from selling sensitive location data by the FTC in December 2024, under a proposed order [PDF] to resolve the agency’s complaint against the companies that was finalized on January 15, 2025.

The FTC complaint alleged the firms “used geofencing, which creates a virtual geographical boundary, to identify and sell lists of consumers who attended certain events related to medical conditions and places of worship and sold additional lists that associate individual consumers to other sensitive characteristics.”
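For readers unfamiliar with the term: a geofence is just a boundary test against location data. A minimal sketch of a circular geofence check, with made-up coordinates and radius, shows how little is needed to flag everyone whose phone pinged near a sensitive location:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def inside_geofence(point, center, radius_km):
    """True if a (lat, lon) fix falls within radius_km of the fence center."""
    return haversine_km(point[0], point[1], center[0], center[1]) <= radius_km

# Hypothetical fence center and device ping, roughly 1 km apart:
fence_center = (38.8895, -77.0353)
device_ping = (38.8977, -77.0365)
inside_geofence(device_ping, fence_center, 2.0)  # → True
```

Run this check against billions of harvested coordinates and a list of "consumers who attended certain events" falls out automatically.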

[…]

Source: Gravy Analytics soaks up another sueball over data breach • The Register

Unions Sue to Block Elon Musk’s Access to Americans’ Tax and Benefits Records

A coalition of labor organizations representing federal workers and retirees has sued the Department of the Treasury to block it from giving the newly created Department of Government Efficiency, controlled by Elon Musk, access to the federal government’s sensitive payment systems.

After forcing out a security official who opposed the move, Treasury Secretary Scott Bessent granted DOGE workers access to the system last week, according to The New York Times. Despite its name, DOGE is not a government department but rather an ad-hoc group formed by President Trump purportedly tasked with cutting government spending.

The labor organizations behind the lawsuit filed Monday argue that Bessent broke federal privacy and tax confidentiality laws by giving unauthorized DOGE workers, including people like Musk who are not government employees, the ability to view the private information of anyone who pays taxes or receives money from federal agencies.

With access to the Treasury systems, DOGE representatives can potentially view the names, social security numbers, birth dates, mailing addresses, email addresses, and bank information of tens of millions of people who receive tax refunds, social security and disability payments, veterans benefits, or salaries from the federal government, according to the lawsuit.

“The scale of the intrusion into individuals’ privacy is massive and unprecedented,” according to the complaint filed by the Alliance for Retired Americans, the American Federation of Government Employees, and the Service Employees International Union.

[…]

In their lawsuit, the labor organizations argue that federal law prohibits the disclosure of taxpayer information to anyone except Treasury employees who require it for their official duties unless the disclosure is authorized by a specific law, which DOGE’s access to the system is not. DOGE’s access also violates the Privacy Act of 1974, which prohibits disclosure of personal information to unauthorized people and lays out strict procedures for changing those authorizations, which the Trump administration has not followed, according to the suit.

The plaintiffs have asked the Washington, D.C. district court to grant an injunction preventing unauthorized people from accessing the payment systems and to rule the Treasury’s actions unlawful.

Source: Unions Sue to Block Elon Musk’s Access to Americans’ Tax and Benefits Records

Phone Metadata Suddenly Not So ‘Harmless’ When It’s The FBI’s Data Being Harvested

[…] While trying to fend off attacks on Section 215 collections (most of which are governed [in the loosest sense of the word] by the Third Party Doctrine), the NSA and its domestic-facing remora, the FBI, insisted collecting and storing massive amounts of phone metadata was no more a constitutional violation than it was a privacy violation.

Suddenly — thanks to the ongoing, massive compromising of major US telecom firms by Chinese state-sanctioned hackers — the FBI is getting hot and bothered about the bulk collection of its own phone metadata by (gasp!) a government agency. (h/t Kevin Collier on Bluesky)

FBI leaders have warned that they believe hackers who broke into AT&T Inc.’s system last year stole months of their agents’ call and text logs, setting off a race within the bureau to protect the identities of confidential informants, a document reviewed by Bloomberg News shows.

[…]

The data was believed to include agents’ mobile phone numbers and the numbers with which they called and texted, the document shows. Records for calls and texts that weren’t on the AT&T network, such as through encrypted messaging apps, weren’t part of the stolen data.

The agency (quite correctly!) believes the metadata could be used to identify agents, as well as their contacts and confidential sources. Of course it can.

[…]

The issue, of course, is that the Intelligence Community consistently downplayed this exact aspect of the bulk collection, claiming it was no more intrusive than scanning every piece of domestic mail (!) or harvesting millions of credit card records just because the Fourth Amendment (as interpreted by the Supreme Court) doesn’t say the government can’t.

There are real risks to real people who are affected by hacks like these. The same thing applies when the US government does it. It’s not just a bunch of data that’s mostly useless. Harvesting metadata in bulk allows the US government to do the same thing Chinese hackers are doing with it: identifying individuals, sussing out their personal networks, and building from that to turn numbers into adversarial actions — whether it’s the arrest of suspected terrorists or the further compromising of US government agents by hostile foreign forces.
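The "sussing out personal networks" step needs nothing fancier than counting who talks to whom. A toy sketch, using invented call-detail records, of what an analyst (or a hostile intelligence service) can extract from metadata alone:

```python
from collections import Counter, defaultdict

# Hypothetical call-detail records: (caller, callee, duration_seconds).
# No call content at all -- just the metadata at issue here.
cdrs = [
    ("555-0100", "555-0199", 320),
    ("555-0100", "555-0199", 45),
    ("555-0100", "555-0142", 610),
    ("555-0123", "555-0199", 12),
]

def contact_graph(records):
    """Build an undirected contact graph weighted by call frequency."""
    graph = defaultdict(Counter)
    for a, b, _duration in records:
        graph[a][b] += 1
        graph[b][a] += 1
    return graph

g = contact_graph(cdrs)
# 555-0199's most frequent contact suggests a close relationship --
# say, an agent and a confidential source:
g["555-0199"].most_common(1)  # → [("555-0100", 2)]
```

Scale the same counting up to months of an agency's call logs and the informant problem the FBI is now worried about becomes obvious.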

The takeaway isn’t the inherent irony. It’s that the FBI and NSA spent years pretending the fears expressed by activists and legislators were overblown. Officials repeatedly claimed the information was of almost zero utility, despite mounting several efforts to protect this collection from being shut down by the federal government. In the end, the phone metadata program (at least as it applies to landlines) was terminated. But there’s more than a hint of egregious hypocrisy in the FBI’s sudden concern about how much can be revealed by “just” metadata.

Source: Phone Metadata Suddenly Not So ‘Harmless’ When It’s The FBI’s Data Being Harvested | Techdirt

Venezuela’s Internet Censorship Sparks Surge in VPN Demand

What’s Important to Know:

  • Venezuela’s Supreme Court fined TikTok US$10 million for failing to prevent viral video challenges that resulted in the deaths of three Venezuelan children.
  • TikTok faced temporary blockades by Internet Service Providers (ISPs) in Venezuela for not paying the fine.
  • ISPs used IP, HTTP, and DNS blocks to restrict access to TikTok and other platforms in early January 2025.
  • While this latest round of blockades was taking place, protests against Nicolás Maduro’s attempt to retain the presidency of Venezuela were happening across the country, with riot police deployed in all major cities to quell them.
  • A significant surge in demand for VPN services has been observed in Venezuela since the beginning of 2025. Access to some VPN providers’ websites has also been restricted in the country.

In November 2024, Nicolás Maduro announced that two children had died after participating in challenges on TikTok. After a third death was announced by Education Minister Héctor Rodriguez, Venezuela’s Supreme Court issued a $10 million fine against the social media platform for failing to implement measures to prevent such incidents.

The court also ordered TikTok to open an office in Venezuela to oversee content compliance with local laws, giving the platform eight days to comply and pay the fine. TikTok failed to meet the court’s deadline to pay the fine or open an office in the country. As a result, ISPs in Venezuela, including CANTV — the state’s internet provider — temporarily blocked access to TikTok.

The blockades happened on January 7 and later on January 8, lasting several hours each. According to Netblocks.org, various methods were used to restrict access to TikTok, including IP, HTTP, and DNS blocks.
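The three techniques operate at different layers: IP blocks drop packets to known addresses, HTTP blocks act on the requested hostname, and DNS blocks return no answer or a bogus one. A toy classifier for the DNS case, with entirely made-up addresses, illustrates how censorship monitors distinguish a block from normal resolution:

```python
# Hypothetical known-good addresses for the probed domain, plus a few
# sinkhole IPs that censoring resolvers commonly return instead.
KNOWN_GOOD = {"23.192.228.80", "23.220.75.245"}
SINKHOLES = {"0.0.0.0", "127.0.0.1", "10.0.0.1"}

def classify_answer(ips):
    """Classify one resolver answer from a blocked-domain probe."""
    if not ips:
        return "dns-block (no answer / NXDOMAIN)"
    if any(ip in SINKHOLES for ip in ips):
        return "dns-block (sinkholed)"
    if not any(ip in KNOWN_GOOD for ip in ips):
        return "suspect (unexpected addresses)"
    return "ok"

classify_answer([])                  # → "dns-block (no answer / NXDOMAIN)"
classify_answer(["127.0.0.1"])       # → "dns-block (sinkholed)"
classify_answer(["23.192.228.80"])   # → "ok"
```

Projects like Netblocks and VeSinFiltro run probes of this general shape from many vantage points inside the country and compare answers against a trusted baseline.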

A Netblocks.org report showed zero reachability for TikTok across several Venezuelan ISPs.

On January 9, under orders of CONATEL (Venezuela’s telecommunications regulator), CANTV and other private ISPs in the country implemented further blockades to restrict access to TikTok. For instance, they blocked 21 VPN providers along with 33 public DNS services as reported by VeSinFiltro.org.

[…]

vpnMentor’s Research Team first observed a significant surge in the demand for VPN services in the country back in 2024, when X was first blocked. Since then, VPN usage has continued to rise in Venezuela, reaching another remarkable surge at the beginning of 2025. VPN demand grew over 200% from January 7th to the 8th alone, for a total growth of 328% from January 1st to January 8th. This upward trend shows signs of further growth according to partial data from January 9th.

The increased demand for VPN services indicates a growing interest in circumventing censorship and accessing restricted content online. This trend suggests that Venezuelan citizens are actively seeking ways to bypass government-imposed restrictions on social media platforms and maintain access to a free flow of information.

[…]

Other Recent VPN Demand Surges

Online platforms are no strangers to geoblocks in different parts of the world. In fact, there have been cases where platforms themselves impose location-based access restrictions on users. For instance, Aylo/Pornhub previously geo-blocked 17 US states in response to age-verification laws that the adult site deemed unjust.

vpnMentor’s Research Team recently published a report about a staggering 1,150% VPN demand surge in Florida following the IP-block of Pornhub in the state.

Source: Venezuela’s Internet Censorship Sparks Surge in VPN Demand

VPN Demand Surge in Florida after Adult Sites Age Restriction Kicks In

What’s important to know:

  • On March 25, 2024, Florida’s Gov. Ron DeSantis signed a law requiring age verification for accessing pornographic sites. This law, known as House Bill 3 (HB3), passed with bipartisan support and has caused quite a stir in the online community.
  • HB3 was set to come into effect on January 1, 2025. It allows hefty fines of up to $50,000 for websites that fail to comply with the regulations.
  • In response to this new legislation, Aylo, the parent company of Pornhub, confirmed on December 18, 2024 that it would deny access to all users geo-located in the state in protest against the new age verification requirements imposed by state law.
  • Pornhub, which registered 3 billion visits from the United States in January 2024, had previously imposed access restrictions in Kentucky, Indiana, Idaho, Kansas, Nebraska, Texas, North Carolina, Montana, Mississippi, Virginia, Arkansas, and Utah. This makes Florida the 13th state without access to their website.

The interesting development following Aylo’s geo-block on Florida IP addresses is the dramatic increase in the demand for Virtual Private Network (VPN) services in the state. A VPN allows users to mask their IP addresses and encrypt their internet traffic, providing an added layer of privacy and security while browsing online.

The vpnMentor Research Team observed a significant surge in VPN usage across the state of Florida: demand began climbing in the last minutes of 2024, rose steadily through the first hours of January 1st, and peaked at a staggering 1,150% just four hours after the HB3 law came into effect.
Additionally, there was a noteworthy 51% spike in demand for VPN services in the state on December 19, 2024, the day after Aylo announced it would geo-block Florida IP addresses.

Florida’s new law on pornographic websites and the consequent rise of VPN usage emphasize the intricate interplay between technology, privacy, and regulatory frameworks. With laws pertaining to online activities constantly changing, it is imperative for users and website operators alike to remain knowledgeable about regulations and ensure compliance.

Past VPN Demand Surges

Aylo/Pornhub has previously geo-blocked 12 states all of which have enforced age-verification laws that the adult site deemed unjust.

In May 2023, Pornhub’s banning of Utah-based users caused a 967% spike in VPN demand in the state and last year, the passing of adult-site-related age restriction laws in Texas caused a surge in demand of 234.8% in the state.

Source: VPN Demand Surge in Florida after Adult Sites Age Restriction Kicks In

Google brings back digital fingerprinting to track users for advertising

Google is tracking your online behavior in the name of advertising, reintroducing a data collection process that ingests all of your online signals (from IP address to complex browser information) and pinpoints unique users or devices, also known as “digital fingerprinting.”
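As a rough illustration of why fingerprinting works without any cookie to delete, here is a minimal sketch that hashes a set of hypothetical browser signals into a stable identifier (the signal names and values are invented):

```python
import hashlib
import json

# Hypothetical signals a fingerprinting script might collect; none of them
# individually identifies a user, but the combination is often unique.
signals = {
    "ip": "198.51.100.23",
    "user_agent": "Mozilla/5.0 (X11; Linux x86_64) ...",
    "timezone": "Europe/Amsterdam",
    "screen": "2560x1440x24",
    "fonts": ["Arial", "DejaVu Sans", "Noto Color Emoji"],
    "canvas_hash": "b7e2",  # stands in for GPU/driver rendering quirks
}

def fingerprint(sig):
    """Derive a stable identifier by hashing the combined signals."""
    blob = json.dumps(sig, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()[:16]

fp = fingerprint(signals)
# The same device yields the same ID on every visit:
assert fp == fingerprint(signals)
```

Clearing cookies changes nothing here, which is exactly why regulators consider the technique harder for users to control.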

The company’s updated platform program policies include relaxed restrictions on advertisers and personalized ad targeting across a range of devices, an outcome of a larger “advertising ecosystem shift” and the advancement of privacy-enhancing technologies (PETs) like on-device processing and trusted execution environments, in the words of the company.

A departure from its longstanding pledge to user choice and privacy, Google argues these technologies offer enough protection for users while also creating “new ways for brands to manage and activate their data safely and securely.” The new feature will be available to advertisers beginning Feb. 16, 2025.

[…]

Contrary to other data collection tools like cookies, digital fingerprinting is difficult to spot, and thus harder for even privacy-conscious users to erase or block. On Dec. 19, the UK’s Information Commissioner’s Office (ICO) — a data protection and privacy regulator — labeled Google “irresponsible” for the policy change, saying the shift to fingerprinting is an unfair means of tracking users that reduces their choice and control over their personal information. The watchdog also warned that the move could encourage riskier advertiser behavior.

“Google itself has previously said that fingerprinting does not meet users’ expectations for privacy, as users cannot easily consent to it as they would cookies. This in turn means they cannot control how their information is collected. To quote Google’s own position on fingerprinting from 2019: ‘We think this subverts user choice and is wrong,'” wrote ICO executive director of regulatory risk Stephen Almond.

The ICO warned that it will intervene if Google cannot demonstrate existing legal requirements for such tech, including options to secure freely-given consent, ensure fair processing, and uphold the right to erasure: “Businesses should not consider fingerprinting a simple solution to the loss of third-party cookies and other cross-site tracking signals.”

Source: Google brings back digital fingerprinting to track users for advertising | Mashable

Google goes to court for collecting data on users who opted out… again…

A federal judge this week rejected Google’s motion to throw out a class-action lawsuit alleging that it invaded the privacy of users who opted out of functionality that records a user’s web and app activities. A jury trial is scheduled for August 2025 in US District Court in San Francisco.

The lawsuit concerns Google’s Web & App Activity (WAA) settings, with the lead plaintiff representing two subclasses of people with Android and non-Android phones who opted out of tracking. “The WAA button is a Google account setting that purports to give users privacy control of Google’s data logging of the user’s web app and activity, such as a user’s searches and activity from other Google services, information associated with the user’s activity, and information about the user’s location and device,” wrote US District Judge Richard Seeborg, the chief judge in the Northern District Of California.

Google says that Web & App Activity “saves your activity on Google sites and apps, including associated info like location, to give you faster searches, better recommendations, and more personalized experiences in Maps, Search, and other Google services.” Google also has a supplemental Web & App Activity setting that the judge’s ruling refers to as “(s)WAA.”

“The (s)WAA button, which can only be switched on if WAA is also switched on, governs information regarding a user’s ‘[Google] Chrome history and activity from sites, apps, and devices that use Google services.’ Disabling WAA also disables the (s)WAA button,” Seeborg wrote.

Google sends data to developers

But data is still sent to third-party app developers through the Google Analytics for Firebase (GA4F), “a free analytical tool that takes user data from the Firebase kit and provides app developers with insight on app usage and user engagement,” the ruling said. GA4F “is integrated in 60 percent of the top apps” and “works by automatically sending to Google a user’s ad interactions and certain identifiers regardless of a user’s (s)WAA settings, and Google will, in turn, provide analysis of that data back to the app developer.”

Plaintiffs have brought claims of privacy invasion under California law. Plaintiffs “present evidence that their data has economic value,” and “a reasonable juror could find that Plaintiffs suffered damage or loss because Google profited from the misappropriation of their data,” Seeborg wrote.

[…]

In a proposed settlement of a different lawsuit, Google last year agreed to delete records reflecting users’ private browsing activities in Chrome’s Incognito mode.

[…]

Google contends that its system is harmless to users. “Google argues that its sole purpose for collecting (s)WAA-off data is to provide these analytic services to app developers. This data, per Google, consists only of non-personally identifiable information and is unrelated (or, at least, not directly related) to any profit-making objectives,” Seeborg wrote.

On the other side, plaintiffs say that Google’s tracking contradicts its “representations to users because it gathers exactly the data Google denies saving and collecting about (s)WAA-off users,” Seeborg wrote. “Moreover, Plaintiffs insist that Google’s practices allow it to personalize ads by linking user ad interactions to any later related behavior—information advertisers are likely to find valuable—leading to Google’s lucrative advertising enterprise built, in part, on (s)WAA-off data unlawfully retrieved.”

[…]

Google, as the judge writes, purports to treat user data as pseudonymous by creating a randomly generated identifier that “permits Google to recognize the particular device and its later ad-related behavior… Google insists that it has created technical barriers to ensure, for (s)WAA-off users, that pseudonymous data is delinked to a user’s identity by first performing a ‘consent check’ to determine a user’s (s)WAA settings.”
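The flow the judge describes can be sketched in a few lines. To be clear, this is a hypothetical mock-up of the plaintiffs’ core complaint, not Google’s actual implementation: the event is logged either way, and only the final identity-linking step consults the (s)WAA setting:

```python
import uuid

class Device:
    """Hypothetical per-device state: a random pseudonymous identifier
    plus the user's (s)WAA consent setting."""
    def __init__(self, swaa_enabled):
        self.pseudo_id = uuid.uuid4().hex  # random, not derived from identity
        self.swaa_enabled = swaa_enabled

def record_ad_interaction(device, event, log):
    """Log an ad event keyed by pseudonymous ID regardless of settings;
    the 'consent check' only gates linking it back to an account."""
    entry = {"pseudo_id": device.pseudo_id, "event": event}
    entry["linkable_to_account"] = device.swaa_enabled  # the consent check
    log.append(entry)

log = []
record_ad_interaction(Device(swaa_enabled=False), "ad_click", log)
log[0]["linkable_to_account"]  # → False: data still flows, just "delinked"
```

Whether a random-but-persistent device identifier like `pseudo_id` counts as "personal information" is precisely the jury question the ruling describes.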

Whether this counts as personal information under the law is a question for a jury, the judge wrote. Seeborg pointed to California law that defines personal information to include data that “is reasonably capable of being associated with, or could reasonably be linked, directly or indirectly, with a particular consumer or household.” Given the legal definition, “a reasonable juror could view the (s)WAA-off data Google collected via GA4F, including a user’s unique device identifiers, as comprising a user’s personal information,” he wrote.

[…]

Source: Google loses in court, faces trial for collecting data on users who opted out – Ars Technica

Siri “unintentionally” recorded private convos on phone and watch, then sold them to advertisers; yes, those ads are very targeted. Apple agrees to pay $95M and laughs all the way to the bank

Apple has agreed to pay $95 million to settle a lawsuit alleging that its voice assistant Siri routinely recorded private conversations that were then shared with third parties and used for targeted ads.

In the proposed class-action settlement—which comes after five years of litigation—Apple admitted to no wrongdoing. Instead, the settlement refers to “unintentional” Siri activations that occurred after the “Hey, Siri” feature was introduced in 2014, where recordings were apparently prompted without users ever saying the trigger words, “Hey, Siri.”

Sometimes Siri would be inadvertently activated, a whistleblower told The Guardian, when an Apple Watch was raised and speech was detected. The only clue that users seemingly had of Siri’s alleged spying was eerily accurate targeted ads that appeared after they had just been talking about specific items like Air Jordans or brands like Olive Garden, Reuters noted (claims which remain disputed).

[…]

It’s currently unknown how many customers were affected, but if the settlement is approved, the tech giant has offered up to $20 per Siri-enabled device for any customers who made purchases between September 17, 2014, and December 31, 2024. That includes iPhones, iPads, Apple Watches, MacBooks, HomePods, iPod touches, and Apple TVs, the settlement agreement noted. Each customer can submit claims for up to five devices.

A hearing at which the settlement could be approved is currently scheduled for February 14. If the settlement is certified, Apple will send notices to all affected customers. Through the settlement, customers can not only get monetary relief but also ensure that their private phone calls are permanently deleted.

While the settlement appears to be a victory for Apple users after months of mediation, it potentially lets Apple off the hook pretty cheaply. If the court had certified the class action and Apple users had won, Apple could’ve been fined more than $1.5 billion under the Wiretap Act alone, court filings showed.

But lawyers representing Apple users decided to settle, partly because data privacy law is still a “developing area of law imposing inherent risks that a new decision could shift the legal landscape as to the certifiability of a class, liability, and damages,” the motion to approve the settlement agreement said. It was also possible that the class size could be significantly narrowed through ongoing litigation, if the court determined that Apple users had to prove their calls had been recorded through an incidental Siri activation—potentially reducing recoverable damages for everyone.

“The percentage of those who experienced an unintended Siri activation is not known,” the motion said. “Although it is difficult to estimate what a jury would award, and what claims or class(es) would proceed to trial, the Settlement reflects approximately 10–15 percent of Plaintiffs expected recoverable damages.”

Siri’s unintentional recordings were initially exposed by The Guardian in 2019, plaintiffs’ complaint said. That’s when a whistleblower alleged that “there have been countless instances of recordings featuring private discussions between doctors and patients, business deals, seemingly criminal dealings, sexual encounters and so on. These recordings are accompanied by user data showing location, contact details, and app data.”

[…]

Meanwhile, Google faces a similar lawsuit in the same district from plaintiffs represented by the same firms over its voice assistant, Reuters noted. A win in that suit could affect anyone who purchased “Google’s own smart home speakers, Google Home, Home Mini, and Home Max; smart displays, Google Nest Hub, and Nest Hub Max; and its Pixel smartphones” from approximately May 18, 2016 to today, a December court filing noted. That litigation likely won’t be settled until this fall.

Source: Siri “unintentionally” recorded private convos; Apple agrees to pay $95M – Ars Technica

Android will let you find unknown Bluetooth trackers instead of just warning you about them

The advent of Bluetooth trackers has made it a lot easier to find your bag or keys when they’re lost, but it has also put inconspicuous tracking tools in the hands of people who might misuse them. Apple and Google have both implemented tracker alerts to let you know if there’s an unknown Bluetooth tracker nearby, and now as part of a new update, Google is letting Android users actually locate those trackers, too.

The feature is one of two new tools Google is adding to Find My Device-compatible trackers. The first, “Temporarily Pause Location,” is what you’re supposed to enable when you first receive an unknown tracker notification. It blocks your phone from updating its location with trackers for 24 hours. The second, “Find Nearby,” helps you pinpoint where the tracker is if you can’t see it or easily hear it.

By clicking on an unknown tracker notification you’ll be able to see a map of where the tracker was last spotted moving with you. From there, you can play a sound to see if you can locate it (Google says the owner won’t be notified). If you can’t find it, Find Nearby will connect your phone to the tracker over Bluetooth and display a shape that fills in the closer you get to it.
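Google hasn’t published how the fill shape is computed, but BLE proximity features like this are typically driven by received signal strength (RSSI). A rough sketch under that assumption, with invented calibration constants:

```python
def estimate_distance_m(rssi, tx_power=-59, n=2.0):
    """Rough distance from BLE signal strength via the log-distance
    path-loss model. tx_power is the assumed RSSI at 1 m; n is an
    assumed path-loss exponent for the environment."""
    return 10 ** ((tx_power - rssi) / (10 * n))

def fill_fraction(rssi, far_rssi=-100, near_rssi=-40):
    """Map signal strength to a 0..1 'shape fill', roughly like the
    Find Nearby UI: the closer you get, the fuller the shape."""
    frac = (rssi - far_rssi) / (near_rssi - far_rssi)
    return max(0.0, min(1.0, frac))

fill_fraction(-90)  # weak signal: shape mostly empty
fill_fraction(-45)  # strong signal: shape nearly full
```

Real implementations smooth RSSI over time and calibrate per tracker model, since raw readings fluctuate wildly; this just shows the underlying idea.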

Image: The Find Nearby button and interface from Google’s Find My Device network (Google / Engadget)

The tool is identical to what Google offers for locating trackers and devices you actually own, but importantly, you don’t need to use Find My Device or have your own tracker to benefit. Like Google’s original notifications feature, any device running Android 6.0 and up can deal with unknown Bluetooth trackers safely.

Expanding Find Nearby seems like the final step Google needed to take to tamp down Bluetooth tracker misuse, something Apple already does with its Precision Finding tool for AirTags. The companies released a shared standard for spotting unknown Bluetooth trackers regardless of whether you use Android or iOS in May 2024, following the launch of Google’s Find My Device network in April. Both Google and Apple offered their own methods of dealing with unknown trackers before then to prevent trackers from being used for everything from robbery to stalking.

Source: Android will let you find unknown Bluetooth trackers instead of just warning you about them

Singapore to increase road capacity by GPS tracking all vehicles. Because location data is not sensitive and will never be hacked *cough*

Singapore’s Land Transport Authority (LTA) estimated last week that by tracking all vehicles with GPS it will be able to increase road capacity by 20,000 vehicles over the next few years.

The densely populated island state is moving from what it calls Electronic Road Pricing (ERP) 1.0 to ERP 2.0. The first version used gantries – automated toll points – to charge drivers a fee through an in-car device when they used specific roadways during certain hours.

ERP 2.0 instead tracks each vehicle through GPS, which can tell where the vehicle is at all times while it is operating.

“ERP 2.0 will provide more comprehensive aggregated traffic information and will be able to operate without physical gantries. We will be able to introduce new ‘virtual gantries,’ which allow for more flexible and responsive congestion management,” explained the LTA.
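A “virtual gantry” is essentially a geofenced road segment with a time window and a price attached. A minimal sketch of the idea (the boundaries, hours, and charges below are invented, not the LTA’s):

```python
# Hypothetical virtual gantries: a bounding box, charging hours, and a fee.
GANTRIES = [
    {"name": "CBD-North",
     "lat": (1.280, 1.290), "lon": (103.845, 103.855),
     "hours": range(7, 10), "charge_sgd": 3.0},
]

def toll_due(lat, lon, hour):
    """Sum charges for every virtual gantry a GPS fix falls inside
    during that gantry's charging hours."""
    total = 0.0
    for g in GANTRIES:
        in_box = (g["lat"][0] <= lat <= g["lat"][1]
                  and g["lon"][0] <= lon <= g["lon"][1])
        if in_box and hour in g["hours"]:
            total += g["charge_sgd"]
    return total

toll_due(1.285, 103.850, 8)   # → 3.0 (peak hour, inside the zone)
toll_due(1.285, 103.850, 14)  # → 0.0 (same spot, off-peak)
```

The flexibility the LTA touts comes from the fact that these zones and prices are just data, adjustable without erecting any physical structure. The privacy concern is the other side of the same coin: charging this way requires continuous position fixes from every car.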

But the island’s government doesn’t just control inflow into urban areas through toll-like charging – it also aggressively controls the total number of cars operating within its borders.

Singapore requires vehicle owners to bid for a limited number of Certificates of Entitlement (COEs) – costly operating permits valid for only ten years. Depending on that year’s COE price, this adds around SG$100,000 ($75,500) to a car’s usual price every ten years. The high total cost disincentivizes mass car ownership, which helps the government manage traffic and emissions.

[…]

Source: Singapore to increase road capacity by GPS tracking vehicles • The Register

Google changes Terms Of Service, now spies on your AI prompts

The new terms take effect on November 15th.

4.3 Generative AI Safety and Abuse. Google uses automated safety tools to detect abuse of Generative AI Services. Notwithstanding the “Handling of Prompts and Generated Output” section in the Service Specific Terms, if these tools detect potential abuse or violations of Google’s AUP or Prohibited Use Policy, Google may log Customer prompts solely for the purpose of reviewing and determining whether a violation has occurred. See the Abuse Monitoring documentation page for more information about how logging prompts impacts Customer’s use of the Services.

Source: Google Cloud Platform Terms Of Service

If You Ever Rented From Redbox, Your Private Info Is Up for Grabs

If you’ve ever opted to rent a movie through a Redbox kiosk, your private info is out there waiting for any tinkerer to get their hands on it. One programmer who reverse-engineered a kiosk’s hard drive proved the Redbox machines can cough up transaction histories featuring customers’ names, emails, and rentals going back nearly a decade. It may even have part of your credit card number stored on-device.

[…]

A California-based programmer named Foone Turing managed to grab an unencrypted file from the kiosk’s internal hard drive that revealed the emails, home addresses, and rental history of either a fraction or the whole of those who previously used the kiosk.

[…]

Turing told Lowpass that the Redbox stored some financial information on those drives, including the first six and last four digits of each credit card used and “some lower-level transaction details.” The devices did apparently connect to a secure payment system through Redbox’s servers, but the systems stored financial information on a log in a different folder than the rental records. She told us that it’s likely the system only stored the last month of transaction logs.
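First-six-plus-last-four is a standard card truncation pattern (it is the most of a card number that PCI DSS permits to be displayed). A sketch of what that masking looks like, with a made-up card number:

```python
def mask_pan(pan):
    """Keep the first six (the BIN) and last four digits of a card
    number, replacing the middle digits with asterisks."""
    digits = pan.replace(" ", "")
    return digits[:6] + "*" * (len(digits) - 10) + digits[-4:]

mask_pan("4111 1111 1111 1234")  # → "411111******1234"
```

Truncated numbers can’t be charged directly, but paired with names, emails, and addresses from the same drive they still make convincing raw material for phishing.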

[…]

Source: If You Ever Rented From Redbox, Your Private Info Is Up for Grabs

Which is a great illustration of why there need to be regulations about what happens to personal data when a company is sold or goes bust.

Face matching now available on GSA’s login.gov, but it still fails at least 10% of the time

The US government’s General Services Administration’s (GSA) facial matching login service is now generally available to the public and other federal agencies, despite its own recent report admitting the tech is far from perfect.

The GSA announced general availability of remote identity verification (RiDV) technology through login.gov, and the service’s availability to other federal government agencies yesterday. According to the agency, the technology behind the offering is “a new independently certified” solution that complies with the National Institute of Standards and Technology’s (NIST) 800-63 identity assurance level 2 (IAL2) standard.

IAL2 identity verification involves remote or in-person verification of a person’s identity via biometric data along with some physical element, such as an ID photograph or access to a cellphone number.

“This new IAL2-compliant offering adds proven one-to-one facial matching technology that allows Login.gov to confirm that a live selfie taken by a user matches the photo on a photo ID, such as a driver’s license, provided by the user,” the GSA said.
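One-to-one matching of this kind typically means comparing an embedding of the live selfie against an embedding of the ID photo and accepting only if the similarity clears a threshold – as opposed to searching the selfie against a whole gallery. A minimal sketch, with made-up vectors standing in for real face embeddings (the embedding network itself is out of scope):

```python
# Hypothetical 1:1 face verification: one selfie embedding versus one
# ID-photo embedding, no gallery search (that would be 1:N matching).
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def verify(selfie_embedding, id_embedding, threshold=0.8):
    """Accept only if the two embeddings are similar enough."""
    return cosine_similarity(selfie_embedding, id_embedding) >= threshold

# Toy vectors; real systems use high-dimensional embeddings from a
# trained face-recognition model, and the threshold sets the trade-off
# between false accepts and false rejects.
same_person = verify([0.9, 0.1, 0.3], [0.88, 0.12, 0.31])
different   = verify([0.9, 0.1, 0.3], [0.1, 0.9, 0.2])
print(same_person, different)  # True False
```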

The Administration noted that the system doesn’t use “one-to-many” face matching technology to compare users to others in its database, and doesn’t use the images for any purpose other than verifying a user’s identity.

[…]

In a report issued by the GSA’s Office of the Inspector General in early 2023, the Administration was called out for claiming it had implemented IAL2-level identity verification as early as 2018 while never actually meeting the standard’s requirements.

“GSA knowingly billed customer agencies over $10 million for services, including alleged IAL2 services that did not meet IAL2 standards,” the report claimed.

[…]

Fast forward to October of last year, and the GSA said it was embracing facial recognition tech on login.gov with plans to test it this year – a process it began in April. Since then, however, the GSA has published pre-press findings of a study it conducted of five RiDV technologies, finding that they’re still largely unreliable.

The study anonymized the results of the five products, making it unclear which were included in the final pool or how any particular one performed. Generally, however, the report found that the best-performing product still failed 10 percent of the time, and the worst had a false negative rate of 50 percent, meaning its ability to properly match a selfie to a government ID was no better than chance.
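To make the jargon concrete: the false negative rate is the share of genuine users a verifier wrongly rejects. A toy calculation with invented counts:

```python
# False negative rate (FNR): genuine verification attempts that the
# system wrongly rejected. Counts below are invented for illustration.

def false_negative_rate(false_negatives: int, true_positives: int) -> float:
    """FNR = FN / (FN + TP)."""
    return false_negatives / (false_negatives + true_positives)

# The report's worst performer: half of all genuine users rejected,
# i.e. no better than flipping a coin.
print(false_negative_rate(500, 500))  # 0.5

# The best performer still turned away one in ten genuine users.
print(false_negative_rate(100, 900))  # 0.1
```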

Higher rejection rates for people with darker skin tones were also noted in one product, while another was more accurate for people of AAPI descent, but less accurate for everyone else – hardly the equitability the GSA said it wanted in an RiDV product last year.

[…]

It’s unclear what solution has been deployed for use on login.gov. The only firm we can confirm has been involved through the process is LexisNexis, which previously acknowledged to The Register that it has worked with the GSA on login.gov for some time.

That said, LexisNexis’ CEO for government risk solutions told us recently that he’s not convinced the GSA’s focus on adopting IAL2 RiDV solutions at the expense of other biometric verification methods is the best approach.

“Any time you rely on a single tool, especially in the modern era of generative AI and deep fakes … you are going to have this problem,” Haywood “Woody” Talcove told us during a phone interview last month. “I don’t think NIST has gone far enough with this workflow.”

Talcove told us that facial recognition is “pretty easy to game,” and said he wants a multi-layered approach – one that it looks like GSA has declined to pursue given how quickly it’s rolling out a solution.

“What this study shows is that there’s a level of risk being injected into government agencies completely relying on one tool,” Talcove said. “We’ve gotta go further.”

Along with asking the GSA for more details about its chosen RiDV solution, we also asked for some data about its performance. We didn’t get an answer to that question, either.

Source: Face matching now available on GSA’s login.gov • The Register

23andMe is on the brink. What happens to all that genetic DNA data?

[…] The one-and-done nature of Wiles’ experience is indicative of a core business problem with the once high-flying biotech company that is now teetering on the brink of collapse. Wiles and many of 23andMe’s 15 million other customers never returned. They paid once for a saliva kit, then moved on.

Shares of 23andMe are now worth pennies. The company’s valuation has plummeted 99% from its $6 billion peak shortly after the company went public in 2021.

As 23andMe struggles for survival, customers like Wiles have one pressing question: What is the company’s plan for all the data it has collected since it was founded in 2006?

[…]

Andy Kill, a spokesperson for 23andMe, would not comment on what the company might do with its trove of genetic data beyond general pronouncements about its commitment to privacy.

[…]

When signing up for the service, about 80% of 23andMe’s customers have opted in to having their genetic data analyzed for medical research.

[…]

The company has an agreement with pharmaceutical giant GlaxoSmithKline, or GSK, that allows the drugmaker to tap the tech company’s customer data to develop new treatments for disease.

Anya Prince, a law professor at the University of Iowa’s College of Law who focuses on genetic privacy, said those worried about their sensitive DNA information may not realize just how few federal protections exist.

For instance, the Health Insurance Portability and Accountability Act, also known as HIPAA, does not apply to 23andMe since it is a company outside of the health care realm.

[…]

According to the company, all of its genetic data is anonymized, meaning there is no way for GSK, or any other third party, to connect the sample to a real person. That, however, could make it nearly impossible for a customer to renege on their decision to allow researchers to access their DNA data.

“I couldn’t go to GSK and say, ‘Hey, my sample was given to you — I want that taken out — if it was anonymized, right? Because they’re not going to re-identify it just to pull it out of the database,” Prince said.

[…]

the patchwork of state laws governing DNA data makes the genetic data of millions potentially vulnerable to being sold off, or even mined by law enforcement.

“Having to rely on a private company’s terms of service or bottom line to protect that kind of information is troubling — particularly given the level of interest we’ve seen from government actors in accessing such information during criminal investigations,” Eidelman said.

She points to how investigators used a genealogy website to identify the man known as the Golden State Killer, and how police homed in on an Idaho murder suspect by turning to similar databases of genetic profiles.

“This has happened without people’s knowledge, much less their express consent,” Eidelman said.

[…]

Last year, the company was hit with a major data breach that it said affected 6.9 million customer accounts, including about 14,000 who had their passwords stolen.

[…]

Some analysts predict that 23andMe could go out of business by next year, barring a bankruptcy proceeding that could potentially restructure the company.

[…]

Source: What happens to all of 23andMe’s genetic DNA data? : NPR

For more fun reading about this clusterfuck of a company and why giving away DNA data is a spectacularly bad idea:

License Plate Readers Are Creating a US-Wide Database of Cars – and political affiliation, Planned Parenthood and more

At 8:22 am on December 4 last year, a car traveling down a small residential road in Alabama used its license-plate-reading cameras to take photos of vehicles it passed. One image, which does not contain a vehicle or a license plate, shows a bright red “Trump” campaign sign placed in front of someone’s garage. In the background is a banner referencing Israel, a holly wreath, and a festive inflatable snowman.

Another image taken on a different day by a different vehicle shows a “Steelworkers for Harris-Walz” sign stuck in the lawn in front of someone’s home. A construction worker, with his face unblurred, is pictured near another Harris sign. Other photos show Trump and Biden (including “Fuck Biden”) bumper stickers on the back of trucks and cars across America.

[…]

These images were generated by AI-powered cameras mounted on cars and trucks, initially designed to capture license plates, but which are now photographing political lawn signs outside private homes, individuals wearing T-shirts with text, and vehicles displaying pro-abortion bumper stickers—all while recording the precise locations of these observations.

[…]

The detailed photographs all surfaced in search results produced by the systems of DRN Data, a license-plate-recognition (LPR) company owned by Motorola Solutions. The LPR system can be used by private investigators, repossession agents, and insurance companies; a related Motorola business, called Vigilant, gives cops access to the same LPR data.

[…]

those with access to the LPR system can search for common phrases or names, such as those of politicians, and be served with photographs where the search term is present, even if it is not displayed on license plates.

[…]

“I searched for the word ‘believe,’ and that is all lawn signs. There’s things just painted on planters on the side of the road, and then someone wearing a sweatshirt that says ‘Believe.’” Weist says. “I did a search for the word ‘lost,’ and it found the flyers that people put up for lost dogs and cats.”

Beyond highlighting the far-reaching nature of LPR technology, which has collected billions of images of license plates, the research also shows how people’s personal political views and their homes can be recorded into vast databases that can be queried.

[…]

Over more than a decade, DRN has amassed more than 15 billion “vehicle sightings” across the United States, and it claims in its marketing materials that it amasses more than 250 million sightings per month.

[…]

The system is partly fueled by DRN “affiliates” who install cameras in their vehicles, such as repossession trucks, and capture license plates as they drive around. Each vehicle can have up to four cameras attached to it, capturing images from all angles. These affiliates earn monthly bonuses and can also receive free cameras and search credits.

In 2022, Weist became a certified private investigator in New York State. In doing so, she unlocked the ability to access the vast array of surveillance software accessible to PIs. Weist could access DRN’s analytics system, DRNsights, as part of a package through investigations company IRBsearch. (After Weist published an op-ed detailing her work, IRBsearch conducted an audit of her account and discontinued it.)

[…]

While not linked to license plate data, one law enforcement official in Ohio recently said people should “write down” the addresses of people who display yard signs supporting Vice President Kamala Harris, the 2024 Democratic presidential nominee, exemplifying how a searchable database of citizens’ political affiliations could be abused.

[…]

In 2022, WIRED revealed that hundreds of US Immigration and Customs Enforcement employees and contractors were investigated for abusing similar databases, including LPR systems. The alleged misconduct in both reports ranged from stalking and harassment to sharing information with criminals.

[…]

 

Source: License Plate Readers Are Creating a US-Wide Database of More Than Just Cars | WIRED

Insecure Robot Vacuums From Chinese Company Deebot Collect Photos and Audio to Train Their AI

Ecovacs robot vacuums, which have been found to suffer from critical cybersecurity flaws, are collecting photos, videos and voice recordings — taken inside customers’ houses — to train the company’s AI models.

The Chinese home robotics company, which sells a range of popular Deebot models in Australia, said its users are “willingly participating” in a product improvement program.

When users opt into this program through the Ecovacs smartphone app, they are not told what data will be collected, only that it will “help us strengthen the improvement of product functions and attached quality”. Users are instructed to click “above” to read the specifics, however there is no link available on that page.

Ecovacs’s privacy policy — available elsewhere in the app — allows for blanket collection of user data for research purposes, including:

– The 2D or 3D map of the user’s house generated by the device
– Voice recordings from the device’s microphone
– Photos or videos recorded by the device’s camera

“It also states that voice recordings, videos and photos that are deleted via the app may continue to be held and used by Ecovacs…”

Source: Insecure Robot Vacuums From Chinese Company Deebot Collect Photos and Audio to Train Their AI

Dutch oppose Hungary’s approach to EU child sexual abuse regulation – or total surveillance of every smart device

The Netherlands’ government and opposition are both against the latest version of the controversial EU regulation aimed at detecting online child sexual abuse material (CSAM), according to an official position and an open letter published on Tuesday (1 October).

The regulation, aimed at detecting online CSAM, has been criticised for potentially allowing the scanning of private messages on platforms such as WhatsApp or Gmail.

However, the latest compromise text, dated 9 September, limits detection to known material, among other changes. ‘Known’ material refers to content that has already been circulating and detected, in contrast to ‘new’ material that has not yet been identified.
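In practice, detecting “known” material generally means matching a fingerprint (hash) of an uploaded file against a database of hashes of previously identified content. Real systems use perceptual hashes such as PhotoDNA so they also catch slightly altered copies; this stdlib-only sketch with invented fingerprints is purely illustrative of the distinction:

```python
# Illustrative "known material" check: hash an upload and look it up in a
# database of fingerprints of previously identified files. "New" material,
# by definition, has no entry in such a database and cannot be detected
# this way -- which is why limiting detection to known material narrows
# the scope of scanning.
import hashlib

KNOWN_HASHES = {
    # Hypothetical fingerprint of a previously identified file.
    hashlib.sha256(b"previously-identified-file").hexdigest(),
}

def is_known_material(file_bytes: bytes) -> bool:
    """Flag only exact matches against the known-hash database."""
    return hashlib.sha256(file_bytes).hexdigest() in KNOWN_HASHES

print(is_known_material(b"previously-identified-file"))  # True
print(is_known_material(b"some-new-upload"))             # False
```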

The Hungarian presidency of the Council of the EU shared a partial general approach, dated 24 September and seen by Euractiv, that mirrors the 9 September text but reduces the reevaluation period from five years to three for grooming and new CSAM.

Limiting detection to known material would curb authorities’ ability to surveil massive amounts of communications, suggesting the change is likely an attempt to address privacy concerns.

The Netherlands initially supported the proposal to limit detection to ‘known’ material but withdrew its support in early September, Euractiv reported.

On Tuesday (1 October), Amsterdam officially took a stance against the general approach, despite speculation last week suggesting the country might shift its position in favour of the regulation.

This is also despite the Dutch mostly maintaining that their primary concern lies with combating known CSAM – a focus that aligns with the scope of the latest proposal.

According to various statistics, the Netherlands hosts a significant amount of CSAM.

The Dutch had been considering supporting the proposal, or at least a “silent abstention” that might have weakened the blocking minority, signalling a shift since Friday (27 September), a source close to the matter told Euractiv.

While a change in the Netherlands’ stance could have affected the blocking minority in the EU Council, their current position now strengthens it.

If the draft law were to pass in the EU Council, the next stage would be interinstitutional negotiations, called trilogues, between the European Parliament, the Council of the EU, and the Commission to finalise the legislation.

Both the Dutch government and the opposition are against supporting the new partial general approach.

Opposition party GroenLinks-PvdA (Greens/EFA) published an open letter, also on Tuesday, backed by a coalition of national and EU-based private and non-profit organisations, urging the government to vote against the proposal.

According to the letter, the regulation will be discussed at the Justice and Home Affairs Council on 11 October, with positions coordinated among member states on 2 October.

Currently, an interim regulation allows companies to detect and report online CSAM voluntarily. Originally set to expire in 2024, this measure has been extended to 2026 to avoid a legislative gap, as the draft for a permanent law has yet to be agreed.

The Dutch Secret Service opposed the draft regulation because “introducing a scan application on every mobile phone” with infrastructure to manage the scans would be a complex and extensive system that would introduce risks to digital resilience, according to a decision note.

Source: Dutch oppose Hungary’s approach to EU child sexual abuse regulation – Euractiv

To find out more about how invasive the proposed scanning feature is, look through the articles here: https://www.linkielist.com/?s=csam

Ford wants to listen in on you in your car to serve you ads as much as possible


Someday soon, if Ford has its way, drivers and passengers may be bombarded with infotainment ads tailored to their personal and vehicle data.

This sure-to-please-everyone idea comes via a patent application [PDF] filed by Ford Global Technologies late last month that proposes displaying ads to drivers based on their destination, route, who’s in the car, and various other data points able to be collected by modern vehicles.

According to the patent application, infotainment advertising could be varied depending on the situation and user feedback. In one example, Ford supposes showing a visual ad to passengers every 10 minutes while on the highway, and if someone responds positively to audio ads, the system could ramp up the frequency, playing audio ads every five minutes.
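The frequency logic described in the filing – a fixed interval between ads that shortens when a rider responds positively – can be sketched in a few lines. This is a hypothetical reading of the patent text, not Ford’s implementation:

```python
# Hypothetical ad-frequency scheduler per the patent's example: visual ads
# every 10 minutes on the highway, ramping to every 5 minutes when a user
# responds positively -- the "maximize revenue, minimize annoyance"
# trade-off the filing admits it must manage.

class AdScheduler:
    def __init__(self, interval_minutes=10, min_interval=5):
        self.interval = interval_minutes
        self.min_interval = min_interval
        self.last_ad_at = None

    def record_feedback(self, positive: bool):
        # Positive engagement ramps frequency up (shorter interval).
        if positive:
            self.interval = max(self.min_interval, self.interval - 5)

    def should_play(self, now_minutes: float) -> bool:
        # Play an ad if none has played yet, or the interval has elapsed.
        if self.last_ad_at is None or now_minutes - self.last_ad_at >= self.interval:
            self.last_ad_at = now_minutes
            return True
        return False

sched = AdScheduler()
print(sched.should_play(0))   # True  (first ad)
print(sched.should_play(5))   # False (10-minute interval not elapsed)
sched.record_feedback(True)   # positive response: interval drops to 5
print(sched.should_play(5))   # True
```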

Of course, simply playing more ads might frustrate people, which Ford seems to understand because the pending patent notes it would have to account for “a user’s natural inclination to seek minimal or no ads.”

In order to assure advertisers that user preference is ultimately circumvented, Ford said its proposed infotainment system would be designed to “intelligently schedule variable durations of ads, with playing time seeking to maximize company revenue while minimizing the impact on user experience.”

The system would also be able to listen to conversations so it could serve ads during lulls in chatter, ostensibly to be less intrusive while being anything but.

Given the rush by some automakers to turn their vehicles into subscription-based cars-as-a-service, egged on by the chip world, we’re not surprised by efforts to wring more money out of motorists, this time with adverts. We assume patent filings similar to Ford’s have been made.

Trust us!

Then there’s the fact that automakers aren’t terrific on privacy and safeguarding the kinds of info that are used to tailor ads. In September last year, Mozilla published a report on the privacy policies of several automakers whose connected vehicles harvest information about owners, finding that 25 major manufacturers – Ford among them – failed to live up to the Firefox maker’s standards.

Just a couple of months later, a Washington state appeals court ruled it was perfectly legal for vehicles to harvest text and call data from connected smartphones and store it all in memory.

US senators have urged the FTC to investigate several car makers for allegedly selling customer data unlawfully, though we note Ford is not among the companies accused in that matter.

That said, the patent application makes no mention of how the automaker would protect user data used to serve in-vehicle ads. A couple of other potentially privacy-infringing Ford patents from the past year are worth mentioning, too.

The ideas within a patent application should not be viewed as an indication of our product plans

In 2023, Ford filed a patent application for an embedded vehicle system that would automate vehicle repossession if car payments weren’t made. Over the summer, another application described a system in which vehicles monitor each other’s speeds; if one detects a nearby car speeding, it could snap photos using onboard cameras and send the images, along with speed data, directly to police or roadside monitors. Neither has privacy advocates thrilled.

Bear in mind that neither of those patents may ever see production, and this advertising one might not make it past the “let’s file this patent before the competition just in case” stage of life, either. That’s essentially what Ford told us.

“Submitting patent applications is a normal part of any strong business as the process protects new ideas and helps us build a robust portfolio of intellectual property,” a Ford spokesperson told The Register. “The ideas described within a patent application should not be viewed as an indication of our business or product plans.”

Ford also said it always puts customers first in development of new products and services, though didn’t directly answer questions about a lack of privacy assurances in the patent application. In any case, it may not actually happen. Until it does.

Source: Who wants in-car ads tailored to your journey, passengers? • The Register

Resistance to Hungarian presidency’s new push for child sexual abuse prevention regulation – because it’s a draconian spying law asking for 100% coverage of digital comms

Resistance to the Hungarian presidency’s approach to the EU’s draft law to combat online child sexual abuse material (CSAM) was still palpable during a member states’ meeting on Wednesday (4 September).

The Hungarian presidency of the Council of the EU aims to secure consensus on the proposed law to combat online child sexual abuse material (CSAM) by October, according to an EU diplomat and earlier reports by Politico.

Hungary has prepared a compromise note on the draft law, also reported by Contexte.

The note, presented at a meeting of ambassadors on Wednesday, seeks political guidance to make progress at the technical level, the EU diplomat told Euractiv.

With the voluntary regime expiring in mid-2026, most member states agree that urgent action is needed, the diplomat continued.

But some member states are still resistant to the Hungarian presidency’s latest approach.

The draft law to detect and remove online child sexual abuse material (CSAM) was removed from the agenda of Thursday’s (20 June) meeting of the Committee of Permanent Representatives (COREPER), who were supposed to vote on it.

Sources close to the matter told Euractiv that Poland and Germany remain opposed to the proposal, with smaller member states also voicing concerns, potentially forming a blocking minority.

Although France and the Netherlands initially supported the proposal, the Netherlands has since withdrawn its support, and Italy has indicated that the new proposal is moving in the right direction.

As a result, no agreement was reached to move forward.

Currently, an interim regulation allows companies to voluntarily detect and report online CSAM. Originally set to expire in 2024, this measure has been extended to 2026 to avoid a legislative gap, as the draft for a permanent law has yet to be agreed.

Hungary is expected to introduce a concrete textual proposal soon. The goal is to agree on a general approach, a fully agreed position among member states that serves as the basis for negotiations with the European Parliament, by October, the EU diplomat said.

Meanwhile, the European Commission is preparing to send a detailed opinion to Hungary regarding the draft law, expected by 30 September, Contexte reported on Wednesday.

[…]

In the text, the presidency also suggested extending the temporary exemption from certain provisions of the ePrivacy Directive, which governs privacy and electronic communications, for new CSAM and grooming.

[…]

Source: Resistance lingers to Hungarian presidency’s new push for child sexual abuse prevention regulation – Euractiv

See also:

The EU Commission’s Alleged CSAM Regulation ‘Experts’ giving them free rein to spy on everyone: can’t be found. OK then.

EU delays decision over continuous spying on all your devices *cough* scanning encrypted messages for kiddie porn

Signal, MEPs urge EU Council to drop law that puts a spy on everyone’s devices

European human rights court says backdooring encrypted comms is against human rights

EU Commission’s nameless experts behind its “spy on all EU citizens” *cough* “child sexual abuse” law

EU Tries to Implement Client-Side Scanning, death to encryption By Personalised Targeting of EU Residents With Misleading Ads

 

Dutch DPA fines Clearview €30.5 million for violating the GDPR

Clearview AI is back in hot — and expensive — water, with the Dutch Data Protection Authority (DPA) fining the company €30.5 million ($33.6 million) for violating the General Data Protection Regulation (GDPR). The release explains that Clearview created “an illegal database with billions of photos of faces,” including Dutch individuals, and has failed to properly inform people that it’s using their data. In early 2023, Clearview’s CEO claimed the company had 30 billion images.

Clearview must immediately stop all violations or face up to €5.1 million ($5.6 million) in non-compliance penalties. “Facial recognition is a highly intrusive technology, that you cannot simply unleash on anyone in the world,” Dutch DPA chairman Aleid Wolfsen stated. “If there is a photo of you on the Internet — and doesn’t that apply to all of us? — then you can end up in the database of Clearview and be tracked.” He adds that facial recognition can help with safety but that “competent authorities” who are “subject to strict conditions” should handle it rather than a commercial company.

The Dutch DPA further states that since Clearview is breaking the law, using it is also illegal. Wolfsen warns that Dutch companies using Clearview could also be subject to “hefty fines.” Clearview didn’t issue an objection to the Dutch DPA’s fine, so it is unable to launch an appeal.

This fine is far from the first time an entity has stood up against Clearview. In 2020, the LAPD banned its use, and the American Civil Liberties Union (ACLU) sued Clearview, with the settlement ending sales of the biometric database to any private companies. Italy and the UK have previously fined Clearview €20 million ($22 million) and £7.55 million ($10 million), respectively, and instructed the company to delete any data of its residents. Earlier this year, the EU also barred Clearview from untargeted face scraping on the internet.

Source: Clearview faces a €30.5 million fine for violating the GDPR

Proposal to spy on all chat messages back on European political agenda

Europe is going to talk again about the possibility of checking all chat messages of citizens for child abuse. On September 4, a (secret) consultation will take place, says Patrick Breyer, former MEP for the Pirate Party.

A few years ago, the European Commission came up with the plan to monitor all chat messages of citizens. The European Parliament did not like the proposal of the European Commission and came up with its own proposal, which excludes monitoring of end-to-end encrypted services.

At the end of June, Belgium, then holder of the EU Council presidency, came up with its own version of the proposal: only the uploading of photos, videos and references to them would be checked. This proposal did not get enough votes.

Germany and Poland are the biggest opponents within the EU anyway. The Netherlands, Estonia, Slovenia, the Czech Republic and Austria would abstain from voting, according to Breyer.

A coalition of almost fifty civil society organisations, including the Dutch Offlimits, Bits of Freedom, Vrijschrift.org and ECNL, called on the European Commission in July to withdraw the chat control proposal and focus on measures that really protect children.

Source: Proposal to control chat messages back on European political agenda – Emerce

Guys, stop trying to be Big Brother in the EU – it changes how people behave and not for the better.

Mozilla removes telemetry service Adjust from mobile Firefox versions – turns out it was secretly spying on you

Mozilla will soon remove the telemetry service Adjust from the Android and iOS versions of its Firefox and Firefox Focus browsers. It turned out the company was collecting data on the effectiveness of Firefox ad campaigns without disclosing that.

Mozilla, the developers of Firefox, until recently used the telemetry service Adjust to collect data from its Firefox and Firefox Focus apps for both Android and iOS. Through this service, the company collected data on the number of installs of these specific apps following Mozilla’s ad campaigns.

[…]

The company’s actions may also result from previous complaints about the default enabling of ‘privacy-protecting ad metrics’ in Firefox. This option has been enabled by default since the July 9 release of Firefox 128.

The service collects data on how users respond to ads, which is shared with advertisers in aggregated form. Users can disable this option, however.

Mozilla says it regrets enabling such telemetry but defends the reason for turning it on by default. According to the browser provider, advertisers’ desire for information about the effectiveness of their campaigns is very difficult to escape.

[…]

Source: Mozilla removes telemetry service Adjust from mobile Firefox versions – Techzine Global

Oh dear. And I thought that Mozilla was the privacy friendly option. 2 strikes now.

Australian Regulators Decide To Write A Strongly Worded Letter About Clearview’s Privacy Law Violations, leave it at that

Clearview’s status as an international pariah really hasn’t changed much over the past few years. It may be generating fewer headlines, but nothing’s really changed about the way it does business.

Clearview has spent years scraping the web, compiling as much personal info as possible to couple with the billions of photos it has collected. It sells all of this to whoever wants to buy it. In the US, this means lots and lots of cop shops. Also, in the US, Clearview has mostly avoided running into a lot of legal trouble, other than a couple of lawsuits stemming from violations of certain states’ privacy laws.

Elsewhere in the world, it’s a different story. It has amassed millions in fines and plenty of orders to exit multiple countries immediately. These orders also mandate the removal of photos and other info gathered from accounts of these countries’ residents.

It doesn’t appear Clearview has complied with many of these orders, much less paid any of the fines. Clearview’s argument has always been that it’s a US company and, therefore, isn’t subject to rulings from foreign courts or mandates from foreign governments. It also appears Clearview might not be able to pay these fines if forced to, considering it’s now offering lawsuit plaintiffs shares in the company, rather than actual cash, to fulfill its settlement obligations.

Australia is one of several countries that claimed Clearview routinely violated privacy laws. Australia is also one of several that told Clearview to get out. Clearview’s response to the allegations and mandates delivered by Australian privacy regulators was the standard boilerplate: we don’t have offices in Australia, so we’re not going to comply with your demands.

Perhaps it’s this international stalemate that has prompted the latest bit of unfortunate news on the Clearview-Australia front. The Office of the Australian Information Commissioner (OAIC) has issued a statement that basically says it’s not going to waste any more time and money trying to get Clearview to respect Australia’s privacy laws. (h/t The Conversation)

Before giving up, the OAIC has this to say about its findings:

That determination found that Clearview AI, through its collection of facial images and biometric templates from individuals in Australia using a facial recognition technology, contravened the Privacy Act, and breached several Australian Privacy Principles (APPs) in Schedule 1 of the Act, including by collecting the sensitive information of individuals without consent in breach of APP 3.3 and failing to take reasonable steps to implement practices, procedures and systems to comply with the APPs.

Notably, the determination found that Clearview AI indiscriminately collected images of individuals’ faces from publicly available sources across the internet (including social media) to store in a database on the organisation’s servers. 

This was followed by the directive ordering Clearview to stop doing business in the country and delete any data it held pertaining to Australian residents. The statement notes Clearview’s only responses were a.) challenging the order in court in 2021 and b.) withdrawing entirely from the proceedings two years later. The OAIC notes that nothing appears to have changed in terms of how Clearview handles its collections. It also says it has no reason to believe Clearview has stopped collecting Australian persons’ data.

Despite all of that, it has decided to do absolutely nothing going forward:

Privacy Commissioner Carly Kind said, “I have given extensive consideration to the question of whether the OAIC should invest further resources in scrutinising the actions of Clearview AI, a company that has already been investigated by the OAIC and which has found itself the subject of regulatory investigations in at least three jurisdictions around the world as well as a class action in the United States. Considering all the relevant factors, I am not satisfied that further action is warranted in the particular case of Clearview AI at this time.

That’s disappointing. It makes it clear the company can avoid being held accountable for its legal violations simply by refusing to honor mandates issued by foreign countries or to pay any fines levied. It can just continue to be the awful, ethically horrendous company it has always been because, sooner or later, regulators are going to give up and move on to softer targets.

[…]

Source: Australian Regulators Decide To Do Absolutely Nothing About Clearview’s Privacy Law Violations | Techdirt

Dutch officials fine Uber €290M for GDPR violations

Privacy authorities in the Netherlands have imposed a €290 million ($324 million) fine on ride-share giant Uber for sending driver data to servers in the United States – “a serious violation” of the EU’s General Data Protection Regulation (GDPR).

According to the Dutch Data Protection Authority (DPA), Uber spent years sending sensitive driver information from Europe to the US. Among the data that was transmitted were taxi licenses, location data, payment details, identity documents, and medical and criminal records. The data was sent abroad without the use of “transfer tools,” which the DPA said means the data wasn’t sufficiently protected.

“Businesses are usually obliged to take additional measures if they store personal data of Europeans outside the European Union,” Dutch DPA chairman Aleid Wolfsen said of the decision. “Uber did not meet the requirements of the GDPR to ensure the level of protection to the data with regard to transfers to the US. That is very serious.”

The Dutch DPA said that the investigation that led to the fine began after complaints from a group of more than 170 French Uber drivers who alleged their data was being sent to the US without adequate protection. Because Uber’s European operations are based in the Netherlands, enforcement for GDPR violations fell to the Dutch DPA.

Unfortunately for Uber, it already has an extensive history with the Dutch DPA, which has fined the outfit twice before.

The first came in 2018, when the authority fined Uber €600,000 for failing to report a data breach (a slugfest that several EU countries joined in on). The second, a €10 million fine, came earlier this year after Dutch officials determined Uber had failed to disclose its data retention practices for EU drivers’ data, had refused to name which countries the data was sent to, and had obstructed its drivers’ right to privacy.

[…]

The uncertainty Uber refers to stems from the EU’s striking down of the EU-US Privacy Shield agreement and the years of efforts to replace it with a new rule that defines the safe transfer of personal data between the two regions.

Uber claims it’s done its job under the GDPR to safeguard data belonging to European citizens – it says it didn’t even need to make any data transfer process changes to comply with the latest rules.

[…]

Source: Dutch officials fine Uber €290M for GDPR violations • The Register

Texas AG Latest To Sue GM For Covertly Selling Driver Data To Insurance Companies

Last year Mozilla released a report showcasing how the auto industry has some of the worst privacy practices of any tech industry in America (no small feat). Massive amounts of driver behavior data are collected by your car, and even more is hoovered up from your smartphone every time you connect. This data isn’t secured, often isn’t encrypted, and is sold to a long list of dodgy, unregulated middlemen.

Last March the New York Times revealed that automakers like GM routinely sell access to driver behavior data to insurance companies, which then use that data to justify jacking up your rates. The practice isn’t clearly disclosed to consumers, and has resulted in 11 federal lawsuits in less than a month.

Now Texas AG Ken Paxton has belatedly joined the fun, filing suit (press release, complaint) in the state district court of Montgomery County against GM for “false, deceptive, and misleading business practices”:

“Companies are using invasive technology to violate the rights of our citizens in unthinkable ways. Millions of American drivers wanted to buy a car, not a comprehensive surveillance system that unlawfully records information about every drive they take and sells their data to any company willing to pay for it.”

Paxton notes that GM’s tracking impacted 1.8 million Texans and 14 million vehicles, few if any of whom understood they were signing up to be spied on by their vehicle. This is, amazingly enough, the first state lawsuit against an automaker for privacy violations, according to Politico.

The sales pitch for this kind of tracking and data sales is that good drivers will be rewarded with lower rates for more careful driving. But everybody in this chain, from publicly traded insurance companies to automakers, is financially disincentivized from giving anybody a consistent break for good behavior. That’s just not how it’s going to work. Everybody pays more and more. Always.

But GM and other automakers’ primary problem is that they weren’t telling consumers this kind of tracking was even happening in any clear, direct way. Usually it’s buried deep in an unread end user agreement for roadside assistance apps and related services. Those services usually involve a free trial, but the user’s agreement to data collection sticks around after the trial ends.

[…]

Source: Texas AG Latest To Sue GM For Covertly Selling Driver Data To Insurance Companies | Techdirt