Mozilla investigates 25 major car brands and finds their privacy practices shocking

[…]

The foundation, the Firefox browser maker’s netizen-rights org, assessed the privacy policies and practices of 25 automakers and found all failed its consumer privacy tests and thereby earned its Privacy Not Included (PNI) warning label.

If you care even a little about privacy, stay as far away from Nissan’s cars as you possibly can

In research published Tuesday, the org warned that manufacturers may collect and commercially exploit much more than location history, driving habits, in-car browser histories, and music preferences from today’s internet-connected vehicles. Instead, some makers may handle deeply personal data, such as – depending on the privacy policy – sexual activity, immigration status, race, facial expressions, weight, health, and even genetic information, the Mozilla team found.

Cars may collect at least some of that info about drivers and passengers using sensors, microphones, cameras, phones, and other devices people connect to their network-connected cars, according to Mozilla. And they collect even more info from car apps – such as Sirius XM or Google Maps – plus dealerships, and vehicle telematics.

Some car brands may then share or sell this information to third parties. Mozilla found 21 of the 25 automakers it considered say they may share customer info with service providers, data brokers, and the like, and 19 of the 25 say they can sell personal data.

More than half (56 percent) also say they share customer information with the government or law enforcement in response to a “request.” This isn’t necessarily a court-ordered warrant, and can also be a more informal request.

And some – like Nissan – may also use this private data to develop customer profiles that describe drivers’ “preferences, characteristics, psychological trends, predispositions, behavior, attitudes, intelligence, abilities, and aptitudes.”

Yes, you read that correctly. According to Mozilla’s privacy researchers, Nissan says it can infer how smart you are, then sell that assessment to third parties.

[…]

Nissan isn’t the only brand to collect information that seems completely irrelevant to the vehicle itself or the driver’s transportation habits.

“Kia mentions sex life,” Caltrider said. “General Motors and Ford both mentioned race and sexual orientation. Hyundai said that they could share data with government and law enforcement based on formal or informal requests. Car companies can collect even more information than reproductive health apps in a lot of ways.”

[…]

The Privacy Not Included team contacted Nissan and all of the other brands listed in the research: that’s Lincoln, Mercedes-Benz, Acura, Buick, GMC, Cadillac, Fiat, Jeep, Chrysler, BMW, Subaru, Dacia, Hyundai, Dodge, Lexus, Chevrolet, Tesla, Ford, Honda, Kia, Audi, Volkswagen, Toyota and Renault.

Only three – Mercedes-Benz, Honda, and Ford – responded, we’re told.

“Mercedes-Benz did answer a few of our questions, which we appreciate,” Caltrider said. “Honda pointed us continually to their public privacy documentation to answer our questions, but they didn’t clarify anything. And Ford said they discussed our request internally and made the decision not to participate.”

This makes Mercedes’ response to The Register a little puzzling. “We are committed to using data responsibly,” a spokesperson told us. “We have not received or reviewed the study you are referring to yet and therefore decline to comment to this specifically.”

A spokesperson for the four Fiat-Chrysler-owned brands (Fiat, Chrysler, Jeep, and Dodge) told us: “We are reviewing accordingly. Data privacy is a key consideration as we continually seek to serve our customers better.”

[…]

The Mozilla Foundation also called out consent as an issue some automakers have placed in a blind spot.

“I call this out in the Subaru review, but it’s not limited to Subaru: it’s the idea that anybody that is a user of the services of a connected car, anybody that’s in a car that uses services is considered a user, and any user is considered to have consented to the privacy policy,” Caltrider said.

Opting out of data collection is another concern.

Tesla, for example, appears to give users the choice between protecting their data or protecting their car. Its privacy policy does allow users to opt out of data collection but, as Mozilla points out, Tesla warns customers: “If you choose to opt out of vehicle data collection (with the exception of in-car Data Sharing preferences), we will not be able to know or notify you of issues applicable to your vehicle in real time. This may result in your vehicle suffering from reduced functionality, serious damage, or inoperability.”

While technically this does give users a choice, it also essentially says if you opt out, “your car might become inoperable and not work,” Caltrider said. “Well, that’s not much of a choice.”

[…]

Source: Mozilla flunks 25 major car brands for data privacy fails • The Register

Australian Government, Of All Places, Says Age Verification Is A Privacy & Security Nightmare

In the past I’ve sometimes described Australia as the land where internet policy is completely upside down. Rather than having a system that protects intermediaries from liability for third party content, Australia went the opposite direction. Rather than recognizing that a search engine merely links to content and isn’t responsible for the content at those links, Australia has said that search engines can be held liable for what they link to. Rather than protect the free expression of people on the internet who criticize the rich and powerful, Australia has extremely problematic defamation laws that result in regular SLAPP suits and suppression of speech. Rather than embrace encryption that protects everyone’s privacy and security, Australia requires companies to break encryption, insisting only criminals use it.

It’s basically been “bad internet policy central,” or the place where good internet policy goes to die.

And, yet, there are some lines that even Australia won’t cross. Specifically, the Australian eSafety Commission says that it will not require adult websites to use age verification tools, because it would put the privacy and security of Australians’ data at risk. (For unclear reasons, the Guardian does not provide the underlying documents, so we’re fixing that and providing both the original roadmap and the Australian government’s response.)

[…]

Of course, in France, the Data Protection authority released a paper similarly noting that age verification was a privacy and security nightmare… and the French government just went right on mandating the use of the technology. In Australia, the eSafety Commission pointed to the French concerns as a reason not to rush into the tech, meaning that Australia took the lessons from French data protection experts more seriously than the French government did.

And, of course, here in the US, the Congressional Research Service similarly found serious problems with age verification technology, but it hasn’t stopped Congress from releasing a whole bunch of “save the children” bills that are built on a foundation of age verification.

[…]

Source: Australian Government, Of All Places, Says Age Verification Is A Privacy & Security Nightmare | Techdirt

Companies are recording your conversations whilst you are on hold with them

Is Achmea or Bol.com customer service putting you on hold? Then everything you say can still be heard by some of their employees, according to research by Radar.

When you call customer service, you often hear: “Please note: this conversation may be recorded for training purposes.” Nothing special. But if you call the insurer Zilveren Kruis, you will also hear: “Note: Even if you are on hold, our quality employees can hear what you are saying.”

This is striking, because the Dutch Data Protection Authority states that recording customers while they are ‘on hold’ is not allowed. Companies are allowed to record the conversation itself, for example to conclude a contract or to improve their service.

Both mortgage provider Woonfonds and insurers Zilveren Kruis, De Friesland and Interpolis confirm that the recording tape continues to run if you are on hold with them, while this violates privacy rules.

Bol.com also continues to listen in on you while you are on hold, the webshop confirms. It gives the same reason for this: “It is technically not possible to temporarily stop the recording and start it again when the conversation starts again.” KLM, Ziggo, Eneco, Vattenfall, T-Mobile, Nationale Nederlanden, ASR, ING and Rabobank say they do not listen in on their customers while they are on hold.

Source: Various companies, including bol.com, record conversations while you are ‘on hold’ – Emerce

China floats rules for facial recognition technology – they are good, and would be great if the govt was bound by them too!

China has released draft regulations to govern the country’s facial recognition technology that include prohibitions on its use to analyze race or ethnicity.

According to the Cyberspace Administration of China (CAC), the purpose is to “regulate the application of face recognition technology, protect the rights and interests of personal information and other personal and property rights, and maintain social order and public safety” as outlined by a smattering of data security, personal information, and network laws.

The draft rules, which are open for comments until September 7, include some vague directives not to use face recognition technology to disrupt social order, endanger national security, or infringe on the rights of individuals and organizations.

The rules also state that facial recognition tech must be used only when there is a specific purpose and sufficient necessity, strict protection measures are taken, and only when non-biometric measures won’t do.

The rules require consent to be obtained before processing face information, except in cases where it’s not required – which The Reg assumes means individuals such as prisoners, and instances of national security. Parental or guardian consent is needed for those under the age of 14.

Building managers can’t require its use to enter and exit a property – they must provide alternative means of verifying personal identity for those who want them.

It also can’t be relied on for “major personal interests” such as social assistance and real estate disposal. For those, personal identity must be verified manually, with facial recognition used only as an auxiliary means.

And collecting images for internal management should only be done in a reasonably sized area.

In businesses like hotels, banks, airports, art galleries, and more, the tech should not be used to verify personal identity. If the individual chooses to link their identity to the image, they should be informed either verbally or in writing and provide consent.

Collecting images is also not allowed in private spaces like hotel rooms, public bathrooms, and changing rooms.

Furthermore, those using facial surveillance techniques must display reminder signs; personal images and identification information must be kept confidential, and only anonymized data may be saved.

Under the draft regs, those that store face information of more than 10,000 people must register with a local branch of the CAC within 30 working days.

Most interesting, however, is Article 11, which, when translated from Chinese via automated tools, reads:

No organization or individual shall use face recognition technology to analyze personal race, ethnicity, religion, sensitive personal information such as beliefs, health status, social class, etc.

The CAC does not say if the Chinese Communist Party counts as an “organization.”

Human rights groups have credibly asserted that Uyghurs are routinely surveilled using facial recognition technology, in addition to being incarcerated, required to perform forced labor, re-educated to abandon their beliefs and cultural practices, and may even be subjected to sterilization campaigns.

Just last month, physical security monitoring org IPVM reported it came into possession of a contract between China-based Hikvision and Hainan Province’s Chengmai County for $6 million worth of cameras that could detect whether a person was ethnically Uyghur using minority recognition technology.

Hikvision denied the report and said it last provided such functionality in 2018.

Beyond facilitating identification of Uyghurs, it’s clear the cat is out of the bag when it comes to facial recognition technology in China, used by government and businesses alike. Local police use it to track down criminals and its use feeds into China’s social credit system.

“‘Sky Net,’ a facial recognition system that can scan China’s population of about 1.4 billion people in a second, is being used in 16 Chinese cities and provinces to help police crackdown on criminals and improve security,” said state-sponsored media in 2018.

Regardless, the CAC said those violating the draft rules, once they are passed, would be held to criminal and civil liability.

Source: China floats rules for facial recognition technology • The Register

Reddit Wins, Doesn’t Have to NARC on Users Who Discussed Torrenting

This weekend, a federal court tossed a subpoena, in a case against the internet service provider Grande, that would have required Reddit to reveal the identities of anonymous users who discussed torrenting movies.

The case was originally filed in 2021 by 20 movie producers against Grande Communications in the Western District of Texas federal court. The lawsuit claims that Grande is liable for copyright infringement because it allegedly ignored the torrenting of 45 of the producers’ movies on its networks. As part of the case, the plaintiffs attempted to subpoena Reddit for IP addresses and user data for accounts that openly discussed torrenting on the platform. This weekend, Magistrate Judge Laurel Beeler denied the subpoena—meaning Reddit is off the hook.

“The plaintiffs thus move to compel Reddit to produce the identities of its users who are the subject of the plaintiffs’ subpoena,” Magistrate Judge Beeler wrote in her decision. “The issue is whether that discovery is permissible despite the users’ right to speak anonymously under the First Amendment. The court denies the motion because the plaintiffs have not demonstrated a compelling need for the discovery that outweighs the users’ First Amendment right to anonymous speech.”

Reddit fought off a similar subpoena in a similar lawsuit before the same judge back in May, as reported by Ars Technica. Reddit was asked to unmask eight users who were active in piracy threads on the platform, but the social media website raised the same First Amendment defense.

 

Source: Reddit Wins, Doesn’t Have to NARC on Users Who Discussed Torrenting

New privacy deal allows US tech giants to continue storing European user data on American servers

Nearly three years after a 2020 court decision threatened to grind transatlantic e-commerce to a halt, the European Union has adopted a plan that will allow US tech giants to continue storing data about European users on American soil. In a decision announced Monday, the European Commission approved the Trans-Atlantic Data Privacy Framework. Under the terms of the deal, the US will establish a court Europeans can engage with if they feel a US tech platform violated their data privacy rights. President Joe Biden announced the creation of the Data Protection Review Court in an executive order he signed last fall. The court can order the deletion of user data and impose other remedial measures. The framework also limits access to European user data by US intelligence agencies.

The Trans-Atlantic Data Privacy Framework is the latest chapter in a saga that is now more than a decade in the making. It was only earlier this year that the EU fined Meta a record-breaking €1.2 billion after it found that Facebook’s practice of moving EU user data to US servers violated the bloc’s digital privacy laws. The EU also ordered Meta to delete the data it already had stored on its US servers if the company didn’t have a legal way to keep that information there by the fall. As The Wall Street Journal notes, Monday’s agreement should allow Meta to avoid the need to delete any data, but the company may still end up paying the fine.

Even with a new agreement in place, it probably won’t be smooth sailing just yet for the companies that depend the most on cross-border data flows. Max Schrems, the lawyer who successfully challenged the previous Safe Harbor and Privacy Shield agreements that governed transatlantic data transfers before today, told The Journal he plans to challenge the new framework. “We would need changes in US surveillance law to make this work and we simply don’t have it,” he said. For what it’s worth, the European Commission says it’s confident it can defend its new framework in court.

Source: New privacy deal allows US tech giants to continue storing European user data on American servers | Engadget

Another problem is that the US side is not enshrined in law, but in a presidential decree, which can be revoked at any time.

Google Says It’ll Scrape Everything You Post Online for AI

Google updated its privacy policy over the weekend, explicitly saying the company reserves the right to scrape just about everything you post online to build its AI tools. If Google can read your words, assume they belong to the company now, and expect that they’re nesting somewhere in the bowels of a chatbot.

“Google uses information to improve our services and to develop new products, features and technologies that benefit our users and the public,” the new Google policy says. “For example, we use publicly available information to help train Google’s AI models and build products and features like Google Translate, Bard, and Cloud AI capabilities.”

Fortunately for history fans, Google maintains a history of changes to its terms of service. The new language amends an existing policy, spelling out new ways your online musings might be used in the tech giant’s AI tools.

[…]

This is an unusual clause for a privacy policy. Typically, these policies describe ways that a business uses the information that you post on the company’s own services. Here, it seems Google reserves the right to harvest and harness data posted on any part of the public web, as if the whole internet is the company’s own AI playground. Google did not immediately respond to a request for comment.

[…]

Source: Google Says It’ll Scrape Everything You Post Online for AI

The rest of the article goes into Gizmodo’s luddite War Against AI ™ language, unfortunately, because it misses the point that this is basically nothing new – Google has been able to use any information you type into any of their products for pretty much any purpose (eg advertising, email scanning, etc) for decades (which is why I don’t use Chrome). However it is something that most people simply don’t realise.

Sacramento Sheriff is sharing license plate reader data with anti-abortion states, records show

In 2015, Democratic Elk Grove Assemblyman Jim Cooper voted for Senate Bill 34, which restricted law enforcement from sharing automated license plate reader (ALPR) data with out-of-state authorities. In 2023, now-Sacramento County Sheriff Cooper appears to be doing just that.

The Electronic Frontier Foundation (EFF), a digital rights group, has sent Cooper a letter requesting that the Sacramento County Sheriff’s Office cease sharing ALPR data with out-of-state agencies that could use it to prosecute someone for seeking an abortion.

According to documents that the Sheriff’s Office provided EFF through a public records request, it has shared license plate reader data with law enforcement agencies in states that have passed laws banning abortion, including Alabama, Oklahoma and Texas.

[…]

Schwartz said that a sheriff in Texas, Idaho or any other state with an abortion ban on the books could use that data to track people’s movements around California, knowing where they live, where they work and where they seek reproductive medical care, including abortions.

The Sacramento County Sheriff’s Office isn’t the only one sharing that data; in May, EFF released a report showing that 71 law enforcement agencies in 22 California counties — including Sacramento County — were sharing such data. The practice is in violation of a 2015 law that states “a (California law enforcement) agency shall not sell, share, or transfer ALPR information, except to another (California law enforcement) agency, and only as otherwise permitted by law.”

[…]

 

Source: Sacramento Sheriff is sharing license plate reader data with anti-abortion states, records show

France Allows Police to Remotely Turn On GPS, Camera, Audio on Phones

Amidst ongoing protests in France, the country has just passed a new bill that will allow police to remotely access suspects’ cameras, microphones, and GPS on cell phones and other devices.

As reported by Le Monde, the bill has been criticized by the French people as a “snoopers’ charter” that allows police unfettered access to the location of its citizens. Moreover, police can activate cameras and microphones to take video and audio recordings of suspects. The bill will reportedly only apply to suspects in crimes that are punishable by a minimum of five years in jail.

[…]

French politicians added an amendment that requires a judge’s approval for any surveillance conducted under the scope of the bill and limits the duration of surveillance to six months.

[…]

In 2021, The New York Times reported that the French Parliament passed a bill that would expand the French police force’s ability to monitor civilians using drones. French President Emmanuel Macron argued at the time that the bill was meant to protect police officers from increasingly violent protestors.

[…]

 

Source: France Passes Bill Allowing Police to Remotely Access Phones

$6.3B US firm TeleSign accused of breaching GDPR by reputation-scoring half of the planet’s mobile phone users

A US-based fraud prevention company is in hot water over allegations it not only collected data from millions of EU citizens and processed it using automated tools without their knowledge, but that it did so in the United States, all in violation of the EU’s data protection rules.

The complaint was filed by Austrian privacy advocacy group noyb, helmed by lawyer Max Schrems, and it doesn’t pull any punches in its claims that TeleSign, through its former Belgian parent company BICS, secretly collected data on cellphone users around the world.

That data, noyb alleges, was fed into an automated system that generates “reputation scores” that TeleSign sells to its customers, which include TikTok, Salesforce, Microsoft and AWS, among others, for verifying the identity of a person behind a phone number and preventing fraud.

BICS, which acquired TeleSign in 2017, describes itself as “a global provider of international wholesale connectivity and interoperability services,” in essence operating as an interchange for various national cellular networks. Per noyb, BICS operates in more than 200 countries around the world and “gets detailed information (e.g. the regularity of completed calls, call duration, long-term inactivity, range activity, or successful incoming traffic) [on] about half of the worldwide mobile phone users.”
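To make the “reputation score” idea concrete, here is a minimal toy sketch in Python of the kind of automated profiling noyb describes. The feature names, weights and thresholds are invented for illustration; this is not TeleSign’s actual model, just a demonstration of how call metadata alone can be turned into a “trustworthiness” number without the phone’s owner ever being asked.

from dataclasses import dataclass

@dataclass
class PhoneActivity:
    # Hypothetical per-number features of the kind noyb says BICS shares with TeleSign.
    completed_call_rate: float   # fraction of outgoing calls that actually connect
    avg_call_duration_s: float   # average call length in seconds
    days_inactive: int           # days since the number was last seen active
    incoming_traffic_ok: bool    # whether the number receives normal inbound traffic

def toy_reputation_score(a: PhoneActivity) -> float:
    """Return a 0..1 'trustworthiness' score from call metadata.
    Invented weights - purely illustrative, not TeleSign's algorithm."""
    score = 0.0
    score += 0.35 * min(a.completed_call_rate, 1.0)
    score += 0.25 * min(a.avg_call_duration_s / 300.0, 1.0)   # cap contribution at 5-minute calls
    score += 0.25 * (1.0 if a.incoming_traffic_ok else 0.0)
    score += 0.15 * max(0.0, 1.0 - a.days_inactive / 90.0)    # decays to zero over ~3 months
    return round(score, 2)

print(toy_reputation_score(PhoneActivity(0.9, 180, 2, True)))    # healthy, regular usage: ~0.86
print(toy_reputation_score(PhoneActivity(0.2, 20, 60, False)))   # dormant number: ~0.14

The point is not the arithmetic but that none of the inputs require the user’s knowledge or consent, which is exactly what the complaint objects to.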

That data is regularly shared with TeleSign, noyb alleges, without any notification to the customers whose data is being collected and used.

[…]

In its complaint, an auto-translated English version of which was reviewed by The Register, noyb alleges that TeleSign is in violation of the GDPR’s provisions that ban use of automated profiling tools, as well as rules that require affirmative consent be given to process EU citizen’s data.

[…]

When BICS acquired TeleSign in 2017, TeleSign came under the partial control of BICS’ parent company, Belgian telecom giant Proximus. Proximus held a partial stake in BICS, which it had spun off from its own operations in 1997.

In 2021, Proximus bought out BICS’ other shareholders, making it the sole owner of both the telecom interchange and TeleSign.

With that in mind, noyb is also leveling charges against Proximus and BICS. In its complaint, noyb said Proximus was asked by EU citizens from various countries to provide records of the data TeleSign processed, as is their right under Article 15 of the GDPR.

The complainants weren’t given the information they requested, says noyb, which claims what was handed over was simply a template copy of the EU’s standard contractual clauses (SCCs), which businesses have used to transmit data between the EU and US while the two sides try to work out data transfer rules that Schrems can’t get struck down in court.

[…]

Noyb is seeking a halt to all data transfers from BICS to TeleSign and to the processing of said data, and is requesting deletion of all unlawfully transmitted data. It’s also asking the Belgian data protection authorities to fine Proximus, which noyb said could reach as high as €236 million ($257 million) – a mere 4 percent of Proximus’s global turnover.

[…]

Source: US firm ‘breached GDPR’ by reputation-scoring EU citizens • The Register

This firm is absolutely massive, yet it’s a smaller part of BICS and chances are that you’ve never ever heard of either of them!

Fitbit Privacy & security guide – no one told me it would send my data to the US

As of January 14, 2021, Google officially became the owner of Fitbit. That worried many privacy conscious users. However, Google promised that “Fitbit users’ health and wellness data won’t be used for Google ads and this data will be kept separate from other Google ad data” as part of the deal with global regulators when they bought Fitbit. This is good.

And Fitbit seems to do an OK job with privacy and security. It de-identifies the data it collects so it’s (hopefully) not personally identifiable. We say hopefully because, depending on the kind of data, it’s been found to be pretty easy to de-anonymize these data sets and track down an individual’s patterns, especially with location data. So, be aware with Fitbit—or any fitness tracker—you are strapping on a device that tracks your location, heart rate, sleep patterns, and more. That’s a lot of personal information gathered in one place.
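The re-identification risk with location data is easy to demonstrate. Below is a small, self-contained Python sketch using made-up coordinates: even a “de-identified” trace with no name attached usually betrays a home and a workplace, and that pair is typically unique to one person.

from collections import Counter

# Hypothetical "de-identified" pings: (hour_of_day, rounded_lat, rounded_lon).
# No name, no user ID - yet the pattern below is usually enough to re-identify someone.
pings = [
    (2,  52.370, 4.895), (3, 52.370, 4.895), (23, 52.370, 4.895),   # night-time cluster
    (10, 52.345, 4.920), (11, 52.345, 4.920), (15, 52.345, 4.920),  # daytime cluster
    (18, 52.360, 4.880),                                            # gym? clinic?
]

def likely_home(pings):
    """Most frequent location seen between 22:00 and 06:00 - usually the home address."""
    night = [(lat, lon) for hour, lat, lon in pings if hour >= 22 or hour < 6]
    return Counter(night).most_common(1)[0][0] if night else None

def likely_workplace(pings):
    """Most frequent location seen during office hours."""
    day = [(lat, lon) for hour, lat, lon in pings if 9 <= hour <= 17]
    return Counter(day).most_common(1)[0][0] if day else None

print("probable home:", likely_home(pings))             # (52.370, 4.895)
print("probable workplace:", likely_workplace(pings))   # (52.345, 4.920)
# Cross-reference those two coordinates with public records and the "anonymous"
# trace belongs to exactly one person.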

What is not good is what can happen with all this very personal health data if others aren’t careful. A recent report showed that health data for over 61 million fitness tracker users, including both Fitbit and Apple, was exposed when a third-party company that allowed users to sync their health data from their fitness trackers did not secure the data properly. Personal information such as names, birthdates, weight, height, gender, and geographical location for Fitbit and other fitness-tracker users was left exposed because the company didn’t password protect or encrypt their database. This is a great reminder that yes, while Fitbit might do a good job with their own security, anytime you sync or share that data with anyone else, it could be vulnerable.

[…]

The Fitbit app does allow for period tracking though. And the app, like most wearable tracking apps, collects a whole bunch of personal, body-related data that could potentially be used to tell if a user is pregnant.

Fortunately, Fitbit doesn’t sell this data but it does say it can share some personal data for interest-based advertising. Fitbit also can share your wellness data with other apps, insurers, and employers if you sign up for that and give your consent.

[…]

Fitbit isn’t the wearable we’d trust the most with our private reproductive health data. Apple, Garmin, and Oura all make us feel a bit more comfortable with this personal information.

Source: Fitbit | Privacy & security guide | Mozilla Foundation

So when you install one, it says it needs to process your data in the USA – which basically means it’s up for grabs for all and sundry. There is a reason the EU has the GDPR. But why does it need to send data anywhere other than your phone anyway?!

This is something that almost no-one mentions when you read the reviews on these things.

Amazon’s Ring used to spy on customers, children, FTC says in privacy settlement

A former employee of Amazon.com’s Ring doorbell camera unit spied for months on female customers in 2017 with cameras placed in bedrooms and bathrooms, the Federal Trade Commission said in a court filing on Wednesday when it announced a $5.8 million settlement with the company over privacy violations.

Amazon also agreed to pay $25 million to settle allegations it violated children’s privacy rights when it failed to delete Alexa recordings at the request of parents and kept them longer than necessary, according to a court filing in federal court in Seattle that outlined a separate settlement.

The FTC settlements are the agency’s latest effort to hold Big Tech accountable for policies critics say place profits from data collection ahead of privacy.

The FTC is also probing Amazon.com’s $1.7 billion deal to buy iRobot Corp (IRBT.O), which was announced in August 2022 in Amazon’s latest push into smart home devices, and has a separate antitrust probe underway into Amazon.

[…]

The FTC said Ring gave employees unrestricted access to customers’ sensitive video data: “As a result of this dangerously overbroad access and lax attitude toward privacy and security, employees and third-party contractors were able to view, download, and transfer customers’ sensitive video data.”

In one instance in 2017, an employee of Ring viewed videos made by at least 81 female customers and Ring employees using Ring products. “Undetected by Ring, the employee continued spying for months,” the FTC said.

[…]

In May 2018, an employee gave information about a customer’s recordings to the person’s ex-husband without consent, the complaint said. In another instance, an employee was found to have given Ring devices to people and then watched their videos without their knowledge, the FTC said.

[…]

rules against deceiving consumers who used Alexa. For example, the FTC complaint says that Amazon told users it would delete voice transcripts and location information upon request, but then failed to do so.

“The unlawfully retained voice recordings provided Amazon with a valuable database for training the Alexa algorithm to understand children, benefiting its bottom line at the expense of children’s privacy,” the FTC said.

Source: Amazon’s Ring used to spy on customers, FTC says in privacy settlement

The total settlement of $30m is insanely low considering the scale of the violations and their continuing nature.

Meta ordered to suspend Facebook EU data flows as it’s hit with record €1.2BN privacy fine under GDPR – 10 years and 3 court cases later

[…]

Today the European Data Protection Board (EDPB) announced that Meta has been fined €1.2 billion (close to $1.3 billion) — which the Board confirmed is the largest fine ever issued under the bloc’s General Data Protection Regulation (GDPR). (The prior record goes to Amazon, which was stung for $887 million for misusing customers’ data for ad targeting back in 2021.)

Meta’s sanction is for breaching conditions set out in the pan-EU regulation governing transfers of personal data to so-called third countries (in this case the US) without ensuring adequate protections for people’s information.

European judges have previously found U.S. surveillance practices to conflict with EU privacy rights.

[…]

The decision emerging out of the Irish DPC flows from a complaint made against Facebook’s Irish subsidiary almost a decade ago, by privacy campaigner Max Schrems — who has been a vocal critic of Meta’s lead data protection regulator in the EU, accusing the Irish privacy regulator of taking an intentionally long and winding path in order to frustrate effective enforcement of the bloc’s rulebook.

On the substance of his complaint, Schrems argues that the only sure-fire way to fix the EU-U.S. data flows doom loop is for the U.S. to grasp the nettle and reform its surveillance practices.

Responding to today’s order in a statement (via his privacy rights not-for-profit, noyb), he said: “We are happy to see this decision after ten years of litigation. The fine could have been much higher, given that the maximum fine is more than 4 billion and Meta has knowingly broken the law to make a profit for ten years. Unless US surveillance laws get fixed, Meta will have to fundamentally restructure its systems.”

[…]

This suggests the Irish regulator is routinely under-enforcing the GDPR on the most powerful digital platforms and doing so in a way that creates additional problems for efficient functioning of the regulation since it strings out the enforcement process. (In the Facebook data flows case, for example, objections were raised to the DPC’s draft decision last August — so it’s taken some nine months to get from that draft to a final decision and suspension order now.) And, well, if you string enforcement out for long enough you may allow enough time for the goalposts to be moved politically so that enforcement never actually needs to happen. Which, while demonstrably convenient for data-mining tech giants like Meta, does make a mockery of citizens’ fundamental rights.

As noted above, with today’s decision, the DPC is actually implementing a binding decision taken by the EDPB last month in order to settle ongoing disagreement over Ireland’s draft decision — so much of the substance of what’s being ordered on Meta today comes, not from Dublin, but from the bloc’s supervisor body for privacy regulators.

[…]

In further public remarks today, Schrems once again hit out at the DPC’s approach — accusing the regulator of essentially working to thwart enforcement of the GDPR. “It took us ten years of litigation against the Irish DPC to get to this result. We had to bring three procedures against the DPC and risked millions of procedural costs. The Irish regulator has done everything to avoid this decision but was consistently overturned by the European Courts and institutions. It is kind of absurd that the record fine will go to Ireland — the EU Member State that did everything to ensure that this fine is not issued,” he said.

[…]

Earlier reports have suggested the European Commission could adopt the new EU-U.S. data deal in July, although it has declined to provide a date for this since it says multiple stakeholders are involved in the process.

Such a timeline would mean Meta gets a new escape hatch to avoid having to suspend Facebook’s service in the EU; and can keep relying on this high level mechanism so long as it stands.

If that’s how the next section of this torturous complaint saga plays out it will mean that a case against Facebook’s illegal data transfers which dates back almost ten years at this point will, once again, be left twisting in the wind — raising questions about whether it’s really possible for Europeans to exercise the legal rights set out in the GDPR (and, indeed, whether deep-pocketed tech giants, whose ranks are packed with well-paid lawyers and lobbyists, can be regulated at all).

[…]

Analysis on five years of the GDPR, put out earlier this month by the Irish Council for Civil Liberties (ICCL), dubs the enforcement situation a “crisis” — warning that “Europe’s failure to enforce the GDPR exposes everyone to acute hazard in the digital age” and fingering Ireland’s DPA as a leading cause of enforcement failure against Big Tech.

And the ICCL points the finger of blame squarely at Ireland’s DPC.

“Ireland continues to be the bottleneck of enforcement: It delivers few draft decisions on major cross-border cases, and when it does eventually do so other European enforcers routinely vote by majority to force it to take tougher enforcement action,” the report argues — before pointing out that: “Uniquely, 75% of Ireland’s GDPR investigation decisions in major EU cases were overruled by majority vote of its European counterparts at the EDPB, who demand tougher enforcement action.”

The ICCL also highlights that nearly all (87%) of cross-border GDPR complaints to Ireland repeatedly involve the same handful of Big Tech companies: Google, Meta (Facebook, Instagram, WhatsApp), Apple, TikTok, and Microsoft. But it says many complaints against these tech giants never even get a full investigation — thereby depriving complainants of the ability to exercise their rights.

The analysis points out that the Irish DPC chooses “amicable resolution” to conclude the vast majority (83%) of cross-border complaints it receives (citing the oversight body’s own statistics) — further noting: “Using amicable resolution for repeat offenders, or for matters likely to impact many people, contravenes European Data Protection Board guidelines.”

[…]

The reality is that a patchwork of problems frustrates effective enforcement across the bloc, as you might expect with a decentralized oversight structure that must accommodate linguistic and cultural differences across 27 Member States, and varying opinions on how best to approach oversight of big (and very personal) concepts like privacy, which may mean very different things to different people.

Schrems’ privacy rights not-for-profit, noyb, has been collating information on this patchwork of GDPR enforcement issues — which include things like under-resourcing of smaller agencies and a general lack of in-house expertise to deal with digital issues; transparency problems and information blackholes for complainants; cooperation issues and legal barriers frustrating cross-border complaints; and all sorts of ‘creative’ interpretations of complaints “handling” — meaning nothing being done about a complaint still remains a common outcome — to name just a few of the issues it’s encountered.

[…]

Source: Meta ordered to suspend Facebook EU data flows as it’s hit with record €1.2BN privacy fine under GDPR | TechCrunch

The article contains the history of the court cases Schrems had to bring to get Ireland and the EU to do anything about data sharing problems – it’s an interesting read.

Online age verification is coming, and privacy is on the chopping block

A spate of child safety rules might make going online in a few years very different, and not just for kids. In 2022 and 2023, numerous states and countries are exploring age verification requirements for the internet, either as an implicit demand or a formal rule. The laws are positioned as a way to protect children on a dangerous internet. But the price of that protection might be high: nothing less than the privacy of, well, everyone.

Government agencies, private companies, and academic researchers have spent years seeking a way to solve the thorny question of how to check internet users’ ages without the risk of revealing intimate information about their online lives. But after all that time, privacy and civil liberties advocates still aren’t convinced the government is ready for the challenge.

“When you have so many proposals floating around, it’s hard to ensure that everything is constitutionally sound and actually effective for kids,” Cody Venzke, a senior policy counsel at the American Civil Liberties Union (ACLU), tells The Verge. “Because it’s so difficult to identify who’s a kid online, it’s going to prevent adults from accessing content online as well.”

In the US and abroad, lawmakers want to limit children’s access to two things: social networks and porn sites. Louisiana, Arkansas, and Utah have all passed laws that set rules for underage users on social media. Meanwhile, multiple US federal bills are on the table, and so are laws in other countries, like the UK’s Online Safety Bill. Some of these laws demand specific features from age verification tools. Others simply punish sites for letting anyone underage use them — a more subtle request for verification.

Online age verification isn’t a new concept. In the US, laws like the Children’s Online Privacy Protection Act (COPPA) already apply special rules to people under 13. And almost everyone who has used the internet — including major platforms like YouTube and Facebook — has checked a box to access adult content or entered a birth date to create an account. But there’s also almost nothing to stop them from faking it.

As a result, lawmakers are calling for more stringent verification methods. “From bullying and sex trafficking to addiction and explicit content, social media companies subject children and teens to a wide variety of content that can hurt them, emotionally and physically,” Senator Tom Cotton (R-AR), the backer of the Protect Kids Online Act, said. “Just as parents safeguard their kids from threats in the real world, they need the opportunity to protect their children online.”

Age verification systems fall into a handful of categories. The most common option is to rely on a third party that knows your identity — by directly validating a credit card or government-issued ID, for instance, or by signing up for a digital intermediary like Allpasstrust, the service Louisianans must use for porn access.

More experimentally, there are solutions that estimate a user’s age without an ID. One potential option, which is already used by Facebook and Instagram, would use a camera and facial recognition to guess whether you’re 18. Another, which is highlighted as a potential age verification solution by France’s National Commission on Informatics and Liberty (CNIL), would “guess” your age based on your online activity.

As pointed out by CNIL’s report on various online age verification options, all these methods have serious flaws. CNIL notes that identifying someone’s age with a credit card would be relatively easy since the security infrastructure is already there for online payments. But some adult users — especially those with lower incomes — may not have a card, which would seriously limit their ability to access online services. The same goes for verification methods using government-issued IDs. Children can also snap up a card that’s lying around the house to verify their age.

“As we think about kids’ online safety, we need to do so in a way that doesn’t enshrine and legitimize this very surveillance regime that we’re trying to push back on”

Similarly, the Congressional Research Service (CRS) has expressed concerns about online age verification. In a report it updated in March, the US legislature’s in-house research institute found that many kids aged 16 to 19 might not have a government-issued ID, such as a driver’s license, that they can use to verify their age online. While it says kids could use their student ID instead, it notes that student IDs may be easier to fake than a government-issued ID. The CRS isn’t totally on board with relying on a national digital ID system for online age verification either, as it could “raise privacy and security concerns.”

Face-based age detection might seem like a quick fix to these concerns. And unlike a credit card — or full-fledged facial identification tools — it doesn’t necessarily tell a site who you are, just whether it thinks you’re over 18.

But these systems may not accurately identify the age of a person. Yoti, the facial analysis service used by Facebook and Instagram, claims it can estimate the age of people 13 to 17 years old as under 25 with 99.93 percent accuracy while identifying kids that are six to 11 years old as under 13 with 98.35 percent accuracy. This study doesn’t include any data on distinguishing between young teens and older ones, however — a crucial element for many young people.

Although Yoti claims its system has no “discernible bias across gender or skin tone,” previous research indicates that facial recognition services are less reliable for people of color, gender-nonconforming people, and people with facial differences or asymmetry. This would, again, unfairly block certain people from accessing the internet.

It also poses a host of privacy risks, as the companies that capture facial recognition data would need to ensure that this biometric data doesn’t get stolen by bad actors. UK civil liberties group Big Brother Watch argues that “face prints are as sensitive as fingerprints” and that “collecting biometric data of this scale inherently puts people’s privacy at risk.” CNIL points out that you could mitigate some risks by performing facial recognition locally on a user’s device — but that doesn’t solve the broader problems.

Inferring ages based on browsing history raises even more problems. This kind of inferential system has been implemented on platforms like Facebook and TikTok, both of which use AI to detect whether a user is under the age of 13 based on their activity on the platform. That includes scanning a user’s activity for “happy birthday” messages or comments that indicate they’re too young to have an account. But the system hasn’t been explored on a larger scale — where it could involve having an AI scan your entire browsing history and estimate your age based on your searches and the sites you interact with. That would amount to large-scale digital surveillance, and CNIL outright calls the system “intrusive.” It’s not even clear how well it would work.

In France, where lawmakers are working to restrict access to porn sites, CNIL worked with Ecole Polytechnique professor Olivier Blazy to develop a solution that attempts to minimize the amount of user information sent to a website. The proposed method involves using an ephemeral “token” that sends your browser or phone a “challenge” when accessing an age-restricted website. That challenge would then get relayed to a third party that can authenticate your age, like your bank, internet provider, or a digital ID service, which would issue its approval, allowing you to access the website.

The system’s goal is to make sure a user is old enough to access a service without revealing any personal details, either to the website they’re using or the companies and governments providing the ID check. The third party “only knows you are doing an age check but not for what,” Blazy explains to The Verge, and the website would not know which service verified your age nor any of the details from that transaction.
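A rough sketch of that challenge-and-token flow, in Python, might look like the following. The key handling, message format and use of HMAC here are assumptions made purely for illustration; the actual CNIL/Blazy proposal adds cryptographic blinding so the site cannot tell which provider vouched for the user, and the provider never learns which site made the request.

import hmac, hashlib, secrets

# Keys held by accredited age-verification providers (bank, telco, digital ID app).
# In reality these would be public-key credentials, not shared secrets - simplified here.
PROVIDER_KEYS = {
    "bank":   secrets.token_bytes(32),
    "telco":  secrets.token_bytes(32),
    "id_app": secrets.token_bytes(32),
}

def site_issue_challenge() -> bytes:
    """The age-restricted site generates a one-time random challenge."""
    return secrets.token_bytes(16)

def provider_attest(provider: str, challenge: bytes) -> bytes:
    """The provider of the user's choice confirms 'over 18' for this challenge only.
    It sees the challenge, not the site; the site never learns the user's identity."""
    return hmac.new(PROVIDER_KEYS[provider], b"over_18:" + challenge, hashlib.sha256).digest()

def site_verify(challenge: bytes, attestation: bytes) -> bool:
    """The site accepts an attestation from any accredited provider for its challenge."""
    return any(
        hmac.compare_digest(attestation,
                            hmac.new(key, b"over_18:" + challenge, hashlib.sha256).digest())
        for key in PROVIDER_KEYS.values()
    )

challenge = site_issue_challenge()            # site -> user's browser
token = provider_attest("bank", challenge)    # browser relays it to the user's chosen provider
print(site_verify(challenge, token))          # site checks the returned token: True

Even in this simplified form, the site only ever learns “someone over 18 answered my challenge” and the age provider only learns “someone asked me to attest to their age,” which is the double-blind property the proposal is after.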

Blazy hopes this system can prevent very young children from accessing explicit content. But even with this complex solution, he acknowledges that users in France will be able to get around the method by using a virtual private network (VPN) to conceal their location. This is a problem that plagues nearly any location-specific verification system: as long as another government lets people access a site more easily, users can route their traffic through it. The only surefire solution would be draconian crackdowns on privacy tools that would dramatically compromise freedom online.

Some governments are trying to offer a variety of options and let users pick between them. A report from the European Parliament Think Tank, an in-house department that helps shape legislation, highlights an EU “browser-based interoperable age verification method” called euCONSENT, which will allow users to verify their identity online by choosing from a network of approved third-party services. Since this would give users the ability to choose the verification they want to use, this means one service might ask a user to upload an official government document, while another might rely on facial recognition.

To privacy and civil liberties advocates, none of these solutions are ideal. Venzke tells The Verge that implementing age verification systems encourages a system that collects our data and could pave the way for more surveillance in the future. “Bills that are trying to establish inferences about how old you are or who you are based on that already existing capitalistic surveillance, are just threatening to legitimize that surveillance,” Venzke says. “As we think about kids’ online safety, we need to do so in a way that doesn’t enshrine and legitimize this very surveillance regime that we’re trying to push back on.”

Age verification laws “are going to face a very tough battle in court”

The Electronic Frontier Foundation, a digital rights group, similarly argues that all age verification solutions are “surveillance systems” that will “lead us further towards an internet where our private data is collected and sold by default.”

Even some strong supporters of child safety bills have expressed concerns about making age verification part of them. Senator Richard Blumenthal (D-CT), one of the backers of the Kids Online Safety Act, objected to the idea in a call with reporters earlier this month. In a statement, he tells The Verge that “age verification would require either a national database or a goldmine of private information on millions of kids in Big Tech’s hands” and that “the potential for exploitation and misuse would be huge.” (Despite this, the EFF believes that KOSA’s requirements would inevitably result in age verification mandates anyway.)

In the US, it’s unclear whether online age verification would stand up under legal scrutiny at all. The US court system has already struck down efforts to implement online age verification several times in the past. As far back as 1997, the Supreme Court ruled parts of the 1996 Communications Decency Act unconstitutional, as it imposed restrictions on “knowing transmission of obscene or indecent messages” and required age verification online. More recently, a federal court found in 2016 that a Louisiana law, which required websites that publish “material harmful to minors” verify users’ ages, “creates a chilling effect on free speech.”

Vera Eidelman, a staff attorney with the ACLU, tells The Verge that existing age verification laws “are going to face a very tough battle in court.” “For the most part, requiring content providers online to verify the ages of their users is almost certainly unconstitutional, given the likelihood that it will make people uncomfortable to exercise their rights to access certain information if they have to unmask or identify themselves,” Eidelman says.

But concerns over surveillance still haven’t stopped governments around the globe, including here in the US, from pushing ahead with online age verification mandates. There are currently several bills in the pipeline in Congress that are aimed at protecting children online, including the Protecting Kids on Social Media Act, which calls for the test of a national age verification system that would block users under the age of 13 from signing up for social media. In the UK, where the heavily delayed Online Safety Bill will likely become law, porn sites would be required to verify users’ ages, while other websites would be forced to give users the option to do so as well.

Some proponents of online safety laws say they’re no different than having to hand over an ID to purchase alcohol. “We have agreed as a society not to let a 15-year-old go to a bar or a strip club,” said Laurie Schlegel, the legislator behind Louisiana’s age restriction law, after its passage. “The same protections should be in place online.” But the comparison misses vastly different implications for free speech and privacy. “When we think about bars or ordering alcohol at a restaurant, we just assume that you can hand an ID to a bouncer or a waiter, they’ll hand it back, and that’s the end of it,” Venzke adds. “Problem is, there’s no infrastructure on the internet right now to [implement age verification] in a safe, secure, private way that doesn’t chill people’s ability to get to constitutionally protected speech.”

Most people also spend a relatively small amount of their time in real-world adults-only spaces, while social media and online communications tools are ubiquitous ways of finding information and staying in touch with friends and family. Even sites with sexually explicit content — the target of Louisiana’s bill — could be construed to include sites offering information about sexual health and LGBTQ resources, despite claims by lawmakers that this won’t happen.

Even if many of these rules are shot down, the way we use the internet may never be the same again. With age checks awaiting us online, some people may find themselves locked out of increasingly large numbers of platforms — leaving the online world more closed-off than ever.

Source: Online age verification is coming, and privacy is on the chopping block – The Verge

Google Will Require Android Apps to Make Account Deletion Easier

Right now, developers simply need to declare to Google that account deletion is somehow possible, but beginning next year, developers will have to make it easier to delete data through both their app and an online portal. Google specifies:

For apps that enable app account creation, developers will soon need to provide an option to initiate account and data deletion from within the app and online.

This means any app that lets you create an account to use it is required to allow you to delete that information when you’re done with it (or rather, request the developer delete the data from their servers). Although you can request that your data be deleted now, it usually requires manually contacting the developer to remove it. This new policy would mean developers have to offer a kill switch from the get-go rather than having Android users do the leg work.

The web deletion requirement is particularly new and must be “readily discoverable.” Developers must provide a link to a web form from the app’s Play Store landing page, with the idea being to let users delete account data even if they no longer have the app installed. Per the existing Android developer policy, all apps must declare how they collect and handle user data—Google introduced the policy in 2021 and made it mandatory last year. When you go into the Play Store and expand the “Data Safety” section under each app listing, developers list out data collection by criteria.

Simply removing an app from your Android device doesn’t completely scrub your data. Like software on a desktop operating system, files and folders are sometimes left behind from when the app was operating. This new policy will hopefully help you keep your data secure by wiping any unnecessary account info from the app developer’s servers, but also hopes to cut down on straggling data on your device. Conversely, you don’t have to delete your data if you think you’ll come back to the app later. When it says you have a “choice,” Google wants to ensure it can point to something obvious.

It’s unclear how Google will determine if a developer follows the rules. It is up to the app developer to disclose whether user-specific app data is actually deleted. Earlier this year, Mozilla called out Google after discovering significant discrepancies between the top 20 most popular free apps’ internal privacy policies and those they listed in the Play Store.

Source: https://gizmodo.com/google-android-delete-account-apps-request-uninstall-1850304540

Tesla Employees Have Been Meme-ing Your Private Car Videos

“We could see inside people’s garages and their private properties,” a former employee told Reuters. “Let’s say that a Tesla customer had something in their garage that was distinctive, you know, people would post those kinds of things.”

One office in particular, located in San Mateo, reportedly had a “free-wheeling” atmosphere, where employees would share videos and images with wild abandon. These pics or vids would often be “marked-up” via Adobe Photoshop, former employees said, converting drivers’ personal experiences into memes that would circulate throughout the office.

“The people who buy the car, I don’t think they know that their privacy is, like, not respected,” one former employee was quoted as saying. “We could see them doing laundry and really intimate things. We could see their kids.”

Another former employee seemed to admit that all of this was very uncool: “It was a breach of privacy, to be honest. And I always joked that I would never buy a Tesla after seeing how they treated some of these people,” the employee told the news outlet. Yes, it’s always a vote of confidence when a company’s own employees won’t use the products that they sell.

Privacy concerns related to Tesla’s data-guzzling autos aren’t exactly new. Back in 2021, the Chinese government formally banned the vehicles on the premises of certain military installations, calling the company a “national security” threat. The Chinese were worried that the cars’ sensors and cameras could be used to funnel data out of China and back to the U.S. for the purposes of espionage. Beijing seems to have been on to something—although it might be the case that the spying threat comes less from America’s spooks than it does from bored slackers back at Tesla HQ.

One of the reasons that Tesla’s cameras seem so creepy is that you can never really tell if they’re on or not. A couple of years ago, a stationary Tesla helped catch a suspect in a Massachusetts hate crime, when its security system captured images of the man slashing tires in the parking lot of a predominantly Black church. The man was later arrested on the basis of the photos.

Reuters notes that it wasn’t ultimately “able to determine if the practice of sharing recordings, which occurred within some parts of Tesla as recently as last year, continues today or how widespread it was.”

With all this in mind, you might as well always assume that your Tesla is watching, right? And, now that Reuters’ story has come out, you should also probably assume that some bored coder is also watching, potentially in the hopes of converting your dopiest in-car moment into a meme.

https://gizmodo.com/tesla-elon-musk-car-camera-videos-employees-watching-1850307575

Wow, who knew? How surprising… not.

Tesla workers shared and memed sensitive images recorded by customer cars

Private camera recordings, captured by cars, were shared in chat rooms: ex-workers
Circulated clips included one of child being hit by car: ex-employees
Tesla says recordings made by vehicle cameras ‘remain anonymous’
One video showed submersible vehicle from James Bond film, owned by Elon Musk


LONDON/SAN FRANCISCO, April 6 (Reuters) – Tesla Inc assures its millions of electric car owners that their privacy “is and will always be enormously important to us.” The cameras it builds into vehicles to assist driving, it notes on its website, are “designed from the ground up to protect your privacy.”

But between 2019 and 2022, groups of Tesla employees privately shared via an internal messaging system sometimes highly invasive videos and images recorded by customers’ car cameras, according to interviews by Reuters with nine former employees.

Some of the recordings caught Tesla customers in embarrassing situations. One ex-employee described a video of a man approaching a vehicle completely naked.

Also shared: crashes and road-rage incidents. One crash video in 2021 showed a Tesla driving at high speed in a residential area hitting a child riding a bike, according to another ex-employee. The child flew in one direction, the bike in another. The video spread around a Tesla office in San Mateo, California, via private one-on-one chats, “like wildfire,” the ex-employee said.

Other images were more mundane, such as pictures of dogs and funny road signs that employees made into memes by embellishing them with amusing captions or commentary, before posting them in private group chats. While some postings were only shared between two employees, others could be seen by scores of them, according to several ex-employees.

Tesla states in its online “Customer Privacy Notice” that its “camera recordings remain anonymous and are not linked to you or your vehicle.” But seven former employees told Reuters the computer program they used at work could show the location of recordings – which potentially could reveal where a Tesla owner lived.
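Why does the location on an “anonymous” clip matter so much? Because a car’s recordings cluster around wherever it is parked most often, which is usually its owner’s home. Here is a toy illustration of that reasoning; the coordinates and the rounding-based clustering are invented, and this is not the internal tool the ex-employees describe.

from collections import Counter

# Toy recording metadata: (latitude, longitude) attached to each clip.
clip_locations = [
    (37.5631, -122.3251),  # overnight clips, recorded in a garage
    (37.5632, -122.3252),
    (37.5633, -122.3253),
    (37.7749, -122.4194),  # one trip downtown
]

def likely_home(locations, precision=3):
    # Round to roughly 100 m cells and take the most frequent one: with enough
    # clips, the modal cell is where the car sleeps, i.e. the owner's address.
    cells = Counter((round(lat, precision), round(lon, precision)) for lat, lon in locations)
    return cells.most_common(1)[0][0]

print(likely_home(clip_locations))  # (37.563, -122.325)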

One ex-employee also said that some recordings appeared to have been made when cars were parked and turned off. Several years ago, Tesla would receive video recordings from its vehicles even when they were off, if owners gave consent. It has since stopped doing so.

“We could see inside people’s garages and their private properties,” said another former employee. “Let’s say that a Tesla customer had something in their garage that was distinctive, you know, people would post those kinds of things.”

Tesla didn’t respond to detailed questions sent to the company for this report.

About three years ago, some employees stumbled upon and shared a video of a unique submersible vehicle parked inside a garage, according to two people who viewed it. Nicknamed “Wet Nellie,” the white Lotus Esprit sub had been featured in the 1977 James Bond film, “The Spy Who Loved Me.”

The vehicle’s owner: Tesla Chief Executive Elon Musk, who had bought it for about $968,000 at an auction in 2013. It is not clear whether Musk was aware of the video or that it had been shared.

The submersible Lotus vehicle nicknamed “Wet Nellie” that featured in the 1977 James Bond film, “The Spy Who Loved Me,” and which Tesla chief executive Elon Musk purchased in 2013. Tim Scott ©2013 Courtesy of RM Sotheby’s
Musk didn’t respond to a request for comment.

To report this story, Reuters contacted more than 300 former Tesla employees who had worked at the company over the past nine years and were involved in developing its self-driving system. More than a dozen agreed to answer questions, all speaking on condition of anonymity.

Reuters wasn’t able to obtain any of the shared videos or images, which ex-employees said they hadn’t kept. The news agency also wasn’t able to determine if the practice of sharing recordings, which occurred within some parts of Tesla as recently as last year, continues today or how widespread it was. Some former employees contacted said the only sharing they observed was for legitimate work purposes, such as seeking assistance from colleagues or supervisors.

https://www.reuters.com/technology/tesla-workers-shared-sensitive-images-recorded-by-customer-cars-2023-04-06/

Dashcam App Is a Driving Nazi Informer’s Wet Dream, Sends Video of You Speeding and Other Infractions Directly to Police

Speed cameras have been around for a long time, and so have dash cams. The uniquely devious idea of combining the two into a traffic hall monitor’s dream device only recently became a reality, though. According to the British Royal Automobile Club, such a combination is coming soon: the app, reportedly available in the U.K. as soon as May, will allow drivers to report each other directly to the police with video evidence of things like running red lights, failure to use a blinker, distracted driving, and yes, speeding.

Its founder Oleksiy Afonin recently held meetings with police to discuss how it would work. In a nutshell, video evidence of a crime could be uploaded as soon as the driver who captured it stopped their vehicle to do so safely. According to the RAC, the footage could then be “submitted to the police through an official video portal in less than a minute.” Police reportedly were open to the idea of using the videos as evidence in court.

The RAC questioned whether such an app could be distracting. It certainly opens up a whole new world of crime reporting. In some cities, individuals can report poorly or illegally parked cars to traffic police. Drivers getting into the habit of reporting each other for speeding might be a slippery slope, though. The government would be happy to collect the ticket revenue but the number of citations for alleged speeding could be off the charts with such a system. Anybody can download the app and report someone else, but the evidence would need to be reviewed.

The app, called dashcamUK, will only be available in the United Kingdom, as its name indicates. Thankfully, it doesn’t seem like there are any plans to bring it Stateside. Considering the British public is far more open than Americans to the use of CCTV cameras for recording crime, it will likely stay that way, for that reason among others.

Source: Strangers Can Send Video of You Speeding Directly to Police With Dashcam App

TSA Confirms Biometric Scanning Soon Won’t Be Optional Even For Domestic Travelers

[…]

In 2017, the DHS began quietly rolling out its facial recognition program, starting with international airports and aimed mainly at collecting/scanning people boarding international flights. Even in its infancy, the DHS was hinting this was never going to remain solely an international affair.

It made its domestic desires official shortly thereafter, with the TSA dropping its domestic surveillance “roadmap” which now included “expanding biometrics to additional domestic travelers.” Then the DHS and TSA ran silent for a bit, resurfacing in late 2022 with the news it was rolling out its facial recognition system at 16 domestic airports.

As of January, the DHS and TSA were still claiming this biometric ID verification system was strictly opt-in. A TSA rep interviewed by the Washington Post, however, hinted that opting out just meant subjecting yourself to the worst in TSA customer service. Given the options, more travelers would obviously prefer a less brusque/hands-y trip through security checkpoints, ensuring healthy participation in the TSA’s “optional” facial recognition program.

A little more than two months have passed, and the TSA is now informing domestic travelers there will soon be no way to opt out of its biometric program. (via Papers Please)

Speaking at an aviation security panel at South by Southwest, TSA Administrator David Pekoske made these comments:

“We’re upgrading our camera systems all the time, upgrading our lighting systems,” Pekoske said. “(We’re) upgrading our algorithms, so that we are using the very most advanced algorithms and technology we possibly can.”

He said passengers can also choose to opt out of certain screening processes if they are uncomfortable, for now. Eventually, biometrics won’t be optional, he said.

[…]

Pekoske buries the problematic aspects of biometric harvesting in exchange for domestic travel “privileges” by claiming this is all about making things better for passengers.

“It’s critically important that this system has as little friction as it possibly can, while we provide for safety and security,” Pekoske said.

Yes, you’ll get through screening a little faster. Unless the AI is wrong, in which case you’ll be dealing with a whole bunch of new problems most agents likely won’t have the expertise to handle.

[…]

More travelers. Fewer agents. And a whole bunch of screens to interact with. That’s the plan for the nation’s airports and everyone who passes through them.

Source: TSA Confirms Biometric Scanning Soon Won’t Be Optional Even For Domestic Travelers | Techdirt

And that’s way more data for hackers to get their hands on, and for the government and anyone who buys the data to use for 1984-type purposes.

SCOPE Europe becomes the accredited monitoring body for a Dutch national data protection code of conduct

[…] SCOPE Europe is now accredited by the Dutch Data Protection Authority as the monitoring body of the Data Pro Code. On this occasion, SCOPE Europe celebrates its success in obtaining its second accreditation and looks forward to continuing its work on fostering trust in the digital economy.

When we were approached by NLdigital, the creators of the Data Pro Code, we knew that taking on the monitoring of a national code of conduct would be an exciting endeavor. As the first-ever accredited monitoring body for a transnational GDPR code of conduct, SCOPE Europe has built unique expertise in the field and is proud to apply that expertise in the context of another co-regulatory initiative.

The Code puts forward an accessible compliance framework for companies of all sizes, including micro, small and medium enterprises in the Netherlands. With the approval and now the accreditation of its monitoring body, the Data Pro Code will enable data processors to demonstrate GDPR compliance and boost transparency within the digital industry.

Source: PRESS RELEASE: SCOPE Europe becomes the accredited monitoring body for a Dutch national code of conduct: SCOPE Europe bvba/sprl

Anker Eufy security cam ‘stored unique ID’ of everyone filmed in the cloud for other cameras to identify – and for anyone to watch

A lawsuit filed against eufy security cam maker Anker Tech claims the biz assigns “unique identifiers” to the faces of any person who walks in front of its devices – and then stores that data in the cloud, “essentially logging the locations of unsuspecting individuals” when they stroll past.

[…]

All three suits allege Anker falsely represented that its security cameras stored all data locally and did not upload that data to the cloud.

Security researcher Paul Moore went public with his claims in November last year, alleging video and audio captured by Anker’s eufy security cams could be streamed and watched by any stranger using VLC media player, […]

In a YouTube video, the complaint details, Moore allegedly showed how the “supposedly ‘private,’ ‘stored locally’, ‘transmitted only to you’ doorbell is streaming to the cloud – without cloud storage enabled.”

He claimed the devices were uploading video thumbnails and facial recognition data to Anker’s cloud server, despite his never opting into Anker’s cloud services and said he’d found a separate camera tied to a different account could identify his face with the same unique ID.

The security researcher alleged at the time that this showed Anker was not only storing facial-recog data in the cloud, but also “sharing that back-end information between accounts,” lawyers for the two other, near-identical lawsuits claim.

[…]

According to the complaint [PDF], eufy’s security cameras are marketed as “private” and as “local storage only” as a direct alternative to Anker’s competitors that require the use of cloud storage.

Desai’s complaint goes on to claim:

Not only does Anker not keep consumers’ information private, it was further revealed that Anker was uploading facial recognition data and biometrics to its Amazon Web Services cloud without encryption.

In fact, Anker has been storing its customers’ data alongside a specific username and other identifiable information on its AWS cloud servers even when its “eufy” app reflects the data has been deleted. … Further, even when using a different camera, different username, and even a different HomeBase to “store” the footage locally, Anker is still tagging and linking a user’s facial ID to their picture across its camera platform. Meaning, once recorded on one eufy Security Camera, those same individuals are recognized via their biometrics on other eufy Security Cameras.
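To see what that allegation would mean in practice, here is a toy sketch of how a cloud backend could end up assigning one stable identifier to the same face across different cameras and accounts. The embedding values, similarity threshold, and matching logic are invented for illustration; this is not Anker’s or eufy’s actual code.

import math
import uuid

# face_id -> embedding. In reality the embedding would come from a neural network.
KNOWN_FACES: dict[str, list[float]] = {}

def cosine_similarity(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def assign_face_id(embedding: list[float], threshold: float = 0.9) -> str:
    """Return an existing ID if this face has been seen before, by any camera."""
    for face_id, known in KNOWN_FACES.items():
        if cosine_similarity(embedding, known) >= threshold:
            return face_id            # same person recognised again
    new_id = uuid.uuid4().hex         # first sighting: mint a new identifier
    KNOWN_FACES[new_id] = embedding
    return new_id

# Two different cameras, on two different accounts, upload near-identical embeddings:
id_cam_a = assign_face_id([0.12, 0.80, 0.55])
id_cam_b = assign_face_id([0.13, 0.79, 0.56])
print(id_cam_a == id_cam_b)  # True: one global ID, shared across the platform

Once such a global identifier exists server-side, it no longer matters which account or HomeBase recorded the footage: the person is re-identifiable everywhere, which is exactly the behavior the plaintiffs allege.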

In an unrelated incident in 2021, a “software bug” in some of the brand’s 1080p Wi-Fi-connected Eufycams sent feeds from some users’ homes to other Eufycam customers, some of whom were in other countries at the time.

[…]

Source: Eufy security cam ‘stored unique ID’ of everyone filmed • The Register

Telehealth startup Cerebral shared millions of patients’ data with advertisers since 2019

Cerebral has revealed it shared the private health information, including mental health assessments, of more than 3.1 million patients in the United States with advertisers and social media giants like Facebook, Google and TikTok.

The telehealth startup, which exploded in popularity during the COVID-19 pandemic amid rolling lockdowns and a surge in online-only virtual health services, disclosed the security lapse [This is no security lapse! This is blatant greed served by peddling people’s personal information!] in a filing with the federal government, saying it shared the personal and health information of patients who used the app to search for therapy or other mental health care services.

Cerebral said that it collected and shared names, phone numbers, email addresses, dates of birth, IP addresses and other demographics, as well as data collected from Cerebral’s online mental health self-assessment, which may have also included the services that the patient selected, assessment responses and other associated health information.

The full disclosure follows:

If an individual created a Cerebral account, the information disclosed may have included name, phone number, email address, date of birth, IP address, Cerebral client ID number, and other demographic or health information. If, in addition to creating a Cerebral account, an individual also completed any portion of Cerebral’s online mental health self-assessment, the information disclosed may also have included the service the individual selected, assessment responses, and certain associated health information.

If, in addition to creating a Cerebral account and completing Cerebral’s online mental health self-assessment, an individual also purchased a subscription plan from Cerebral, the information disclosed may also have included subscription plan type, appointment dates and other booking information, treatment, and other clinical information, health insurance/pharmacy benefit information (for example, plan name and group/member numbers), and insurance co-pay amount.

Cerebral was sharing patients’ data with tech giants in real-time by way of trackers and other data-collecting code that the startup embedded within its apps. Tech companies and advertisers, like Google, Facebook and TikTok, allow developers to include snippets of their custom-built code, which allows the developers to share information about their app users’ activity with the tech giants, often under the guise of analytics but also for advertising.
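In practice, such a tracking snippet can be a handful of lines. The following is a generic illustration of the pattern; the endpoint, field names, and event are hypothetical, and this is not Cerebral’s code or any real vendor’s SDK.

import json
import urllib.request

TRACKER_ENDPOINT = "https://tracker.example.com/events"  # hypothetical third-party endpoint

def track_event(event_name: str, user: dict, properties: dict) -> None:
    payload = {
        "event": event_name,
        # Identifiers like these are what let a third party tie the event
        # back to a real person or an existing ad profile.
        "email": user.get("email"),
        "ip": user.get("ip"),
        "properties": properties,
    }
    req = urllib.request.Request(
        TRACKER_ENDPOINT,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)  # fires over the network, invisibly to the user

if __name__ == "__main__":
    # A single call like this inside an app is enough to share a health signal
    # with a third party. (It fails here because the endpoint is fictional; in a
    # real SDK it would fire silently and in real time.)
    track_event(
        "self_assessment_completed",
        user={"email": "patient@example.com", "ip": "203.0.113.7"},
        properties={"selected_service": "therapy", "assessment_score": 17},
    )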

But users often have no idea that they are opting in to this tracking simply by accepting the app’s terms of use and privacy policies, which many people don’t read.

Cerebral said in its notice to customers — buried at the bottom of its website — that the data collection and sharing has been going on since October 2019 when the startup was founded. The startup said it has removed the tracking code from its apps. While not mentioned, the tech giants are under no obligations to delete the data that Cerebral shared with them.

Because of how Cerebral handles confidential patient data, it’s covered under the U.S. health privacy law known as HIPAA. According to a list of health-related security lapses under investigation by the U.S. Department of Health and Human Services, which oversees and enforces HIPAA, Cerebral’s data lapse is the second-largest breach of health data in 2023.

News of Cerebral’s years-long data lapse comes just weeks after the U.S. Federal Trade Commission slapped GoodRx with a $1.5 million fine and ordered it to stop sharing patients’ health data with advertisers, and BetterHelp was ordered to pay customers $8.5 million for mishandling users’ data.

If you were wondering why startups today should terrify you, Cerebral is just the latest example.

Source: Telehealth startup Cerebral shared millions of patients’ data with advertisers | TechCrunch

 

Signal says it will shut down in UK over Online Safety Bill, which wants to install spyware on all your devices

[…]

The Online Safety Bill contemplates bypassing encryption using device-side scanning to protect children from harmful material, and coincidentally breaking the security of end-to-end encryption at the same time. It’s currently being considered in Parliament and has been the subject of controversy for months.

[ something something saving children – that’s always a bad sign when they trot that one out ]

The legislation contains what critics have called “a spy clause.” [PDF] It requires companies to remove child sexual exploitation and abuse (CSEA) material or terrorist content from online platforms “whether communicated publicly or privately.” As applied to encrypted messaging, that means either encryption must be removed to allow content scanning or scanning must occur prior to encryption.
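The technical objection is structural rather than about any particular scanning method: if scanning happens on the device before encryption, the scanner sees, and can disclose, the plaintext, so end-to-end encryption no longer guarantees that only the recipient can read the message. Here is a toy sketch of that ordering; the hash blocklist and reporting hook are hypothetical stand-ins for whatever the bill would actually require, not a description of any real system.

import hashlib

# Hypothetical hash database pushed to the device by a scanning mandate.
BLOCKLIST = {hashlib.sha256(b"example-prohibited-content").hexdigest()}

def report_to_authority(plaintext: bytes) -> None:
    # Stand-in for whatever reporting channel the law would require.
    print("match reported:", plaintext[:20])

def send_message(plaintext: bytes, encrypt) -> bytes | None:
    # The scan necessarily runs on the cleartext...
    if hashlib.sha256(plaintext).hexdigest() in BLOCKLIST:
        report_to_authority(plaintext)  # ...so it can also be disclosed
        return None
    # ...and only afterwards does the end-to-end encryption apply.
    return encrypt(plaintext)

# Toy "encryption" for demonstration only; a real client would use the Signal protocol.
ciphertext = send_message(b"hello", encrypt=lambda m: bytes(b ^ 0x42 for b in m))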

Signal draws the line

Such schemes have been condemned by technical experts and Signal is similarly unenthusiastic.

“Signal is a nonprofit whose sole mission is to provide a truly private means of digital communication to anyone, anywhere in the world,” said Meredith Whittaker, president of the Signal Foundation, in a statement provided to The Register.

“Many millions of people globally rely on us to provide a safe and secure messaging service to conduct journalism, express dissent, voice intimate or vulnerable thoughts, and otherwise speak to those they want to be heard by without surveillance from tech corporations and governments.”

“We have never, and will never, break our commitment to the people who use and trust Signal. And this means that we would absolutely choose to cease operating in a given region if the alternative meant undermining our privacy commitments to those who rely on us.”

Asked whether she was concerned that Signal could be banned under the Online Safety rules, Whittaker told The Register, “We were responding to a hypothetical, and we’re not going to speculate on probabilities. The language in the bill as it stands is deeply troubling, particularly the mandate for proactive surveillance of all images and texts. If we were given a choice between kneecapping our privacy guarantees by implementing such mass surveillance, or ceasing operations in the UK, we would cease operations.”

[…]

“If Signal withdraws its services from the UK, it will particularly harm journalists, campaigners and activists who rely on end-to-end encryption to communicate safely.”

[…]

 

Source: Signal says it will shut down in UK over Online Safety Bill

Google’s Play Store Privacy Labels Are a ‘Total Failure:’ Study

[…]

“There are two main problems here,” Mozilla’s Caltrider said. “The first problem is Google only requires the information in labels to be self-reported. So, fingers crossed, because it’s the honor system, and it turns out that most labels seem to be misleading.”

Google promises to make apps fix problems it finds in the labels, and threatens to ban apps that don’t come into compliance. But the company has never explained how it polices apps: Google said it’s vigilant about enforcement but didn’t describe its enforcement process, and didn’t respond to a question about any enforcement actions it has taken in the past.

[…]

Of course, Google could just read the privacy policies where apps spell out these practices, like Mozilla did, but there’s a bigger issue at play. These apps may not even be breaking Google’s privacy label rules, because those rules are so relaxed that “they let companies lie,” Caltrider said.

“That’s the second problem. Google’s own rules for what data practices you have to disclose are a joke,” Caltrider said. “The guidelines for the labels make them useless.”

If you go looking at Google’s rules for the data safety labels, which are buried deep in a cascading series of help menus, you’ll learn that there is a long list of things that you don’t have to tell your users about. In other words, you can say you don’t collect data or share it with third parties, while you do in fact collect data and share it with third parties.

For example, apps don’t have to disclose data sharing if they have “consent” to share the data from users, or if they’re sharing the data with “service providers,” or if the data is “anonymized” (which is nonsense), or if the data is being shared for “specific legal purposes.” There are similar exceptions for what counts as data collection. Those loopholes are so big you could fill up a truck with data and drive it right on through.
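Put those carve-outs into code and the problem is obvious: almost any realistic data flow hits at least one exemption, so the label can truthfully claim “no data shared.” The rule names below are a paraphrase of the exceptions described above, not Google’s actual policy logic or any official API.

def must_disclose_sharing(transfer: dict) -> bool:
    # Each flag mirrors one of the loopholes described in the article.
    exemptions = (
        transfer.get("user_consented"),               # "consent" buried in the terms
        transfer.get("recipient_is_service_provider"),
        transfer.get("data_is_anonymized"),           # however loosely "anonymized"
        transfer.get("specific_legal_purpose"),
    )
    return not any(exemptions)

# A fairly typical transfer: "anonymized" data sent to an ad partner that the
# app's terms designate as a service provider.
transfer = {
    "recipient": "ad-network.example",
    "recipient_is_service_provider": True,
    "data_is_anonymized": True,
    "user_consented": True,
}
print(must_disclose_sharing(transfer))  # False: the label can still say "no data shared"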

[…]

Source: Google’s Play Store Privacy Labels Are a ‘Total Failure:’ Study

Which goes to show, again, that walled-garden app stores really are no better than just downloading stuff from the internet, unless you’re the owner of the walled garden collecting 30 percent of the revenue for doing basically not much.

MetaGuard: Going Incognito in the Metaverse

[…]

with numerous recent studies showing the ease at which VR users can be profiled, deanonymized, and data harvested, metaverse platforms carry all the privacy risks of the current internet and more while at present having none of the defensive privacy tools we are accustomed to using on the web. To remedy this, we present the first known method of implementing an “incognito mode” for VR. Our technique leverages local ε-differential privacy to quantifiably obscure sensitive user data attributes, with a focus on intelligently adding noise when and where it is needed most to maximize privacy while minimizing usability impact. Moreover, our system is capable of flexibly adapting to the unique needs of each metaverse application to further optimize this trade-off. We implement our solution as a universal Unity (C#) plugin that we then evaluate using several popular VR applications. Upon faithfully replicating the most well known VR privacy attack studies, we show a significant degradation of attacker capabilities when using our proposed solution.

[…]

Source: MetaGuard: Going Incognito in the Metaverse | Berkeley RDI
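The core idea is easier to see in code than in prose: before any telemetry attribute leaves the headset, clamp it to a declared range and add Laplace noise calibrated to the privacy budget ε. Below is a generic Python sketch of that local-DP mechanism; the actual MetaGuard plugin is a Unity/C# implementation, and the attribute, bounds, and ε value here are purely illustrative.

import random

def laplace_noise(scale: float) -> float:
    # The difference of two i.i.d. exponentials is Laplace(0, scale).
    return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)

def privatize(value: float, lower: float, upper: float, epsilon: float) -> float:
    """Release `value` under epsilon-local differential privacy (Laplace mechanism)."""
    value = min(max(value, lower), upper)      # clamp to the declared range
    sensitivity = upper - lower                # worst-case change from one user's value
    noisy = value + laplace_noise(sensitivity / epsilon)
    return min(max(noisy, lower), upper)       # post-processing keeps the output plausible

# Example: a VR user's true height (metres) is obscured before it is reported
# to the metaverse platform. Smaller epsilon means more noise and more privacy.
true_height = 1.83
print(privatize(true_height, lower=1.40, upper=2.10, epsilon=1.0))

That one knob, ε, is the privacy-versus-usability trade-off the paper says its system tunes per application: heavier noise where an attribute matters most for deanonymization, lighter noise where accuracy matters for the experience.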