Singapore plans to scan your face instead of your passport

[…] “Singapore will be one of the first few countries in the world to introduce automated, passport-free immigration clearance,” said Minister for Communications and Information Josephine Teo in a wrap-up speech for the bill. Teo did concede that Dubai offers such clearance for select enrolled travelers, but there was no indication that other countries plan similar moves.

[…]

Another reason passports will likely remain relevant in Singapore’s airports is airline check-in. Airlines check passports not just to confirm identity, but also to verify visas and other documentation. Because airlines are often held responsible for stranded passengers, they will likely still be required to confirm that travelers hold the documents needed to enter their destination.

The Register asked Singapore Airlines to confirm whether passports will still be required on its flights after biometric clearance is implemented. The airline deferred to Changi’s operator, Changi Airport Group (CAG), which The Reg also contacted – we will update this story if a relevant reply arrives.

What travelers will see is an expansion of a program already taking shape. Changi airport currently uses facial recognition software and automated clearance for some parts of immigration.

[…]

Passengers who pre-submit required declarations online can already get through Singapore’s current automated immigration lanes in 20 to 30 seconds once they arrive at the front of the queue. It’s one reason Changi has a reputation for being quick to navigate.

[…]

According to CAG, the airport handled 5.12 million passenger movements in June 2023 alone. That figure, which stands at just 88 percent of pre-COVID levels, is expected only to grow, and the government sees such efficiency as critical to managing the impending growth.

But the reasons for biometric clearance go beyond a boom in travelers. With an aging population and shrinking workforce, Singapore’s Immigration & Checkpoints Authority (ICA) will have “to cope without a significant increase in manpower,” said Teo.

Additionally, security threats including pandemics and terrorism call for Singapore to “go upstream” on immigration measures, “such as the collection of advance passenger and crew information, and entry restrictions to be imposed on undesirable foreigners, even before they arrive at our shores,” added the minister.

This collection and sharing of biometric information is what enables the passport-free immigration process – passenger and crew information will need to be disclosed to the airport operator for use in bag management, access control, gate boarding, and duty-free purchases, as well as for tracing individuals within the airport for security purposes.

The shared biometrics will serve as a “single token of authentication” across all touch points.

Members of Singapore’s parliament have raised concerns about shifting to universal automated clearance, including data privacy and the handling of technical glitches.

According to Teo, only Singaporean companies will be eligible for ICA-related IT contracts, vendors will be bound by non-disclosure agreements, and employees of such firms must undergo security screening. Traveler data will be encrypted and transmitted through data exchange gateways.

As for who will protect the data, that role goes to CAG, with ICA auditing its compliance.

In case of disruptions that can’t be handled by an uninterruptible power supply, off-duty officers will be called in to go back to analog.

And even though the ministry is pushing for universal coverage, there will be some exceptions, such as travelers who are unable to provide certain biometrics or are less digitally literate. Teo promised their clearance can be handled manually by immigration officers.

Source: Singapore plans to scan your face instead of your passport • The Register

Data safety is a real issue here – how long will the data be retained, and for what other purposes will it be used?

Firefox now has private browser-based website translation – no cloud servers required

For years, web browsers have offered tools that let you translate websites. But they typically rely on cloud-based translation services like Google Translate or Microsoft’s Bing Translator.

The latest version of Mozilla’s Firefox web browser does things differently. Firefox 118 brings support for Fullpage Translation, which can translate websites entirely in your browser. In other words, everything happens locally on your computer without any data sent to Microsoft, Google, or other companies.

Here’s how it works. Firefox will notice when you visit a website in a supported language that’s different from your default language, and a translate icon will show up in the address bar.

Tap that icon and you’ll see a pop-up window that asks what languages you’d like to translate from and to. If the browser doesn’t automatically detect the language of the website you’re visiting, you can set these manually.

Then click the “Translate” button, and a moment later the text on the page should be visible in your target language. If you’d prefer to go back to the original language, just tap the translate icon again and choose the option that says “show original.”

You can also tap the settings icon in the translation menu and choose to “always translate” or “never translate” a specific language so that you won’t have to manually invoke the translation every time you visit sites in that language.

Now for the bad news: Firefox Fullpage Translation only supports 9 languages so far:

  • Bulgarian
  • Dutch
  • English
  • French
  • German
  • Italian
  • Polish
  • Portuguese
  • Spanish

[…]

Source: Firefox 118 brings browser-based website translation (no cloud servers required… for a handful of supported languages) – Liliputing

Philips Hue / Signify Ecosystem: ‘Collapsing Into Stupidity’

The Philips Hue ecosystem of home automation devices is “collapsing into stupidity,” writes Rachel Kroll, veteran sysadmin and former production engineer at Facebook. “Unfortunately, the idiot C-suite phenomenon has happened here too, and they have been slowly walking down the road to full-on enshittification.” From her blog post: I figured something was up a few years ago when their iOS app would block entry until you pushed an upgrade to the hub box. That kind of behavior would never fly with any product team that gives a damn about their users — want to control something, so you start up the app? Forget it, we are making you placate us first! How is that user-focused, you ask? It isn’t.

Their latest round of stupidity pops up a new EULA and forces you to take it or, again, you can’t access your stuff. But that’s just more unenforceable garbage, so who cares, right? Well, it’s getting worse.

It seems they are planning on dropping an update which will force you to log in. Yep, no longer will your stuff Just Work across the local network. Now it will have yet another garbage “cloud” “integration” involved, and they certainly will find a way to make things suck even worse for you.

If you have just the lights and smart outlets, Kroll recommends deleting the units from the Hue Hub and adding them to an IKEA Dirigera hub. “It’ll run them just fine, and will also export them to HomeKit so that much will keep working as well.” That said, it’s not a perfect solution. You will lose motion sensor data, the light level, the temperature of that room, and the ability to set custom behaviors with those buttons.

“Also, there’s no guarantee that IKEA won’t hop on the train to sketchville and start screwing over their users as well,” adds Kroll.
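The local control Kroll wants to preserve is simple in practice: the Hue bridge answers plain HTTP on your LAN through its local REST API (known as CLIP), with no account or internet connection involved. A minimal sketch of what that looks like today – the bridge address, app key, and helper names below are placeholders of ours, not vendor code:

```typescript
// Hypothetical bridge IP and application key -- replace with your own.
// The Hue bridge's local REST API ("CLIP") needs no cloud account or
// internet connection to control lights on your LAN.
const BRIDGE = "192.168.1.50";   // your bridge's LAN address (assumption)
const APP_KEY = "your-app-key";  // created by pressing the bridge link button

// Build the local API URL for a light's state endpoint (pure, testable).
function lightUrl(bridge: string, appKey: string, lightId: number): string {
  return `http://${bridge}/api/${appKey}/lights/${lightId}/state`;
}

// Turn a light on at a given brightness, entirely over the local network.
async function setLight(
  lightId: number,
  on: boolean,
  bri: number,
): Promise<unknown> {
  const res = await fetch(lightUrl(BRIDGE, APP_KEY, lightId), {
    method: "PUT",
    body: JSON.stringify({ on, bri }), // bri ranges 1-254 in CLIP v1
  });
  return res.json();
}
```

This is exactly the kind of no-login, LAN-only access a forced cloud account puts at risk.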

Source: Is the Philips Hue Ecosystem ‘Collapsing Into Stupidity’? – Slashdot

Philips Hue will force users to upload their data to Hue cloud – changing their terms after you bought a product on the promise of not needing an account

Today’s story is about Philips Hue by Signify. They will soon start forcing accounts on all users and uploading user data to their cloud. For now, Signify says you’ll still be able to control your Hue lights locally as you’re currently used to, but we don’t know if this may change in the future. The privacy policy allows them to store the data and share it with partners.

[…]

When you open the Philips Hue app you will now be prompted with a new message: Starting soon, you’ll need to be signed in.

[…]

So today, you can choose not to share your information with Signify by not creating an account. But that choice will soon be taken away, and all users will need to share their data with Philips Hue.

Confirming the news

I didn’t want to cry wolf, so I decided to verify the above statement with Signify. They sadly confirmed:

Twitter conversation with Philips Hue (source: Twitter)

The policy they are referring to is their privacy policy (April 2023 edition, download version).

[…]

When asked what drove this change, the answer is the usual: security. Well, Signify, you know what keeps user data even more secure? Not uploading it all to your cloud.

[…]

If you’re a user, we encourage you to reach out to Signify support and voice your concern.

NOTE: Their support form doesn’t work. You can visit their Facebook page, though.

Dear Signify, please reconsider your decision and do not move forward with it. You’ve reversed bad decisions before. People care about privacy and forcing accounts will hurt the brand in the long term. The pain caused by this is not worth the gain.

Source: Philips Hue will force users to upload their data to Hue cloud

No, Philips / Signify – I have used these devices for years without having to have an account or be connected to the internet. That’s one of the reasons I bought into Hue. Making us give up data to use something we already bought is a dangerous decision given the private and exploitable nature of the data, as well as a greedy and rude one.

T-Mobile US exposes some customer data, but don’t say breach

T-Mobile US has had another bad week on the infosec front – this time stemming from a system glitch that exposed customer account data, followed by allegations of another breach the carrier denied.

According to customers who complained of the issue on Reddit and X, the T-Mobile app was displaying other customers’ data instead of their own – including the strangers’ purchase history, credit card information, and address.

This being T-Mobile’s infamously leaky US operation, people immediately began leaping to the obvious conclusion: another cyber attack or breach.

“There was no cyber attack or breach at T-Mobile,” the telco assured us in an emailed statement. “This was a temporary system glitch related to a planned overnight technology update involving limited account information for fewer than 100 customers, which was quickly resolved.”

Note, as Reddit poster Jman100_JCMP did, that T-Mobile means fewer than 100 customers had their data exposed – but far more appear to have been able to view those customers’ data.

As for the breach, the appearance of exposed T-Mobile data was alleged by malware repository vx-underground’s X (Twitter) account. The Register understands T-Mobile examined the data and determined that an independently owned T-Mobile dealer, Connectivity Source, was the source – the data stemming from a breach that dealer suffered in April. We understand T-Mobile believes vx-underground misinterpreted a data dump.

Connectivity Source was indeed the subject of a breach in April, in which an unknown attacker made off with employee data including names and social security numbers – around 17,835 of them from across the US, where Connectivity appears to do business exclusively as a white-labelled T-Mobile US retailer.

Looks like the carrier really dodged the bullet on this one – there’s no way Connectivity Source employees could be mistaken for its own staff.

T-Mobile US has already experienced two prior breaches this year, but that hasn’t imperilled the biz much – its profits have soared recently and some accompanying sizable layoffs will probably keep things in the black for the foreseeable future.

Source: T-Mobile US exposes some customer data, but don’t say breach • The Register

Dutch privacy foundation SDBN sues Twitter for collecting and selling data via MoPub (Wordfeud, Duolingo, etc.) without notifying users

The Dutch Data Protection Foundation (SDBN) wants to enforce a mass claim for 11 million people through the courts against social media company X, the former Twitter. Between 2013 and 2021, that company owned the advertising platform MoPub, which, according to the privacy foundation, illegally traded in data from users of more than 30,000 free apps such as Wordfeud, Buienradar and Duolingo.

SDBN has been trying to reach an agreement with X since November last year, but according to the foundation, without success. That is why SDBN is now starting a lawsuit at the Rotterdam court. Central to this is MoPub’s handling of personal data such as religious beliefs, sexual orientation and health. In addition to compensation, SDBN wants this data to be destroyed.

The foundation also believes that users are entitled to a share of the profits. A lot of money can be made by sharing personal data with thousands of companies, says SDBN chair Anouk Ruhaak, although she concedes it is difficult to find out exactly which companies had access to the data. “By holding X Corp liable, we hope not only to obtain compensation for all victims, but also to put a stop to this type of practice,” said Ruhaak. “Unfortunately, these types of companies often only listen when it hurts financially.”

Source: De Ondernemer | Privacystichting SDBN wil via rechter massaclaim bij…

Join the claim here

Google Chrome’s Privacy Sandbox: any site can now query all your habits

[…]

Specifically, the web giant’s Privacy Sandbox APIs, a set of ad delivery and analysis technologies, now function in the latest version of the Chrome browser. Website developers can thus write code that calls those APIs to deliver and measure ads to visitors with compatible browsers.

That is to say, sites can ask Chrome directly what kinds of topics you’re interested in – topics automatically selected by Chrome from your browsing history – so that ads personalized to your activities can be served. This is supposed to be better than being tracked via third-party cookies, support for which is being phased out. There are other aspects to the sandbox that we’ll get to.

While Chrome is the main vehicle for Privacy Sandbox code, Microsoft Edge, based on the open source Chromium project, has also shown signs of supporting the technology. Apple and Mozilla have rejected at least the Topics API for interest-based ads on privacy grounds.

[…]

“The Privacy Sandbox technologies will offer sites and apps alternative ways to show you personalized ads while keeping your personal information more private and minimizing how much data is collected about you.”

These APIs include:

  • Topics: Locally track browsing history to generate ads based on demonstrated user interests without third-party cookies or identifiers that can track across websites.
  • Protected Audience (FLEDGE): Serve ads for remarketing (e.g. you visited a shoe website so we’ll show you a shoe ad elsewhere) while mitigating third-party tracking across websites.
  • Attribution Reporting: Data to link ad clicks or ad views to conversion events (e.g. sales).
  • Private Aggregation: Generate aggregate data reports using data from Protected Audience and cross-site data from Shared Storage.
  • Shared Storage: Allow unlimited, cross-site storage write access with privacy-preserving read access. In other words, you graciously provide local storage via Chrome for ad-related data or anti-abuse code.
  • Fenced Frames: Securely embed content onto a page without sharing cross-site data. Or iframes without the security and privacy risks.

These technologies, Google and industry allies believe, will allow the super-corporation to drop support for third-party cookies in Chrome next year without seeing a drop in targeted advertising revenue.

[…]

“Privacy Sandbox removes the ability of website owners, agencies and marketers to target and measure their campaigns using their own combination of technologies in favor of a Google-provided solution,” James Rosewell, co-founder of MOW, told The Register at the time.

[…]

Controversially, in the US, where the lack of coherent privacy rules suits ad companies just fine, the popup merely informs the user that these APIs are now present and active in the browser; actually managing them requires a visit to Chrome’s Settings page – you have to opt out, if you haven’t already. In the EU, as required by law, the notification is an invitation to opt in to interest-based ads via Topics.

Source: How Google Chrome’s Privacy Sandbox works and what it means • The Register

Google taken to court in NL for large scale privacy breaches

The Foundation for the Protection of Privacy Interests and the Consumers’ Association are taking the next step in their fight against Google. The tech company is being taken to court today for ‘large-scale privacy violations’.

The proceedings demand, among other things, that Google stop its constant surveillance and sharing of personal data through online advertising auctions and also pay damages to consumers. Since the announcement of this action on May 23, 2023, more than 82,000 Dutch people have already joined the mass claim.

According to the organizations, Google is acting in violation of Dutch and European privacy legislation. The tech giant collects users’ online behavior and location data on an immense scale through its services and products, without providing enough information or obtaining permission. Google then shares that data, including highly sensitive personal data about health, ethnicity and political preference, for example, with hundreds of parties via its online advertising platform.

Google is constantly monitoring everyone. Through third-party cookies, which are invisible to users, Google continues to collect data via other parties’ websites and apps, even when someone is not using its own products or services. This enables Google to monitor almost the entire internet behavior of its users.

All these matters have been discussed with Google, to no avail.

The Foundation for the Protection of Privacy Interests represents the interests of users of Google’s products and services living in the Netherlands who have been harmed by privacy violations. The foundation is working together with the Consumers’ Association in the case against Google. Consumers’ Association Claimservice, a partnership between the Consumers’ Association and ConsumersClaim, processes the registrations of affiliated victims.

More than 82,000 consumers have already registered for the Google claim. They demand compensation of 750 euros per participant.

A lawsuit by the American government against Google also starts today in the US. Ten weeks have been set aside for it. That case mainly revolves around the power of Google’s search engine.

Essentially, Google is accused of entering into exclusive agreements to guarantee the use of its search engine. These are agreements that prevent alternative search engines from being pre-installed, or from Google’s search app being removed.

Source: Google voor de rechter gedaagd wegens ‘grootschalige privacyschendingen’ – Emerce (NL)

Mozilla investigates 25 major car brands and finds privacy is shocking

[…]

The foundation, the Firefox browser maker’s netizen-rights org, assessed the privacy policies and practices of 25 automakers and found all failed its consumer privacy tests and thereby earned its Privacy Not Included (PNI) warning label.

If you care even a little about privacy, stay as far away from Nissan’s cars as you possibly can

In research published Tuesday, the org warned that manufacturers may collect and commercially exploit much more than location history, driving habits, in-car browser histories, and music preferences from today’s internet-connected vehicles. Instead, some makers may handle deeply personal data, such as – depending on the privacy policy – sexual activity, immigration status, race, facial expressions, weight, health, and even genetic information, the Mozilla team found.

Cars may collect at least some of that info about drivers and passengers using sensors, microphones, cameras, phones, and other devices people connect to their network-connected cars, according to Mozilla. And they collect even more info from car apps – such as Sirius XM or Google Maps – plus dealerships, and vehicle telematics.

Some car brands may then share or sell this information to third parties. Mozilla found 21 of the 25 automakers it considered say they may share customer info with service providers, data brokers, and the like, and 19 of the 25 say they can sell personal data.

More than half (56 percent) also say they share customer information with the government or law enforcement in response to a “request.” This isn’t necessarily a court-ordered warrant, and can also be a more informal request.

And some – like Nissan – may also use this private data to develop customer profiles that describe drivers’ “preferences, characteristics, psychological trends, predispositions, behavior, attitudes, intelligence, abilities, and aptitudes.”

Yes, you read that correctly. According to Mozilla’s privacy researchers, Nissan says it can infer how smart you are, then sell that assessment to third parties.

[…]

Nissan isn’t the only brand to collect information that seems completely irrelevant to the vehicle itself or the driver’s transportation habits.

“Kia mentions sex life,” said Jen Caltrider, who leads Mozilla’s Privacy Not Included program. “General Motors and Ford both mentioned race and sexual orientation. Hyundai said that they could share data with government and law enforcement based on formal or informal requests. Car companies can collect even more information than reproductive health apps in a lot of ways.”

[…]

The Privacy Not Included team contacted Nissan and all of the other brands listed in the research: Lincoln, Mercedes-Benz, Acura, Buick, GMC, Cadillac, Fiat, Jeep, Chrysler, BMW, Subaru, Dacia, Hyundai, Dodge, Lexus, Chevrolet, Tesla, Ford, Honda, Kia, Audi, Volkswagen, Toyota and Renault.

Only three – Mercedes-Benz, Honda, and Ford – responded, we’re told.

“Mercedes-Benz did answer a few of our questions, which we appreciate,” Caltrider said. “Honda pointed us continually to their public privacy documentation to answer our questions, but they didn’t clarify anything. And Ford said they discussed our request internally and made the decision not to participate.”

This makes Mercedes’ response to The Register a little puzzling. “We are committed to using data responsibly,” a spokesperson told us. “We have not received or reviewed the study you are referring to yet and therefore decline to comment to this specifically.”

A spokesperson for the four Fiat-Chrysler-owned brands (Fiat, Chrysler, Jeep, and Dodge) told us: “We are reviewing accordingly. Data privacy is a key consideration as we continually seek to serve our customers better.”

[…]

The Mozilla Foundation also called out consent as an issue some automakers have placed in a blind spot.

“I call this out in the Subaru review, but it’s not limited to Subaru: it’s the idea that anybody that is a user of the services of a connected car, anybody that’s in a car that uses services is considered a user, and any user is considered to have consented to the privacy policy,” Caltrider said.

Opting out of data collection is another concern.

Tesla, for example, appears to give users the choice between protecting their data or protecting their car. Its privacy policy does allow users to opt out of data collection but, as Mozilla points out, Tesla warns customers: “If you choose to opt out of vehicle data collection (with the exception of in-car Data Sharing preferences), we will not be able to know or notify you of issues applicable to your vehicle in real time. This may result in your vehicle suffering from reduced functionality, serious damage, or inoperability.”

While technically this does give users a choice, it also essentially says if you opt out, “your car might become inoperable and not work,” Caltrider said. “Well, that’s not much of a choice.”

[…]

Source: Mozilla flunks 25 major car brands for data privacy fails • The Register

Australian Government, Of All Places, Says Age Verification Is A Privacy & Security Nightmare

In the past I’ve sometimes described Australia as the land where internet policy is completely upside down. Rather than having a system that protects intermediaries from liability for third party content, Australia went the opposite direction. Rather than recognizing that a search engine merely links to content and isn’t responsible for the content at those links, Australia has said that search engines can be held liable for what they link to. Rather than protect the free expression of people on the internet who criticize the rich and powerful, Australia has extremely problematic defamation laws that result in regular SLAPP suits and suppression of speech. Rather than embrace encryption that protects everyone’s privacy and security, Australia requires companies to break encryption, insisting only criminals use it.

It’s basically been “bad internet policy central,” or the place where good internet policy goes to die.

And, yet, there are some lines that even Australia won’t cross. Specifically, the Australian eSafety Commission says that it will not require adult websites to use age verification tools, because it would put the privacy and security of Australians’ data at risk. (For unclear reasons, the Guardian does not provide the underlying documents, so we’re fixing that and providing both the original roadmap and the Australian government’s response.)

[…]

Of course, in France, the Data Protection authority released a paper similarly noting that age verification was a privacy and security nightmare… and the French government just went right on mandating the use of the technology. In Australia, the eSafety Commission pointed to the French concerns as a reason not to rush into the tech, meaning that Australia took the lessons from French data protection experts more seriously than the French government did.

And, of course, here in the US, the Congressional Research Service similarly found serious problems with age verification technology, but it hasn’t stopped Congress from releasing a whole bunch of “save the children” bills that are built on a foundation of age verification.

[…]

Source: Australian Government, Of All Places, Says Age Verification Is A Privacy & Security Nightmare | Techdirt

Companies are recording your conversations whilst you are on hold with them

Is Achmea or Bol.com customer service putting you on hold? Then everything you say can still be heard by some of their employees. This is evident from research by Radar.

When you call customer service, you often hear: “Please note: this conversation may be recorded for training purposes.” Nothing special. But if you call the insurer Zilveren Kruis, you will also hear: “Note: Even if you are on hold, our quality employees can hear what you are saying.”

Striking, because the Dutch Data Protection Authority states that recording customers while they are on hold is not allowed. Companies may record the conversation itself, for example to conclude a contract or to improve their service, but not what customers say while waiting on hold.

Both mortgage provider Woonfonds and insurers Zilveren Kruis, De Friesland and Interpolis confirm that the recording tape continues to run if you are on hold with them, while this violates privacy rules.

Bol.com also continues to eavesdrop on you while you are on hold, the webshop confirms, citing the same reason: “It is technically not possible to temporarily stop the recording and start it again when the conversation resumes.”

KLM, Ziggo, Eneco, Vattenfall, T-Mobile, Nationale Nederlanden, ASR, ING and Rabobank say they do not listen in on their customers while they are on hold.

Source: Diverse bedrijven waaronder bol.com nemen gesprekken ‘in de wacht’ op – Emerce

China floats rules for facial recognition technology – they are good, and it would be great if the government was bound by them too!

China has released draft regulations to govern the country’s facial recognition technology that include prohibitions on its use to analyze race or ethnicity.

According to the Cyberspace Administration of China (CAC), the purpose is to “regulate the application of face recognition technology, protect the rights and interests of personal information and other personal and property rights, and maintain social order and public safety,” as outlined by a smattering of data security, personal information, and network laws.

The draft rules, which are open for comments until September 7, include some vague directives not to use face recognition technology to disrupt social order, endanger national security, or infringe on the rights of individuals and organizations.

The rules also state that facial recognition tech must be used only when there is a specific purpose and sufficient necessity, strict protection measures are taken, and only when non-biometric measures won’t do.

The rules require consent to be obtained before processing face information, except in cases where consent is not required – which The Reg assumes covers individuals such as prisoners and instances of national security. Parental or guardian consent is needed for those under the age of 14.

Building managers can’t require its use to enter and exit property – they must provide alternative means of verifying personal identity for those who want them.

Nor can facial recognition be relied on for matters of “major personal interest” such as social assistance and real estate disposal. For those, manual verification of personal identity must be used, with facial recognition serving only as an auxiliary means of verification.

And collecting images for internal management should only be done in a reasonably sized area.

In businesses like hotels, banks, airports, art galleries, and more, the tech should not be used to verify personal identity. If the individual chooses to link their identity to the image, they should be informed either verbally or in writing and provide consent.

Collecting images is also not allowed in private spaces like hotel rooms, public bathrooms, and changing rooms.

Furthermore, those using facial surveillance techniques must display reminder signs. Personal images and identification information must be kept confidential, and only anonymized data may be saved.

Under the draft regs, those that store face information of more than 10,000 people must register with a local branch of the CAC within 30 working days.

Most interesting, however, is Article 11, which, when translated from Chinese via automated tools, reads:

No organization or individual shall use face recognition technology to analyze personal race, ethnicity, religion, sensitive personal information such as beliefs, health status, social class, etc.

The CAC does not say if the Chinese Communist Party counts as an “organization.”

Human rights groups have credibly asserted that Uyghurs are routinely surveilled using facial recognition technology, in addition to being incarcerated, required to perform forced labor, re-educated to abandon their beliefs and cultural practices, and may even be subjected to sterilization campaigns.

Just last month, physical security monitoring org IPVM reported it came into possession of a contract between China-based Hikvision and Hainan Province’s Chengmai County for $6 million worth of cameras that could detect whether a person was ethnically Uyghur using minority recognition technology.

Hikvision denied the report and said it last provided such functionality in 2018.

Beyond facilitating identification of Uyghurs, it’s clear the cat is out of the bag when it comes to facial recognition technology in China by both government and businesses alike. Local police use it to track down criminals and its use feeds into China’s social credit system.

“‘Sky Net,’ a facial recognition system that can scan China’s population of about 1.4 billion people in a second, is being used in 16 Chinese cities and provinces to help police crack down on criminals and improve security,” said state-sponsored media in 2018.

Regardless, the CAC said that, once the draft rules are passed, violators would face criminal and civil liability.

Source: China floats rules for facial recognition technology • The Register

Reddit Wins, Doesn’t Have to NARC on Users Who Discussed Torrenting

This weekend, a federal court tossed a subpoena, in a case against the internet service provider Grande, that would have required Reddit to reveal the identities of anonymous users who discussed torrenting movies.

The case was originally filed in 2021 by 20 movie producers against Grande Communications in the Western District of Texas federal court. The lawsuit claims that Grande is liable for copyright infringement because it allegedly ignored the torrenting of 45 of the producers’ movies on its networks. As part of the case, the plaintiffs attempted to subpoena Reddit for IP addresses and user data for accounts that openly discussed torrenting on the platform. This weekend, Magistrate Judge Laurel Beeler denied the subpoena—meaning Reddit is off the hook.

“The plaintiffs thus move to compel Reddit to produce the identities of its users who are the subject of the plaintiffs’ subpoena,” Magistrate Judge Beeler wrote in her decision. “The issue is whether that discovery is permissible despite the users’ right to speak anonymously under the First Amendment. The court denies the motion because the plaintiffs have not demonstrated a compelling need for the discovery that outweighs the users’ First Amendment right to anonymous speech.”

Reddit was previously cleared of a similar subpoena in a related lawsuit by the same judge back in May, as reported by Ars Technica. Reddit had been asked to unmask eight users who were active in piracy threads on the platform, but the social media website raised the same First Amendment defense.

 

Source: Reddit Wins, Doesn’t Have to NARC on Users Who Discussed Torrenting

New privacy deal allows US tech giants to continue storing European user data on American servers

Nearly three years after a 2020 court decision threatened to grind transatlantic e-commerce to a halt, the European Union has adopted a plan that will allow US tech giants to continue storing data about European users on American soil. In a decision announced Monday, the European Commission approved the Trans-Atlantic Data Privacy Framework. Under the terms of the deal, the US will establish a court Europeans can engage with if they feel a US tech platform violated their data privacy rights. President Joe Biden announced the creation of the Data Protection Review Court in an executive order he signed last fall. The court can order the deletion of user data and impose other remedial measures. The framework also limits access to European user data by US intelligence agencies.

The Trans-Atlantic Data Privacy Framework is the latest chapter in a saga now more than a decade in the making. Only earlier this year, the EU fined Meta a record-breaking €1.2 billion after finding that Facebook’s practice of moving EU user data to US servers violated the bloc’s digital privacy laws. The EU also ordered Meta to delete the data it already had stored on its US servers if the company didn’t have a legal way to keep that information there by the fall. As The Wall Street Journal notes, Monday’s agreement should allow Meta to avoid the need to delete any data, but the company may still end up paying the fine.

Even with a new agreement in place, it probably won’t be smooth sailing just yet for the companies that depend the most on cross-border data flows. Max Schrems, the lawyer who successfully challenged the previous Safe Harbor and Privacy Shield agreements that governed transatlantic data transfers before today, told The Journal he plans to challenge the new framework. “We would need changes in US surveillance law to make this work and we simply don’t have it,” he said. For what it’s worth, the European Commission says it’s confident it can defend its new framework in court.

Source: New privacy deal allows US tech giants to continue storing European user data on American servers | Engadget

Another problem is that the US side of the deal is enshrined not in law but in a presidential executive order, which can be revoked at any time.

Google Says It’ll Scrape Everything You Post Online for AI

Google updated its privacy policy over the weekend, explicitly saying the company reserves the right to scrape just about everything you post online to build its AI tools. If Google can read your words, assume they belong to the company now, and expect that they’re nesting somewhere in the bowels of a chatbot.

“Google uses information to improve our services and to develop new products, features and technologies that benefit our users and the public,” the new Google policy says. “For example, we use publicly available information to help train Google’s AI models and build products and features like Google Translate, Bard, and Cloud AI capabilities.”

Fortunately for history fans, Google maintains a history of changes to its terms of service. The new language amends an existing policy, spelling out the new ways your online musings might be used for the tech giant’s AI tools.

[…]

This is an unusual clause for a privacy policy. Typically, these policies describe ways that a business uses the information that you post on the company’s own services. Here, it seems Google reserves the right to harvest and harness data posted on any part of the public web, as if the whole internet is the company’s own AI playground. Google did not immediately respond to a request for comment.

[…]

Source: Google Says It’ll Scrape Everything You Post Online for AI

The rest of the article unfortunately descends into Gizmodo’s luddite War Against AI ™ language, because it misses the point that this is basically nothing much new – Google has been able to use any information you type into any of its products for pretty much any purpose (eg advertising, email scanning, etc) for decades (which is why I don’t use Chrome). However, it is something that most people simply don’t realise.

Sacramento Sheriff is sharing license plate reader data with anti-abortion states, records show

In 2015, Democratic Elk Grove Assemblyman Jim Cooper voted for Senate Bill 34, which restricted law enforcement from sharing automated license plate reader (ALPR) data with out-of-state authorities. In 2023, now-Sacramento County Sheriff Cooper appears to be doing just that.

The Electronic Frontier Foundation (EFF), a digital rights group, has sent Cooper a letter requesting that the Sacramento County Sheriff’s Office cease sharing ALPR data with out-of-state agencies that could use it to prosecute someone for seeking an abortion.

According to documents that the Sheriff’s Office provided EFF through a public records request, it has shared license plate reader data with law enforcement agencies in states that have passed laws banning abortion, including Alabama, Oklahoma and Texas.

[…]

Schwartz said that a sheriff in Texas, Idaho or any other state with an abortion ban on the books could use that data to track people’s movements around California, knowing where they live, where they work and where they seek reproductive medical care, including abortions.

The Sacramento County Sheriff’s Office isn’t the only one sharing that data; in May, EFF released a report showing that 71 law enforcement agencies in 22 California counties — including Sacramento County — were sharing such data. The practice is in violation of a 2015 law that states “a (California law enforcement) agency shall not sell, share, or transfer ALPR information, except to another (California law enforcement) agency, and only as otherwise permitted by law.”

[…]

 

Source: Sacramento Sheriff is sharing license plate reader data with anti-abortion states, records show

France Allows Police to Remotely Turn On GPS, Camera, Audio on Phones

Amidst ongoing protests in France, the country has just passed a new bill that will allow police to remotely access suspects’ cameras, microphones, and GPS on cell phones and other devices.

As reported by Le Monde, the bill has been criticized by the French people as a “snoopers’ charter” that allows police unfettered access to the location of its citizens. Moreover, police can activate cameras and microphones to take video and audio recordings of suspects. The bill will reportedly only apply to suspects in crimes that are punishable by a minimum of five years in jail.

[…]

French politicians added an amendment that requires judge approval for any surveillance conducted under the scope of the bill and limits the duration of surveillance to six months.

[…]

In 2021, The New York Times reported that the French Parliament passed a bill that would expand the French police force’s ability to monitor civilians using drones. French President Emmanuel Macron argued at the time that the bill was meant to protect police officers from increasingly violent protestors.

[…]

 

Source: France Passes Bill Allowing Police to Remotely Access Phones

$6.3b US firm TeleSign breached GDPR, reputation-scoring half of the planet’s mobile phone users

A US-based fraud prevention company is in hot water over allegations it not only collected data from millions of EU citizens and processed it using automated tools without their knowledge, but that it did so in the United States, all in violation of the EU’s data protection rules.

The complaint was filed by Austrian privacy advocacy group noyb, helmed by lawyer Max Schrems, and it doesn’t pull any punches in its claims that TeleSign, through its former Belgian parent company BICS, secretly collected data on cellphone users around the world.

That data, noyb alleges, was fed into an automated system that generates “reputation scores” that TeleSign sells to its customers, which include TikTok, Salesforce, Microsoft and AWS, among others, for verifying the identity of the person behind a phone number and preventing fraud.

BICS, which acquired TeleSign in 2017, describes itself as “a global provider of international wholesale connectivity and interoperability services,” in essence operating as an interchange for various national cellular networks. Per noyb, BICS operates in more than 200 countries around the world and “gets detailed information (e.g. the regularity of completed calls, call duration, long-term inactivity, range activity, or successful incoming traffic) [on] about half of the worldwide mobile phone users.”

That data is regularly shared with TeleSign, noyb alleges, without any notification to the customers whose data is being collected and used.

[…]

In its complaint, an auto-translated English version of which was reviewed by The Register, noyb alleges that TeleSign is in violation of the GDPR’s provisions banning the use of automated profiling tools, as well as rules requiring affirmative consent to process EU citizens’ data.

[…]

When BICS acquired TeleSign in 2017, TeleSign came under the partial control of BICS’ parent company, Belgian telecom giant Proximus, which had spun BICS off from its own operations in 1997 while retaining a partial stake.

In 2021, Proximus bought out BICS’ other shareholders, making it the sole owner of both the telecom interchange and TeleSign.

With that in mind, noyb is also leveling charges against Proximus and BICS. In its complaint, noyb said Proximus was asked by EU citizens from various countries to provide records of the data TeleSign processed, as is their right under Article 15 of the GDPR.

The complainants weren’t given the information they requested, noyb says; it claims what was handed over was simply a template copy of the EU’s standard contractual clauses (SCCs), which businesses have used to transmit data between the EU and US while the two sides try to work out data transfer rules that Schrems can’t get struck down in court.

[…]

Noyb is seeking cessation of all data transfers from BICS to TeleSign and of all processing of said data, and is requesting deletion of all unlawfully transmitted data. It is also asking Belgian data protection authorities to fine Proximus, which noyb said could reach as high as €236 million ($257 million), a mere 4 percent of Proximus’s global turnover.

[…]

Source: US firm ‘breached GDPR’ by reputation-scoring EU citizens • The Register

This firm is absolutely massive, yet it’s a smaller part of BICS and chances are that you’ve never ever heard of either of them!

Fitbit Privacy & security guide – no one told me it would send my data to the US

As of January 14, 2021, Google officially became the owner of Fitbit. That worried many privacy conscious users. However, Google promised that “Fitbit users’ health and wellness data won’t be used for Google ads and this data will be kept separate from other Google ad data” as part of the deal with global regulators when they bought Fitbit. This is good.

And Fitbit seems to do an OK job with privacy and security. It de-identifies the data it collects so it’s (hopefully) not personally identifiable. We say hopefully because, depending on the kind of data, it’s been found to be pretty easy to de-anonymize these data sets and track down an individual’s patterns, especially with location data. So be aware that with Fitbit—or any fitness tracker—you are strapping on a device that tracks your location, heart rate, sleep patterns, and more. That’s a lot of personal information gathered in one place.

What is not good is what can happen with all this very personal health data if others aren’t careful. A recent report showed that health data for over 61 million fitness tracker users, including both Fitbit and Apple, was exposed when a third-party company that allowed users to sync their health data from their fitness trackers did not secure the data properly. Personal information such as names, birthdates, weight, height, gender, and geographical location for Fitbit and other fitness-tracker users was left exposed because the company didn’t password protect or encrypt their database. This is a great reminder that yes, while Fitbit might do a good job with their own security, anytime you sync or share that data with anyone else, it could be vulnerable.

[…]

The Fitbit app does allow for period tracking though. And the app, like most wearable tracking apps, collects a whole bunch of personal, body-related data that could potentially be used to tell if a user is pregnant.

Fortunately, Fitbit doesn’t sell this data but it does say it can share some personal data for interest-based advertising. Fitbit also can share your wellness data with other apps, insurers, and employers if you sign up for that and give your consent.

[…]

Fitbit isn’t the wearable we’d trust the most with our private reproductive health data. Apple, Garmin, and Oura all make us feel a bit more comfortable with this personal information.

Source: Fitbit | Privacy & security guide | Mozilla Foundation

So when installing the app, it says it needs to process your data in the USA – which basically means it’s up for grabs for all and sundry. There is a reason the EU has the GDPR. But why does it need to send the data anywhere other than your phone anyway?!

This is something that almost no-one mentions when you read the reviews on these things.

Amazon’s Ring used to spy on customers, children, FTC says in privacy settlement

A former employee of Amazon.com’s Ring doorbell camera unit spied for months on female customers in 2017 with cameras placed in bedrooms and bathrooms, the Federal Trade Commission said in a court filing on Wednesday when it announced a $5.8 million settlement with the company over privacy violations.

Amazon also agreed to pay $25 million to settle allegations it violated children’s privacy rights when it failed to delete Alexa recordings at the request of parents and kept them longer than necessary, according to a court filing in federal court in Seattle that outlined a separate settlement.

The FTC settlements are the agency’s latest effort to hold Big Tech accountable for policies critics say place profits from data collection ahead of privacy.

The FTC is also probing Amazon.com’s $1.7 billion deal to buy iRobot Corp (IRBT.O), which was announced in August 2022 in Amazon’s latest push into smart home devices, and has a separate antitrust probe underway into Amazon.

[…]

The FTC said Ring gave employees unrestricted access to customers’ sensitive video data: “As a result of this dangerously overbroad access and lax attitude toward privacy and security, employees and third-party contractors were able to view, download, and transfer customers’ sensitive video data.”

In one instance in 2017, an employee of Ring viewed videos made by at least 81 female customers and Ring employees using Ring products. “Undetected by Ring, the employee continued spying for months,” the FTC said.

[…]

In May 2018, an employee gave information about a customer’s recordings to the person’s ex-husband without consent, the complaint said. In another instance, an employee was found to have given Ring devices to people and then watched their videos without their knowledge, the FTC said.

[…]

rules against deceiving consumers who used Alexa. For example, the FTC complaint says that Amazon told users it would delete voice transcripts and location information upon request, but then failed to do so.

“The unlawfully retained voice recordings provided Amazon with a valuable database for training the Alexa algorithm to understand children, benefiting its bottom line at the expense of children’s privacy,” the FTC said.

Source: Amazon’s Ring used to spy on customers, FTC says in privacy settlement

The total settlement of $30m is insanely low considering the scale of the violations and their continuing nature.

Meta ordered to suspend Facebook EU data flows as it’s hit with record €1.2BN privacy fine under GDPR – 10 years and 3 court cases later

[…]

Today the European Data Protection Board (EDPB) announced that Meta has been fined €1.2 billion (close to $1.3 billion) — which the Board confirmed is the largest fine ever issued under the bloc’s General Data Protection Regulation (GDPR). (The prior record goes to Amazon, which was stung for $887 million for misusing customers’ data for ad targeting back in 2021.)

Meta’s sanction is for breaching conditions set out in the pan-EU regulation governing transfers of personal data to so-called third countries (in this case the US) without ensuring adequate protections for people’s information.

European judges have previously found U.S. surveillance practices to conflict with EU privacy rights.

[…]

The decision emerging out of the Irish DPC flows from a complaint made against Facebook’s Irish subsidiary almost a decade ago, by privacy campaigner Max Schrems — who has been a vocal critic of Meta’s lead data protection regulator in the EU, accusing the Irish privacy regulator of taking an intentionally long and winding path in order to frustrate effective enforcement of the bloc’s rulebook.

On the substance of his complaint, Schrems argues that the only sure-fire way to fix the EU-U.S. data flows doom loop is for the U.S. to grasp the nettle and reform its surveillance practices.

Responding to today’s order in a statement (via his privacy rights not-for-profit, noyb), he said: “We are happy to see this decision after ten years of litigation. The fine could have been much higher, given that the maximum fine is more than 4 billion and Meta has knowingly broken the law to make a profit for ten years. Unless US surveillance laws get fixed, Meta will have to fundamentally restructure its systems.”

[…]

This suggests the Irish regulator is routinely under-enforcing the GDPR on the most powerful digital platforms and doing so in a way that creates additional problems for efficient functioning of the regulation since it strings out the enforcement process. (In the Facebook data flows case, for example, objections were raised to the DPC’s draft decision last August — so it’s taken some nine months to get from that draft to a final decision and suspension order now.) And, well, if you string enforcement out for long enough you may allow enough time for the goalposts to be moved politically that enforcement never actually needs to happen. Which, while demonstrably convenient for data-mining tech giants like Meta, does make a mockery of citizens’ fundamental rights.

As noted above, with today’s decision, the DPC is actually implementing a binding decision taken by the EDPB last month in order to settle ongoing disagreement over Ireland’s draft decision — so much of the substance of what’s being ordered on Meta today comes, not from Dublin, but from the bloc’s supervisor body for privacy regulators.

[…]

In further public remarks today, Schrems once again hit out at the DPC’s approach — accusing the regulator of essentially working to thwart enforcement of the GDPR. “It took us ten years of litigation against the Irish DPC to get to this result. We had to bring three procedures against the DPC and risked millions of procedural costs. The Irish regulator has done everything to avoid this decision but was consistently overturned by the European Courts and institutions. It is kind of absurd that the record fine will go to Ireland — the EU Member State that did everything to ensure that this fine is not issued,” he said.

[…]

Earlier reports have suggested the European Commission could adopt the new EU-U.S. data deal in July, although it has declined to provide a date for this since it says multiple stakeholders are involved in the process.

Such a timeline would mean Meta gets a new escape hatch to avoid having to suspend Facebook’s service in the EU; and can keep relying on this high level mechanism so long as it stands.

If that’s how the next section of this tortuous complaint saga plays out, it will mean that a case against Facebook’s illegal data transfers which dates back almost ten years at this point will, once again, be left twisting in the wind — raising questions about whether it’s really possible for Europeans to exercise legal rights set out in the GDPR. (And, indeed, whether deep-pocketed tech giants, whose ranks are packed with well-paid lawyers and lobbyists, can be regulated at all?)

[…]

Analysis of five years of the GDPR, put out earlier this month by the Irish Council for Civil Liberties (ICCL), dubs the enforcement situation a “crisis” — warning that “Europe’s failure to enforce the GDPR exposes everyone to acute hazard in the digital age” and fingering Ireland’s DPA as a leading cause of enforcement failure against Big Tech.

And the ICCL points the finger of blame squarely at Ireland’s DPC.

“Ireland continues to be the bottleneck of enforcement: It delivers few draft decisions on major cross-border cases, and when it does eventually do so other European enforcers routinely vote by majority to force it to take tougher enforcement action,” the report argues — before pointing out that: “Uniquely, 75% of Ireland’s GDPR investigation decisions in major EU cases were overruled by majority vote of its European counterparts at the EDPB, who demand tougher enforcement action.”

The ICCL also highlights that nearly all (87%) of cross-border GDPR complaints to Ireland repeatedly involve the same handful of Big Tech companies: Google, Meta (Facebook, Instagram, WhatsApp), Apple, TikTok, and Microsoft. But it says many complaints against these tech giants never even get a full investigation — thereby depriving complainants of the ability to exercise their rights.

The analysis points out that the Irish DPC chooses “amicable resolution” to conclude the vast majority (83%) of cross-border complaints it receives (citing the oversight body’s own statistics) — further noting: “Using amicable resolution for repeat offenders, or for matters likely to impact many people, contravenes European Data Protection Board guidelines.”

[…]

The reality is that a patchwork of problems frustrates effective enforcement across the bloc, as you might expect with a decentralized oversight structure that must accommodate linguistic and cultural differences across 27 Member States, along with varying opinions on how best to approach oversight of big (and very personal) concepts like privacy, which may mean very different things to different people.

Schrems’ privacy rights not-for-profit, noyb, has been collating information on this patchwork of GDPR enforcement issues — which include things like under-resourcing of smaller agencies and a general lack of in-house expertise to deal with digital issues; transparency problems and information black holes for complainants; cooperation issues and legal barriers frustrating cross-border complaints; and all sorts of ‘creative’ interpretations of complaints “handling” — meaning nothing being done about a complaint still remains a common outcome — to name just a few of the issues it’s encountered.

[…]

Source: Meta ordered to suspend Facebook EU data flows as it’s hit with record €1.2BN privacy fine under GDPR | TechCrunch

The article contains the history of the court cases Schrems had to bring to get Ireland and the EU to do anything about data sharing problems – it’s an interesting read.

Online age verification is coming, and privacy is on the chopping block

A spate of child safety rules might make going online in a few years very different, and not just for kids. In 2022 and 2023, numerous states and countries are exploring age verification requirements for the internet, either as an implicit demand or a formal rule. The laws are positioned as a way to protect children on a dangerous internet. But the price of that protection might be high: nothing less than the privacy of, well, everyone.

Government agencies, private companies, and academic researchers have spent years seeking a way to solve the thorny question of how to check internet users’ ages without the risk of revealing intimate information about their online lives. But after all that time, privacy and civil liberties advocates still aren’t convinced the government is ready for the challenge.

“When you have so many proposals floating around, it’s hard to ensure that everything is constitutionally sound and actually effective for kids,” Cody Venzke, a senior policy counsel at the American Civil Liberties Union (ACLU), tells The Verge. “Because it’s so difficult to identify who’s a kid online, it’s going to prevent adults from accessing content online as well.”

In the US and abroad, lawmakers want to limit children’s access to two things: social networks and porn sites. Louisiana, Arkansas, and Utah have all passed laws that set rules for underage users on social media. Meanwhile, multiple US federal bills are on the table, and so are laws in other countries, like the UK’s Online Safety Bill. Some of these laws demand specific features from age verification tools. Others simply punish sites for letting anyone underage use them — a more subtle request for verification.

Online age verification isn’t a new concept. In the US, laws like the Children’s Online Privacy Protection Act (COPPA) already apply special rules to people under 13. And almost everyone who has used the internet — including major platforms like YouTube and Facebook — has checked a box to access adult content or entered a birth date to create an account. But there’s also almost nothing to stop them from faking it.

As a result, lawmakers are calling for more stringent verification methods. “From bullying and sex trafficking to addiction and explicit content, social media companies subject children and teens to a wide variety of content that can hurt them, emotionally and physically,” Senator Tom Cotton (R-AR), the backer of the Protect Kids Online Act, said. “Just as parents safeguard their kids from threats in the real world, they need the opportunity to protect their children online.”

Age verification systems fall into a handful of categories. The most common option is to rely on a third party that knows your identity — by directly validating a credit card or government-issued ID, for instance, or by signing up for a digital intermediary like Allpasstrust, the service Louisianans must use for porn access.

More experimentally, there are solutions that estimate a user’s age without an ID. One potential option, which is already used by Facebook and Instagram, would use a camera and facial recognition to guess whether you’re 18. Another, which is highlighted as a potential age verification solution by France’s National Commission on Informatics and Liberty (CNIL), would “guess” your age based on your online activity.

As pointed out by CNIL’s report on various online age verification options, all these methods have serious flaws. CNIL notes that identifying someone’s age with a credit card would be relatively easy since the security infrastructure is already there for online payments. But some adult users — especially those with lower incomes — may not have a card, which would seriously limit their ability to access online services. The same goes for verification methods using government-issued IDs. Children can also snap up a card that’s lying around the house to verify their age.

“As we think about kids’ online safety, we need to do so in a way that doesn’t enshrine and legitimize this very surveillance regime that we’re trying to push back on”

Similarly, the Congressional Research Service (CRS) has expressed concerns about online age verification. In a report it updated in March, the US legislature’s in-house research institute found that many kids aged 16 to 19 might not have a government-issued ID, such as a driver’s license, that they can use to verify their age online. While it says kids could use their student ID instead, it notes that they may be easier to fake than a government-issued ID. The CRS isn’t totally on board with relying on a national digital ID system for online age verification either, as it could “raise privacy and security concerns.”

Face-based age detection might seem like a quick fix to these concerns. And unlike a credit card — or full-fledged facial identification tools — it doesn’t necessarily tell a site who you are, just whether it thinks you’re over 18.

But these systems may not accurately identify the age of a person. Yoti, the facial analysis service used by Facebook and Instagram, claims it can estimate the age of people 13 to 17 years old as under 25 with 99.93 percent accuracy while identifying kids that are six to 11 years old as under 13 with 98.35 percent accuracy. This study doesn’t include any data on distinguishing between young teens and older ones, however — a crucial element for many young people.

Although Yoti claims its system has no “discernible bias across gender or skin tone,” previous research indicates that facial recognition services are less reliable for people of color, gender-nonconforming people, and people with facial differences or asymmetry. This would, again, unfairly block certain people from accessing the internet.

It also poses a host of privacy risks, as the companies that capture facial recognition data would need to ensure that this biometric data doesn’t get stolen by bad actors. UK civil liberties group Big Brother Watch argues that “face prints’ are as sensitive as fingerprints” and that “collecting biometric data of this scale inherently puts people’s privacy at risk.” CNIL points out that you could mitigate some risks by performing facial recognition locally on a user’s device — but that doesn’t solve the broader problems.

Inferring ages based on browsing history raises even more problems. This kind of inferential system has been implemented on platforms like Facebook and TikTok, both of which use AI to detect whether a user is under the age of 13 based on their activity on the platform. That includes scanning a user’s activity for “happy birthday” messages or comments that indicate they’re too young to have an account. But the system hasn’t been explored on a larger scale — where it could involve having an AI scan your entire browsing history and estimate your age based on your searches and the sites you interact with. That would amount to large-scale digital surveillance, and CNIL outright calls the system “intrusive.” It’s not even clear how well it would work.
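To make the described approach concrete, here is a toy sketch of keyword-based age inference. It is entirely hypothetical — the function name and regex are illustrative, and real platforms use machine learning models over far richer activity signals — but it shows the basic idea of mining comments like “happy birthday” messages for an age signal:

```python
import re

# Toy sketch of keyword-based age inference, loosely modeled on the
# "happy birthday" comment scanning described for Facebook and TikTok.
# Hypothetical: real systems use ML over many more activity signals.
BIRTHDAY = re.compile(r"happy\s+(\d{1,2})(?:st|nd|rd|th)?\s+birthday", re.IGNORECASE)

def underage_signal(comments):
    """Return True if any comment implies the account holder is under 13."""
    for text in comments:
        match = BIRTHDAY.search(text)
        if match and int(match.group(1)) < 13:
            return True
    return False

print(underage_signal(["Happy 12th birthday!!", "nice pic"]))  # True
print(underage_signal(["happy 21st birthday mate"]))           # False
```

Even this trivial example hints at the failure modes: a birthday wish aimed at a friend, not the account holder, produces a false positive — one reason scaling such inference to full browsing histories is both unreliable and, as CNIL puts it, intrusive.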

In France, where lawmakers are working to restrict access to porn sites, CNIL worked with École Polytechnique professor Olivier Blazy to develop a solution that attempts to minimize the amount of user information sent to a website. The proposed method uses an ephemeral “token”: when you access an age-restricted website, your browser or phone receives a “challenge,” which is relayed to a third party that can authenticate your age, like your bank, internet provider, or a digital ID service. That third party issues its approval, allowing you to access the website.

The system’s goal is to make sure a user is old enough to access a service without revealing any personal details, either to the website they’re using or the companies and governments providing the ID check. The third party “only knows you are doing an age check but not for what,” Blazy explains to The Verge, and the website would not know which service verified your age nor any of the details from that transaction.
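A minimal sketch of that double-blind flow might look like the following. To be clear, this is an assumption-laden illustration, not the actual CNIL/Blazy design: the real proposal relies on proper digital signatures or zero-knowledge proofs, whereas here a shared-key HMAC stands in for the verifier’s attestation, and all names are invented.

```python
import hmac
import hashlib
import secrets

# Sketch only: a real deployment would use asymmetric signatures or
# zero-knowledge proofs, not a key shared with the website (a shared
# key would let the site forge attestations). Names are invented.
VERIFIER_KEY = secrets.token_bytes(32)

def site_issue_challenge():
    # The age-restricted site issues a one-time random challenge.
    return secrets.token_hex(16)

def verifier_attest(challenge, user_is_over_18):
    # The third party (bank, ISP, digital ID service) checks the
    # user's age out-of-band, then attests only to the challenge plus
    # a yes/no claim. It never learns which site issued the challenge.
    if not user_is_over_18:
        return None
    return hmac.new(VERIFIER_KEY, challenge.encode(), hashlib.sha256).hexdigest()

def site_verify(challenge, token):
    # The site checks the attestation without learning who the user
    # is or any detail beyond "old enough".
    expected = hmac.new(VERIFIER_KEY, challenge.encode(), hashlib.sha256).hexdigest()
    return token is not None and hmac.compare_digest(expected, token)
```

The key property is that the attestation binds only the random challenge to a yes/no answer: nothing identifying the user crosses to the website, and nothing identifying the website crosses to the verifier.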

Blazy hopes this system can prevent very young children from accessing explicit content. But even with this complex solution, he acknowledges that users in France will be able to get around the method by using a virtual private network (VPN) to conceal their location. This is a problem that plagues nearly any location-specific verification system: as long as another government lets people access a site more easily, users can route their traffic through it. The only surefire solution would be draconian crackdowns on privacy tools that would dramatically compromise freedom online.

Some governments are trying to offer a variety of options and let users pick between them. A report from the European Parliament Think Tank, an in-house department that helps shape legislation, highlights an EU “browser-based interoperable age verification method” called euCONSENT, which will allow users to verify their identity online by choosing from a network of approved third-party services. Since users could pick their preferred verifier, one service might ask a user to upload an official government document, while another might rely on facial recognition.

To privacy and civil liberties advocates, none of these solutions are ideal. Venzke tells The Verge that implementing age verification systems encourages a system that collects our data and could pave the way for more surveillance in the future. “Bills that are trying to establish inferences about how old you are or who you are based on that already existing capitalistic surveillance, are just threatening to legitimize that surveillance,” Venzke says. “As we think about kids’ online safety, we need to do so in a way that doesn’t enshrine and legitimize this very surveillance regime that we’re trying to push back on.”

Age verification laws “are going to face a very tough battle in court”

The Electronic Frontier Foundation, a digital rights group, similarly argues that all age verification solutions are “surveillance systems” that will “lead us further towards an internet where our private data is collected and sold by default.”

Even some strong supporters of child safety bills have expressed concerns about making age verification part of them. Senator Richard Blumenthal (D-CT), one of the backers of the Kids Online Safety Act, objected to the idea in a call with reporters earlier this month. In a statement, he tells The Verge that “age verification would require either a national database or a goldmine of private information on millions of kids in Big Tech’s hands” and that “the potential for exploitation and misuse would be huge.” (Despite this, the EFF believes that KOSA’s requirements would inevitably result in age verification mandates anyway.)

In the US, it’s unclear whether online age verification would stand up under legal scrutiny at all. The US court system has already struck down efforts to implement online age verification several times in the past. As far back as 1997, the Supreme Court ruled parts of the 1996 Communications Decency Act unconstitutional, as it imposed restrictions on “knowing transmission of obscene or indecent messages” and required age verification online. More recently, a federal court found in 2016 that a Louisiana law, which required websites that publish “material harmful to minors” to verify users’ ages, “creates a chilling effect on free speech.”

Vera Eidelman, a staff attorney with the ACLU, tells The Verge that existing age verification laws “are going to face a very tough battle in court.” “For the most part, requiring content providers online to verify the ages of their users is almost certainly unconstitutional, given the likelihood that it will make people uncomfortable to exercise their rights to access certain information if they have to unmask or identify themselves,” Eidelman says.

But concerns over surveillance still haven’t stopped governments around the globe, including here in the US, from pushing ahead with online age verification mandates. There are currently several bills in the pipeline in Congress that are aimed at protecting children online, including the Protecting Kids on Social Media Act, which calls for the test of a national age verification system that would block users under the age of 13 from signing up for social media. In the UK, where the heavily delayed Online Safety Bill will likely become law, porn sites would be required to verify users’ ages, while other websites would be forced to give users the option to do so as well.

Some proponents of online safety laws say they’re no different than having to hand over an ID to purchase alcohol. “We have agreed as a society not to let a 15-year-old go to a bar or a strip club,” said Laurie Schlegel, the legislator behind Louisiana’s age restriction law, after its passage. “The same protections should be in place online.” But the comparison misses the vastly different implications for free speech and privacy. “When we think about bars or ordering alcohol at a restaurant, we just assume that you can hand an ID to a bouncer or a waiter, they’ll hand it back, and that’s the end of it,” Venzke adds. “Problem is, there’s no infrastructure on the internet right now to [implement age verification] in a safe, secure, private way that doesn’t chill people’s ability to get to constitutionally protected speech.”

Most people also spend a relatively small amount of their time in real-world adults-only spaces, while social media and online communications tools are ubiquitous ways of finding information and staying in touch with friends and family. Even sites with sexually explicit content — the target of Louisiana’s bill — could be construed to include sites offering information about sexual health and LGBTQ resources, despite claims by lawmakers that this won’t happen.

Even if many of these rules are shot down, the way we use the internet may never be the same again. With age checks awaiting us online, some people may find themselves locked out of increasingly large numbers of platforms — leaving the online world more closed-off than ever.

Source: Online age verification is coming, and privacy is on the chopping block – The Verge

Google Will Require Android Apps to Make Account Deletion Easier

Right now, developers simply need to declare to Google that account deletion is somehow possible, but beginning next year, developers will have to make it easier to delete data through both their app and an online portal. Google specifies:

For apps that enable app account creation, developers will soon need to provide an option to initiate account and data deletion from within the app and online.

This means any app that lets you create an account to use it is required to allow you to delete that information when you’re done with it (or rather, request the developer delete the data from their servers). Although you can request that your data be deleted now, it usually requires manually contacting the developer to remove it. This new policy would mean developers have to offer a kill switch from the get-go rather than having Android users do the legwork.
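On the backend, the “kill switch” amounts to an endpoint the app and web form can both call. The sketch below is hypothetical; Google’s policy dictates what the app must expose to users, not how a developer’s server implements it, and the data structures here are invented for illustration.

```python
# Hypothetical server-side handler for an in-app or web-form
# "delete my account" request. Field names and storage are invented;
# Play's policy does not prescribe an implementation.
accounts = {"user-42": {"email": "a@example.com", "photos": ["p1.jpg"]}}
deletion_log = []  # audit trail: which accounts requested deletion

def handle_deletion_request(user_id):
    """Remove the account record and log the request for audit."""
    record = accounts.pop(user_id, None)
    deletion_log.append(user_id)
    return record is not None
```

The audit log matters because, as the policy implies, developers may need to show Google that a deletion request actually resulted in data being scrubbed, not just an account being deactivated.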

The web deletion requirement is particularly new and must be “readily discoverable.” Developers must provide a link to a web form from the app’s Play Store landing page, with the idea being to let users delete account data even if they no longer have the app installed. Per the existing Android developer policy, all apps must declare how they collect and handle user data—Google introduced the policy in 2021 and made it mandatory last year. When you go into the Play Store and expand the “Data Safety” section under each app listing, developers list out data collection by criteria.

Simply removing an app from your Android device doesn’t completely scrub your data. Like software on a desktop operating system, files and folders are sometimes left behind from when the app was operating. This new policy will hopefully help you keep your data secure by wiping unnecessary account info from the app developer’s servers, and should also cut down on straggling data on your device. Conversely, you don’t have to delete your data if you think you’ll come back to the app later. When it says you have a “choice,” Google wants to ensure it can point to something obvious.

It’s unclear how Google will determine if a developer follows the rules. It is up to the app developer to disclose whether user-specific app data is actually deleted. Earlier this year, Mozilla called out Google after discovering significant discrepancies between the top 20 most popular free apps’ internal privacy policies and those they listed in the Play Store.

https://gizmodo.com/google-android-delete-account-apps-request-uninstall-1850304540

Tesla Employees Have Been Meme-ing Your Private Car Videos

“We could see inside people’s garages and their private properties,” a former employee told Reuters. “Let’s say that a Tesla customer had something in their garage that was distinctive, you know, people would post those kinds of things.”

One office in particular, located in San Mateo, reportedly had a “free-wheeling” atmosphere, where employees would share videos and images with wild abandon. These pics or vids would often be “marked-up” via Adobe Photoshop, former employees said, converting drivers’ personal experiences into memes that would circulate throughout the office.

“The people who buy the car, I don’t think they know that their privacy is, like, not respected,” one former employee was quoted as saying. “We could see them doing laundry and really intimate things. We could see their kids.”

Another former employee seemed to admit that all of this was very uncool: “It was a breach of privacy, to be honest. And I always joked that I would never buy a Tesla after seeing how they treated some of these people,” the employee told the news outlet. Yes, it’s always a vote of confidence when a company’s own employees won’t use the products that they sell.

Privacy concerns related to Tesla’s data-guzzling autos aren’t exactly new. Back in 2021, the Chinese government formally banned the vehicles on the premises of certain military installations, calling the company a “national security” threat. The Chinese were worried that the cars’ sensors and cameras could be used to funnel data out of China and back to the U.S. for the purposes of espionage. Beijing seems to have been on to something—although it might be the case that the spying threat comes less from America’s spooks than it does from bored slackers back at Tesla HQ.

One of the reasons that Tesla’s cameras seem so creepy is that you can never really tell if they’re on or not. A couple of years ago, a stationary Tesla helped catch a suspect in a Massachusetts hate crime, when its security system captured images of the man slashing tires in the parking lot of a predominantly Black church. The man was later arrested on the basis of the photos.

Reuters notes that it wasn’t ultimately “able to determine if the practice of sharing recordings, which occurred within some parts of Tesla as recently as last year, continues today or how widespread it was.”

With all this in mind, you might as well always assume that your Tesla is watching, right? And, now that Reuters’ story has come out, you should also probably assume that some bored coder is watching too, potentially in the hopes of converting your dopiest in-car moment into a meme.

https://gizmodo.com/tesla-elon-musk-car-camera-videos-employees-watching-1850307575

Wow, who knew? How surprising… not.

Tesla workers shared and memed sensitive images recorded by customer cars

Private camera recordings, captured by cars, were shared in chat rooms: ex-workers
Circulated clips included one of child being hit by car: ex-employees
Tesla says recordings made by vehicle cameras ‘remain anonymous’
One video showed submersible vehicle from James Bond film, owned by Elon Musk


LONDON/SAN FRANCISCO, April 6 (Reuters) – Tesla Inc assures its millions of electric car owners that their privacy “is and will always be enormously important to us.” The cameras it builds into vehicles to assist driving, it notes on its website, are “designed from the ground up to protect your privacy.”

But between 2019 and 2022, groups of Tesla employees privately shared via an internal messaging system sometimes highly invasive videos and images recorded by customers’ car cameras, according to interviews by Reuters with nine former employees.

Some of the recordings caught Tesla customers in embarrassing situations. One ex-employee described a video of a man approaching a vehicle completely naked.

Also shared: crashes and road-rage incidents. One crash video in 2021 showed a Tesla driving at high speed in a residential area hitting a child riding a bike, according to another ex-employee. The child flew in one direction, the bike in another. The video spread around a Tesla office in San Mateo, California, via private one-on-one chats, “like wildfire,” the ex-employee said.

Other images were more mundane, such as pictures of dogs and funny road signs that employees made into memes by embellishing them with amusing captions or commentary, before posting them in private group chats. While some postings were only shared between two employees, others could be seen by scores of them, according to several ex-employees.

Tesla states in its online “Customer Privacy Notice” that its “camera recordings remain anonymous and are not linked to you or your vehicle.” But seven former employees told Reuters the computer program they used at work could show the location of recordings – which potentially could reveal where a Tesla owner lived.

One ex-employee also said that some recordings appeared to have been made when cars were parked and turned off. Several years ago, Tesla would receive video recordings from its vehicles even when they were off, if owners gave consent. It has since stopped doing so.

“We could see inside people’s garages and their private properties,” said another former employee. “Let’s say that a Tesla customer had something in their garage that was distinctive, you know, people would post those kinds of things.”

Tesla didn’t respond to detailed questions sent to the company for this report.

About three years ago, some employees stumbled upon and shared a video of a unique submersible vehicle parked inside a garage, according to two people who viewed it. Nicknamed “Wet Nellie,” the white Lotus Esprit sub had been featured in the 1977 James Bond film, “The Spy Who Loved Me.”

The vehicle’s owner: Tesla Chief Executive Elon Musk, who had bought it for about $968,000 at an auction in 2013. It is not clear whether Musk was aware of the video or that it had been shared.

The submersible Lotus vehicle nicknamed “Wet Nellie” that featured in the 1977 James Bond film, “The Spy Who Loved Me,” and which Tesla chief executive Elon Musk purchased in 2013. Tim Scott ©2013 Courtesy of RM Sotheby’s
Musk didn’t respond to a request for comment.

To report this story, Reuters contacted more than 300 former Tesla employees who had worked at the company over the past nine years and were involved in developing its self-driving system. More than a dozen agreed to answer questions, all speaking on condition of anonymity.

Reuters wasn’t able to obtain any of the shared videos or images, which ex-employees said they hadn’t kept. The news agency also wasn’t able to determine if the practice of sharing recordings, which occurred within some parts of Tesla as recently as last year, continues today or how widespread it was. Some former employees contacted said the only sharing they observed was for legitimate work purposes, such as seeking assistance from colleagues or supervisors.

https://www.reuters.com/technology/tesla-workers-shared-sensitive-images-recorded-by-customer-cars-2023-04-06/