US Court rules automakers can record and save owner text messages and call logs

A federal judge on Tuesday refused to bring back a class action lawsuit alleging four auto manufacturers had violated Washington state’s privacy laws by using vehicles’ on-board infotainment systems to record and intercept customers’ private text messages and mobile phone call logs.

The Seattle-based appellate judge ruled that the practice does not meet the threshold for an illegal privacy violation under state law, handing a big win to automakers Honda, Toyota, Volkswagen and General Motors, which are defendants in five related class action suits focused on the issue. One of those cases, against Ford, had been dismissed on appeal previously.

The plaintiffs in the four live cases had appealed a prior judge’s dismissal. But the appellate judge ruled Tuesday that the interception and recording of mobile phone activity did not meet the Washington Privacy Act’s standard that a plaintiff must prove that “his or her business, his or her person, or his or her reputation” has been threatened.

In an example of the issues at stake, plaintiffs in one of the five cases filed suit against Honda in 2021, arguing that since at least 2014, infotainment systems in the company’s vehicles have been downloading and storing a copy of all text messages on smartphones connected to the system.

An Annapolis, Maryland-based company, Berla Corporation, provides the technology to some car manufacturers but does not offer it to the general public, the lawsuit said. Once messages are downloaded, Berla’s software makes it impossible for vehicle owners to access their communications and call logs but does provide law enforcement with access, the lawsuit said.

Many car manufacturers are selling car owners’ data to advertisers as a revenue boosting tactic, according to earlier reporting by Recorded Future News. Automakers are exponentially increasing the number of sensors they place in their cars every year with little regulation of the practice.

Source: Court rules automakers can record and intercept owner text messages

WhatsApp will let you hide your IP address from whoever you call

A new feature in WhatsApp will let you hide your IP address from whoever you call using the app. Knowing someone’s IP address can reveal a lot of personal information such as their location and internet service provider, so having the option to hide it is a major privacy win. “This new feature provides an additional layer of privacy and security geared towards our most privacy-conscious users,” WhatsApp wrote in a blog post.

WhatsApp currently routes calls either through its own servers or over a direct peer-to-peer connection with whoever you are calling, depending on network conditions. Peer-to-peer calls often provide better voice quality, but require both devices to know each other’s IP addresses.

Once you turn on the new feature, known simply as “Protect IP address in calls”, however, WhatsApp will always relay your calls through its own servers rather than establishing a peer-to-peer connection, even if it means a slight hit to sound quality. All calls will continue to remain end-to-end encrypted, even if they go through WhatsApp’s servers, the company said.
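
As a rough illustration of the trade-off described above (a hypothetical sketch, not WhatsApp’s actual code or API), the routing decision boils down to which address the other party ever gets to see:

```typescript
// Hypothetical sketch of the call-routing choice (not WhatsApp's real code).
// With "Protect IP address in calls" enabled, only the relay's address is
// ever exposed to the peer; otherwise a P2P route may expose your own IP.
type CallRoute =
  | { kind: "relay"; addressSeenByPeer: string }
  | { kind: "peer-to-peer"; addressSeenByPeer: string };

function chooseRoute(protectIp: boolean, myIp: string, relayIp: string): CallRoute {
  if (protectIp) {
    return { kind: "relay", addressSeenByPeer: relayIp };
  }
  // Without the setting, prefer P2P when network conditions allow: usually
  // better voice quality, at the cost of revealing your IP to the other side.
  return { kind: "peer-to-peer", addressSeenByPeer: myIp };
}

console.log(chooseRoute(true, "203.0.113.7", "relay.example.net"));
// -> { kind: "relay", addressSeenByPeer: "relay.example.net" }
```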

WhatsApp has been adding more privacy features over the last few months. In June, the company added a feature that let people automatically silence unknown callers. It also introduced a “Privacy Checkup” section to allow users to tune up a host of privacy settings from a single place in the app, and earlier this year, added a feature that lets people lock certain chats with a fingerprint or facial recognition.

Source: WhatsApp will let you hide your IP address from whoever you call

So this means that Meta / Facebook / WhatsApp will now know who you are calling, once you turn this privacy feature on. To gain some privacy from the person on the other end of the call, you sacrifice privacy to Meta.

In other news, it’s easy to find the IP address of someone you are whatsapping with

EU Commission’s nameless experts behind its “spy on all EU citizens” *cough* “child sexual abuse” law

The EU Ombudsman has found maladministration in the European Commission’s refusal to provide the list of experts (a list whose existence it at first denied) with whom it worked in drafting the regulation to detect and remove online child sexual abuse material.

Last December, the Irish Council for Civil Liberties (ICCL) filed complaints to the European Ombudsman against the European Commission for refusing to provide the list of external experts involved in drafting the regulation to detect and remove online child sexual abuse material (CSAM).

Consequently, the Ombudsman concluded that “the Commission’s failure to identify the list of experts as falling within the scope of the complainant’s public access request constitutes maladministration”.

The EU watchdog also slammed the Commission for not respecting the deadlines for handling access to document requests, delays that have become somewhat systematic.

The Commission told the Ombudsman inquiry team during a meeting that the requests by the ICCL “seemed to be requests to justify a political decision rather than requests for public access to a specific set of documents”.

The request sought access to the list of experts the Commission consulted, who also participated in meetings with the EU Internet Forum held in 2020, according to an impact assessment report dated 11 May 2022.

The main political groups of the EU Parliament reached an agreement on the draft law to prevent the dissemination of online child sexual abuse material (CSAM) on Tuesday (24 October).

The list of experts was of public interest because independent experts have stated on several occasions that detecting CSAM in private communications without violating encryption would be impossible.

The Commission, however, suggested otherwise in its previous texts, which has sparked controversy ever since the introduction of the file last year.

During the meetings, “academics, experts and companies were invited to share their perspectives on the matter as well as any documents that could be valuable for the discussion.”

Based on these discussions, and both oral and written inputs, an “outcome document” was produced, the Commission said.

According to a report about the meeting between the Commission and the Ombudsman, this “was the only document that was produced in relation to these workshops.”

The phantom list

While a list of participants does exist, it was not disclosed “for data protection and public security reasons, given the nature of the issues discussed”, the Commission said, according to the EU Ombudsman.

Besides security reasons, participants were also concerned about their public image, the Commission told the EU Ombudsman, adding that “disclosure could be exploited by malicious actors to circumvent detection mechanisms and moderation efforts by companies”.

Moreover, “revealing some of the strategies and tactics of companies, or specific technical approaches also carries a risk of informing offenders on ways to avoid detection”.

However, the existence of this list was at first denied by the Commission.

Kris Shrishak, senior fellow at the Irish Council for Civil Liberties, told Euractiv that the Commission had told him no such list existed. However, the EU Ombudsman later informed him that this was not correct, since its inquiry had found a list of experts.

The only reason the ICCL learned that there is a list is because of the Ombudsman, Shrishak emphasised.

Previously, the Commission said there were email exchanges about the meetings, which contained only the links to the online meetings.

“Following the meeting with the Ombudsman inquiry team, the Commission tried to retrieve these emails” but since they were more than two years old at the time, “they had already been deleted in line with the Commission’s retention policy” and were “not kept on file”.

Euractiv reached out to the European Commission for a comment but did not get a response by the time of publication.

Source: EU Commission’s nameless experts behind its child sexual abuse law – EURACTIV.com

This law is an absolute travesty – it invokes the poor children (how can we not protect them!) whilst being a wholesale surveillance law put in place by nameless faces and unelected officials.

See also: EU Tries to Implement Client-Side Scanning, Death to Encryption, by Personalised Targeting of EU Residents With Misleading Ads

They basically want to spy on all electronic signals. All of them. Without a judge.

Researchers posed as foreign actors, and data brokers sold them information on military servicemembers anyway – for pennies

[…]

Researchers at Duke University released a study on Monday tracking what measures data brokers have in place to prevent unidentified or potentially malign actors from buying personal data on members of the military. As it turns out, the answer is often few to none — even when the purchaser is actively posing as a foreign agent.

A 2021 Duke study by the same lead researcher revealed that data brokers advertised that they had access to — and were more than happy to sell — information on US military personnel. In this more recent study, researchers used wiped computers, VPNs, burner phones bought with cash and other means of identity obfuscation to go undercover. They scraped the websites of data brokers to see which were likely to have available data on servicemembers. Then they attempted to make those purchases, posing as two entities: datamarketresearch.org and dataanalytics.asia. With little or no vetting, several of the brokers transferred the requested data not only to the presumptively Chicago-based datamarketresearch, but also to the server of the .asia domain, which was located in Singapore. The records cost only between 12 and 32 cents apiece.

The sensitive information included health records and financial information. Location data was also available, although the team at Duke decided not to purchase that — though it’s not clear if this was for financial or ethical reasons. “Access to this data could be used by foreign and malicious actors to target active-duty military personnel, veterans, and their families and acquaintances for profiling, blackmail, targeting with information campaigns, and more,” the report cautions. At an individual level, this could also include identity theft or fraud.

This gaping hole in our national security apparatus is due in large part to the absence of comprehensive federal regulations governing either individual data privacy, or much of the business practices engaged in by data brokers. Senators Elizabeth Warren, Bill Cassidy and Marco Rubio introduced the Protecting Military Service Members’ Data Act in 2022 to give power to the Federal Trade Commission to prevent data brokers from selling military personnel information to adversarial nations. They reintroduced the bill in March 2023 after it stalled out. Despite bipartisan support, it still hasn’t made it past the introduction phase.

Source: Researchers posed as foreign actors, and data brokers sold them information on military servicemembers anyway

YouTube cares less for your privacy than its revenues

YouTube wants its pound of flesh. Disable your ad blocker or pay for Premium, warns a new message being shown to an unsuspecting test audience, with the barely hidden subtext of “you freeloading scum.” Trouble is, its ad blocker detecting mechanism doesn’t exactly comply with EU law, say privacy activists. Ask for user permission or taste regulatory boot. All good clean fun.

Only it isn’t. It’s profoundly depressing. The battleground between ad tech and ad blockers has been around so long that in the internet’s time span it’s practically medieval. In 2010, Ars Technica started blocking ad blockers; in under a day, the ad blocker blocker was itself blocked by the ad blockers. The editor then wrote an impassioned plea saying that ad blockers were killing online journalism. As the editor ruefully notes, people weren’t using blockers because they didn’t care about the good sites, it was because so much else of the internet was filled with ad tech horrors.

Nothing much has changed. If your search hit ends up with an “ERROR: Ad blocker detected. Disable it to access this content” then it’s browser back button and next hit down, all day, every day. It’s like running an app that asks you to disable your firewall; that app is never run again. Please disable my ad blocker? Sure, if you stop pushing turds through my digital letterbox.

The reason YouTube has been dabbling with its own “Unblock Or Eff Off” strategy instead of bringing down the universal banhammer is that it knows how much it will upset the balance of the ecosystem. That it’s had to pry deep enough into viewers’ browsers to trigger privacy laws shows just how delicate that balance is. It’s unstable because it’s built on bad ideas.

In that ecosystem of advertisers, content consumers, ad networks, and content distributors, ad blockers aren’t the disease, they’re the symptom. Trying to neutralize a symptom alone leaves the disease thriving while the host just gets sicker. In this case, the disease isn’t cynical freeloading by users, it’s the basic dishonesty of online advertising. It promises things to advertisers that it cannot deliver, while blocking better ways of working. It promises revenue to content providers while keeping them teetering on the brink of unviability, while maximizing its own returns. Google has revenues in the hundreds of billions of dollars, while publishers struggle to survive, and users have to wear a metaphorical hazmat suit to stay sane. None of this is healthy.

Content providers have to be paid. We get that. Advertising is a valid way of doing that. We get that too. Advertisers need to reach audiences. Of course they do. But like this? YouTube needs its free, ad-supported model, or it would just force Premium on everyone, but forcing people to watch adverts will not force them to pony up for what’s being advertised.

The pre-internet days saw advertising directly support publishers who knew how to attract the right audiences who would respond well to the right adverts. Buy a computer magazine and it would be full of adverts for computer stuff – much of which you’d actually want to look at. The publisher didn’t demand you have to see ads for butter or cars or some dodgy crypto. That model has gone away, which is why we need ad blockers.

YouTube’s business model is a microcosm of the bigger ad tech world, where it basically needs to spam millions to generate enough results for its advertisers. It cannot stomach ad blockers, but it can’t neutralize them technically or legally. So it should treat them like the cognitive firewalls they are. If YouTube put control of what adverts appear, and how, back into the hands of its content providers and viewers, perhaps we’d tell our ad blockers to leave YouTube alone – punch that hole through the firewall for the service you trust. We’d get to keep blocking things that needed to be blocked, content makers could build their revenues by making better content, and advertisers would get a much better return on their ad spend.

Of course, this wouldn’t provide the revenues to YouTube or the ad tech business obtainable by being spammy counterfeits of responsible companies with a lock on the market. That a harmful business model makes a shipload of money does not make it good, in fact quite the reverse.

So, to YouTube we say: you appear to be using a bad lock-in. Disable it, or pay the price.

Source: YouTube cares less for your privacy than its revenues • The Register

EU Tries to Implement Client-Side Scanning, Death to Encryption, by Personalised Targeting of EU Residents With Misleading Ads

The EU Commission has been pushing client-side scanning for well over a year. This new intrusion into private communications has been pitched as perhaps the only way to prevent the sharing of child sexual abuse material (CSAM).

Mandates proposed by the EU government would have forced communication services to engage in client-side scanning of content. This would apply to every communication or service provider. But it would only negatively affect providers incapable of snooping on private communications because their services are encrypted.

Encryption — especially end-to-end encryption — protects the privacy and security of users. The EU’s pitch said protecting the children mattered more than anything else, even if it meant sacrificing the privacy and security of millions of EU residents.

Encrypted services would have been unable to comply with the mandate without stripping the client-side end from their end-to-end encryption. So, while it may have been referred to with the legislative euphemism “chat control” by EU lawmakers, the reality of the situation was that this bill — if passed intact — basically would have outlawed E2EE.

Fortunately, there was a lot of pushback. Some of it came from service providers who informed the EU they would no longer offer their services in EU member countries if they were required to undermine the security they provided for their users.

The more unexpected resistance came from EU member countries who similarly saw the gaping security hole this law would create and wanted nothing to do with it. On top of that, the EU government’s own lawyers told the Commission passing this law would mean violating other laws passed by this same governing body.

This pushback was greeted by increasingly nonsensical assertions by the bill’s supporters. In op-eds and public statements, backers insisted everyone else was wrong and/or didn’t care enough about the well-being of children to subject every user of any communication service to additional government surveillance.

That’s what happened on the front end of this push to create a client-side scanning mandate. On the back end, however, the EU government was trying to dupe people into supporting their own surveillance with misleading ads that targeted people most likely to believe any sacrifice of their own was worth making when children were on the (proverbial) line.

That’s the unsettling news being delivered to us by Vas Panagiotopoulos for Wired. A security researcher based in Amsterdam took a long look at apparently misleading ads that began appearing on Twitter as the EU government amped up its push to outlaw encryption.

Danny Mekić was digging into the EU’s “chat control” law when he began seeing disturbing ads on Twitter. These ads featured young women being (apparently) menaced by sinister men, backed by a similarly dark background and soundtrack. The ads displayed some supposed “facts” about the sexual abuse of children and ended with the notice that the ads had been paid for by the EU Commission.

The ads also cited survey results that supposedly said most European citizens supported client-side scanning of content and communications, apparently willing to sacrifice their own privacy and security for the common good.

But Mekić dug deeper and discovered the cited survey wasn’t on the level.

Following closer inspection, he discovered that these findings appeared biased and otherwise flawed. The survey results were gathered by misleading the participants, he claims, which in turn may have misled the recipients of the ads; the conclusion that EU citizens were fine with greater surveillance couldn’t be drawn from the survey, and the findings clashed with those of independent polls.

This discovery prompted Mekić to dig even deeper. What Mekić found was that the ads were very tightly targeted — so tightly targeted, in fact, that they could not have been deployed in this manner without violating European laws aimed at preventing exactly this sort of targeting, i.e. by using “sensitive data” like religious beliefs and political affiliations.

The ads were extremely targeted, meant to find people most likely to be swayed towards the EU Commission’s side, either because the targets never appeared to distrust their respective governments or because their governments had yet to tell the EU Commission to drop its anti-encryption proposal.

Mekić found that the ads were meant to be seen by select targets, such as top ministry officials, while they were concealed from people interested in Julian Assange, Brexit, EU corruption, Eurosceptic politicians (Marine Le Pen, Nigel Farage, Viktor Orban, Giorgia Meloni), the German right-wing populist party AfD, and “anti-Christians.”

Mekić then found out that the ads, which have garnered at least 4 million views, were only displayed in seven EU countries: the Netherlands, Sweden, Belgium, Finland, Slovenia, Portugal, and the Czech Republic.

A document leaked earlier this year exposed which EU members were in favor of client-side scanning and its attendant encryption backdoors, as well as those who thought the proposed mandate was completely untenable.

The countries targeted by the EU Commission ad campaign are, for the most part, supportive of or indifferent to broken encryption, client-side scanning, and expanded surveillance powers. Slovenia, along with Spain, Cyprus, Lithuania, Croatia, and Hungary, was firmly in favor of bringing an end to end-to-end encryption.

[…]

While we’re accustomed to politicians airing misleading ads during election runs, this is something different. This is the representative government of several nations deliberately targeting countries and residents it apparently thinks might be receptive to its skewed version of the facts, which comes in the form of the presentation of misleading survey results against a backdrop of heavily-implied menace. And that’s on top of seeming violations of privacy laws regarding targeted ads that this same government body created and ratified.

It’s a tacit admission EU proposal backers think they can’t win this thing on its merits. And they can’t. The EU Commission has finally ditched its anti-encryption mandates after months of backlash. For the moment, E2EE survives in Europe. But it’s definitely still under fire. The next exploitable tragedy will bring with it calls to reinstate this part of the “chat control” proposal. It will never go away because far too many governments believe their citizens are obligated to let these governments shoulder-surf whenever they deem it necessary. And about the only thing standing between citizens and that unceasing government desire is end-to-end encryption.

Source: EU Pitched Client-Side Scanning By Targeting Certain EU Residents With Misleading Ads | Techdirt

As soon as you read that legislation is ‘for the kids’, be very, very wary – it’s usually for something completely beyond that remit. And this kind of legislation is the installation of Big Brother on every single communications line you use.

Drugmakers Are Set To Pay 23andMe Millions To Access Your DNA – which is also your family’s DNA

GSK will pay 23andMe $20 million for access to the genetic-testing company’s vast trove of consumer DNA data, extending a five-year collaboration that’s allowed the drugmaker to mine genetic data as it researches new medications.

Under the new agreement, 23andMe will provide GSK with one year of access to anonymized DNA data from the approximately 80% of gene-testing customers who have agreed to share their information for research, 23andMe said in a statement Monday. The genetic-testing company will also provide data-analysis services to GSK.

23andMe is best known for its DNA-testing kits that give customers ancestry and health information. But the DNA it collects is also valuable, including for scientific research. With information from more than 14 million customers, the only data sets that rival the size of the 23andMe library belong to Ancestry.com and the Chinese government. The idea for drugmakers is to comb the data for hints about genetic pathways that might be at the root of disease, which could significantly speed up the long, slow process of drug development. GSK and 23andMe have already taken one potential medication to clinical trials: a cancer drug that works to block CD96, a protein that helps modulate the body’s immune responses. It entered that testing phase in four years, compared to an industry average of about seven years. Overall, the partnership between GSK and 23andMe has produced more than 50 new drug targets, according to the statement.

The new agreement changes some components of the collaboration. Any discoveries GSK makes with the 23andMe data will now be solely owned by the British pharmaceutical giant, while the genetic-testing company will be eligible for royalties on some projects. In the past, the two companies pursued new drug targets jointly. GSK’s new deal with 23andMe is also non-exclusive, leaving the genetic-testing company free to license its database to other drugmakers.

Source: Drugmakers Are Set To Pay 23andMe Millions To Access Consumer DNA – Slashdot

So – you paid for a DNA test and it turns out you didn’t think of the privacy aspect at all. Neither did you realise that you gave up your family’s DNA. Or that you can’t actually change your DNA either. Well done. It’s being spread all over the place. And no, the data is not anonymous – DNA is the most personal information you can ever give up.

Apple’s MAC Address Privacy Feature Has Never Worked

Ever since Apple re-branded as the “Privacy” company several years back, it’s been rolling out features designed to show its commitment to protecting users. Yet while customers might feel safer using an iPhone, there’s already plenty of evidence that Apple’s branding efforts don’t always match the reality of its products. In fact, a lot of its privacy features don’t actually seem to work.

Case in point: new research shows that one of Apple’s proffered privacy tools—a feature that was supposed to anonymize mobile users’ connections to Wi-Fi—is effectively “useless.” In 2020, Apple debuted a feature that, when switched on, was supposed to hide an iPhone user’s media access control—or MAC—address. When a device connects to a Wi-Fi network, it must first send out its MAC address so the network can identify it; when the same MAC address pops up in network after network, it can be used by network observers to identify and track a specific mobile user’s movements.

Apple’s feature was supposed to provide randomized MAC addresses for users as a way of stopping this kind of tracking from happening. But, apparently, a bug persisted for years that made the feature effectively useless.
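
To make the idea concrete (an illustrative sketch, not Apple’s implementation, which isn’t public): per-network MAC randomization can be thought of as deriving a stable, locally administered address from a device secret plus the network’s SSID, so the address differs between networks but stays the same when you rejoin one.

```typescript
import { createHmac, randomBytes } from "crypto";

// Illustrative sketch of per-SSID MAC randomization (not Apple's actual code).
const deviceSecret = randomBytes(32); // generated once, kept on the device

function privateMacForNetwork(ssid: string): string {
  const digest = createHmac("sha256", deviceSecret).update(ssid).digest();
  const mac = Buffer.from(digest.subarray(0, 6));
  mac[0] = (mac[0] | 0x02) & 0xfe; // set locally-administered bit, clear multicast bit
  return [...mac].map((b) => b.toString(16).padStart(2, "0")).join(":");
}

console.log(privateMacForNetwork("CoffeeShopWiFi")); // one address for this network
console.log(privateMacForNetwork("HomeNetwork"));    // a different, but stable, one here
```

The bug described below wasn’t in generating such an address; according to the researchers, devices kept sending the real hardware MAC in discovery requests on the network anyway, which is what made the feature useless in practice.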

According to a new report from Ars Technica, researchers recently tested the feature to see if it actually concealed their MAC addresses, only to find that it didn’t do that at all. Ars writes:

Despite promises that this never-changing address would be hidden and replaced with a private one that was unique to each SSID, Apple devices have continued to display the real one, which in turn got broadcast to every other connected device on the network.

One of the researchers behind the discovery of the vulnerability, Tommy Mysk, told Ars that, from the jump, “this feature was useless because of this bug,” and that, try as they might, he “couldn’t stop the devices from sending these discovery requests, even with a VPN. Even in the Lockdown Mode.”

What Apple’s justification is for advertising a feature that plainly does not work, I’m not sure. Gizmodo reached out to the company for comment and will update this story if they respond. A recent update, iOS 17.1, apparently patches the problem and ensures that the feature actually works.

Source: Apple’s MAC Address Privacy Feature Has Never Worked

Empowering Responsible and Compliant Practices: Bridging the Gap for US Citizens and Corporations with the New EU-US Data Privacy Framework

The Data Privacy Framework (DPF) presents new legal guidance to facilitate personal data sharing between US companies and their counterparts in the EU and the UK. This framework empowers individuals with greater control over their personal data and streamlines business operations by creating common rules around interoperable dataflows. Moreover, the DPF will help enable clear contract terms and business codes of conduct for corporations that collect, use, and transfer personal data across borders.

Any business that collects data related to people in the EU must comply with the EU’s General Data Protection Regulation (GDPR), which is the toughest privacy and security law across the globe. Thus, the DPF helps US corporations avoid potentially hefty fines and penalties by ensuring their data transfers align with GDPR regulations.

Data transfer procedures, which were historically time-consuming and riddled with legal complications, are now faster and more straightforward with the DPF, which allows for more transatlantic dataflows agreed on by US companies and their EU and UK counterparts. On July 10, 2023, the European Commission finalized an adequacy decision that assures the US offers data protection levels similar to the EU’s.

[…]

US companies can register with the DPF through the Department of Commerce DPF website. Companies that previously self-certified compliance with the EU-US Privacy Shield can transition to DPF by recertifying their adherence to DPF principles, including updating privacy policies to reflect any change in procedures and data subject rights that are crucial for this transition. Businesses should develop privacy policies that identify an independent recourse mechanism that can address data protection concerns. To qualify for the DPF the company must fall under the jurisdiction of either the Federal Trade Commission or the US Department of Transportation, though this reach may broaden in the future.

Source: Empowering Responsible and Compliant Practices: Bridging the Gap for US Citizens and Corporations with the New EU-US Data Privacy Framework | American Enterprise Institute – AEI

The whole self-certification thing seems leaky as a sieve to me… And once data has gone into the US intelligence services, you can assume it will go everywhere and there will be no stopping it from the EU side.

Equifax poked with paltry $13.4 million following 147m customer data breach in 2017

Credit bureau Equifax has been fined US$13.4 million by the Financial Conduct Authority (FCA), a UK financial watchdog, following its involvement in “one of the largest” data breaches ever.

This cyber security incident took place in 2017, when Equifax’s US-based parent company, Equifax Inc., suffered a data breach in which malicious actors accessed the personal data of up to 147.9 million customers. The FCA also revealed that, as UK customer data was stored on company servers in the US, the hack exposed the personal data of 13.8 million UK customers as well.

The data accessed during the hack included Equifax membership login details, customer names, dates of birth, partial credit card details and addresses.

According to the FCA, the cyber attack and subsequent data breach was “entirely preventable” and exposed UK customers to financial crime.

“There were known weaknesses in Equifax Inc’s data security systems and Equifax failed to take appropriate action in response to protect UK customer data,” the FCA explained.

The authority also noted that the UK arm of Equifax was not made aware that data had been accessed by malicious actors during the hack until six weeks after the cyber security incident was discovered by Equifax Inc.

The company was fined $60,727 by the British Information Commissioner’s Office (ICO) relating to the data breach in 2018.

On October 13th, Equifax stated that it had fully cooperated with the FCA during the investigation, which has been extensive. The FCA also said that the fine levelled at Equifax Inc had been reduced following the company’s agreement to cooperate with the watchdog and resolve the cyber attack.

Patricio Remon, president for Europe at Equifax, said that since the cyber attack against Equifax in 2017, the company has “invested over $1.5 billion in a security and technology transformation”. Remon also said that “few companies have invested more time and resources than Equifax to ensure that consumers’ information is protected”.

Source: Equifax fined $13.4 million following data breach

ICE, CBP, Secret Service All Illegally Used Smartphone Location Data

In a bombshell report, an oversight body for the Department of Homeland Security (DHS) found that Immigration and Customs Enforcement (ICE), Customs and Border Protection (CBP), and the Secret Service all broke the law while using location data harvested from ordinary apps installed on smartphones. In one instance, a CBP official also inappropriately used the technology to track the location of coworkers with no investigative purpose. For years U.S. government agencies have been buying access to location data through commercial vendors, a practice which critics say skirts the Fourth Amendment requirement of a warrant. During that time, the agencies have typically refused to publicly explain the legal basis on which they based their purchase and use of the data. Now, the report shows that three of the main customers of commercial location data broke the law while doing so, and didn’t have any supervisory review to ensure proper use of the technology. The report also recommends that ICE stop all use of such data until it obtains the necessary approvals, a request that ICE has refused.

The report, titled “CBP, ICE, and Secret Service Did Not Adhere to Privacy Policies or Develop Sufficient Policies Before Procuring and Using Commercial Telemetry Data,” is dated September 28, 2023, and comes from Joseph V. Cuffari, the Inspector General for DHS. The report was originally marked as “law enforcement sensitive,” but the Inspector General has now released it publicly.

Source: ICE, CBP, Secret Service All Illegally Used Smartphone Location Data – Slashdot

EPIC urges FTC to investigate Grindr’s data practices

On Wednesday, EPIC filed a complaint with the US government watchdog over Grindr’s “apparent failure to safeguard users’ sensitive personal data.” This includes both present and past users who have since deleted their accounts, according to the complaint. Despite promising in its privacy policy to delete personal info if customers remove their account, Grindr allegedly retained and disclosed some of this data to third parties.

Considering that people trust the dating app with a ton of very sensitive information — this includes their sexual preferences, self-reported HIV status, chat history, photos including nudes, and location information — “learning that Grindr breaks the promises it makes to users would likely affect a consumer’s decision regarding whether to use Grindr,” the complaint states [PDF].

Grindr, for its part, says privacy is of the uppermost importance to it, and that these “unfounded” claims stem from allegations made by a disgruntled ex-worker. So that’s all right then.

“Privacy is a top priority for Grindr and the LGBTQ+ community we serve, and we have adopted industry-leading privacy practices and tools to protect and empower our users,” a spokesperson told The Register.

“We are sorry that the former employee behind the unfounded allegations in today’s request is dissatisfied with his departure from the company; we wish him the best.”

The former employee in question is Grindr’s ex-chief privacy officer Ron De Jesus. In June, De Jesus filed a wrongful termination lawsuit [PDF] against his former bosses that also accused the dating app of violating privacy laws.

According to the lawsuit, De Jesus was “leading the charge to keep Grindr compliant with state, national, and international laws” after Norway’s data protection agency fined the dating app biz about $12 million in December 2021 and a Wall Street Journal article in May 2022 accused the application developer of selling users’ location data.

But despite De Jesus’ attempts, “Grindr placed profit over privacy and got rid of Mr De Jesus for his efforts and reports,” the lawsuit alleges.

EPIC’s complaint, which highlights De Jesus’ allegations, asks the FTC to look into potential violations of privacy law, including deceptive data retention and disclosure practices.

It also accuses Grindr of violating the Health Breach Notification Rule (HBNR). The dating app is subject to the HBNR because it asks users to self-report health data including HIV status, last-tested date, and vaccination status. By sharing these records with third parties and retaining health data after users deleted their accounts, Grindr allegedly breached the HBNR, EPIC says.

The privacy advocates at EPIC want the FTC to make Grindr comply with the laws and stop any “unlawful or impermissible” data retention practices. Additionally, the complaint calls on the federal agency to force Grindr to notify any users whose data was misused, and impose fines against the dating app for any violations of the HBNR.

Source: EPIC urges FTC to investigate Grindr’s data practices • The Register

Singapore plans to scan your face instead of your passport

[…] “Singapore will be one of the first few countries in the world to introduce automated, passport-free immigration clearance,” said minister for communications and information Josephine Teo in a wrap-up speech for the bill. Teo did concede that Dubai had such clearance for select enrolled travelers, but there was no assurance of other countries planning similar actions.

[…]

Another consideration for why passports will likely remain relevant in Singapore airports is for checking in with airlines. Airlines check passports not just to confirm identity, but also visas and more. Airlines are often held responsible for stranded passengers so will likely be required to confirm travelers have the documentation required to enter their destination.

The Register asked Singapore Airlines to confirm if passports will still be required on the airline after the implementation of biometric clearance. They deferred to Changi’s operator, Changi Airport Group (CAG), which The Reg also contacted – and we will update if a relevant reply arises.

What travelers will see is an expansion of a program already taking form. Changi airport currently uses facial recognition software and automated clearance for some parts of immigration.

[…]

Passengers who pre-submit required declarations online can already get through Singapore’s current automated immigration lanes in 20 to 30 seconds once they arrive to the front of the queue. It’s one reason Changi has a reputation for being quick to navigate.

[…]

According to CAG, the airport handled 5.12 million passenger movements in June 2023 alone. This figure is expected to only increase as it currently stands at 88 percent of pre-COVID levels and the government sees such efficiency as critical to managing the impending growth.

But the reasoning for biometric clearance goes beyond a boom in travelers. With an aging population and shrinking workforce, Singapore’s Immigration & Checkpoints Authority (ICA) will have “to cope without a significant increase in manpower,” said Teo.

Additionally, security threats including pandemics and terrorism call for Singapore to “go upstream” on immigration measures, “such as the collection of advance passenger and crew information, and entry restrictions to be imposed on undesirable foreigners, even before they arrive at our shores,” added the minister.

This collection and sharing of biometric information is what enables the passport-free immigration process – passenger and crew information will need to be disclosed to the airport operator to use for bag management, access control, gate boarding, duty-free purchases, as well as tracing individuals within the airport for security purposes.

The shared biometrics will serve as a “single token of authentication” across all touch points.

Members of Singapore’s parliament have raised concerns about shifting to universal automated clearance, including data privacy, and managing technical glitches.

According to Teo, only Singaporean companies will be allowed ICA-related IT contracts, vendors will be given non-disclosure agreements, and employees of such firms must undergo security screening. Traveler data will be encrypted and transported through data exchange gateways.

As for who will protect the data, that role goes to CAG, with ICA auditing its compliance.

In case of disruptions that can’t be handled by an uninterruptible power supply, off-duty officers will be called in to go back to analog.

And even though the ministry is pushing universal coverage, there will be some exceptions, such as those who are unable to provide certain biometrics or are less digitally literate. Teo promised their clearance can be done manually by immigration officers.

Source: Singapore plans to scan your face instead of your passport • The Register

Data safety is a real issue here – how long will the data be collected and for what other purposes will it be used?

Firefox now has private browser-based website translation – no cloud servers required

Web browsers have had tools that let you translate websites for years. But they typically rely on cloud-based translation services like Google Translate or Microsoft’s Bing Translator.

The latest version of Mozilla’s Firefox web browser does things differently. Firefox 118 brings support for Fullpage Translation, which can translate websites entirely in your browser. In other words, everything happens locally on your computer without any data sent to Microsoft, Google, or other companies.

Here’s how it works. Firefox will notice when you visit a website in a supported language that’s different from your default language, and a translate icon will show up in the address bar.

Tap that icon and you’ll see a pop-up window that asks what languages you’d like to translate from and to. If the browser doesn’t automatically detect the language of the website you’re visiting, you can set these manually.

Then click the “Translate” button, and a moment later the text on the page should be visible in your target language. If you’d prefer to go back to the original language, just tap the translate icon again and choose the option that says “show original.”

You can also tap the settings icon in the translation menu and choose to “always translate” or “never translate” a specific language so that you won’t have to manually invoke the translation every time you visit sites in that language.

Now for the bad news: Firefox Fullpage Translation only supports 9 languages so far:

  • Bulgarian
  • Dutch
  • English
  • French
  • German
  • Italian
  • Polish
  • Portuguese
  • Spanish

[…]

Source: Firefox 118 brings browser-based website translation (no cloud servers required… for a handful of supported languages) – Liliputing

Philips Hue / Signify Ecosystem: ‘Collapsing Into Stupidity’

The Philips Hue ecosystem of home automation devices is “collapsing into stupidity,” writes Rachel Kroll, veteran sysadmin and former production engineer at Facebook. “Unfortunately, the idiot C-suite phenomenon has happened here too, and they have been slowly walking down the road to full-on enshittification.” From her blog post: I figured something was up a few years ago when their iOS app would block entry until you pushed an upgrade to the hub box. That kind of behavior would never fly with any product team that gives a damn about their users — want to control something, so you start up the app? Forget it, we are making you placate us first! How is that user-focused, you ask? It isn’t.

Their latest round of stupidity pops up a new EULA and forces you to take it or, again, you can’t access your stuff. But that’s just more unenforceable garbage, so who cares, right? Well, it’s getting worse.

It seems they are planning on dropping an update which will force you to log in. Yep, no longer will your stuff Just Work across the local network. Now it will have yet another garbage “cloud” “integration” involved, and they certainly will find a way to make things suck even worse for you.

If you have just the lights and smart outlets, Kroll recommends deleting the units from the Hue Hub and adding them to an IKEA Dirigera hub. “It’ll run them just fine, and will also export them to HomeKit so that much will keep working as well.” That said, it’s not a perfect solution. You will lose motion sensor data, the light level, the temperature of that room, and the ability to set custom behaviors with those buttons.

“Also, there’s no guarantee that IKEA won’t hop on the train to sketchville and start screwing over their users as well,” adds Kroll.

Source: Is the Philips Hue Ecosystem ‘Collapsing Into Stupidity’? – Slashdot

Philips Hue will force users to upload their data to Hue cloud – changing their TOS after you bought a product that was sold as not needing an account

Today’s story is about Philips Hue by Signify. They will soon start forcing accounts on all users and upload user data to their cloud. For now, Signify says you’ll still be able to control your Hue lights locally as you’re currently used to, but we don’t know if this may change in the future. The privacy policy allows them to store the data and share it with partners.

[…]

When you open the Philips Hue app you will now be prompted with a new message: Starting soon, you’ll need to be signed in.

[…]

So today, you can choose to not share your information with Signify by not creating an account. But this choice will soon be taken away and all users need to share their data with Philips Hue.

Confirming the news

I didn’t want to cry wolf, so I decided to verify the above statement with Signify. They sadly confirmed:

Twitter conversation with Philips Hue (source: Twitter)

The policy they are referring to is their privacy policy (April 2023 edition, download version).

[…]

When asked what drove this change, the answer is the usual: security. Well Signify, you know what keeps user data even more secure? Not uploading it all to your cloud.

[…]

As a user, we encourage you to reach out to Signify support and voice your concern.

NOTE: Their support form doesn’t work. You can visit their Facebook page though

Dear Signify, please reconsider your decision and do not move forward with it. You’ve reversed bad decisions before. People care about privacy and forcing accounts will hurt the brand in the long term. The pain caused by this is not worth the gain.

Source: Philips Hue will force users to upload their data to Hue cloud

No, Philips / Signify – I have used these devices for years without having to have an account or be connected to the internet. It’s one of the reasons I bought into Hue. Making us give up data to keep using something we already bought is greedy and rude, and a dangerous decision considering the private and exploitable nature of the data.

T-Mobile US exposes some customer data, but don’t say breach

T-Mobile US has had another bad week on the infosec front – this time stemming from a system glitch that exposed customer account data, followed by allegations of another breach the carrier denied.

According to customers who complained of the issue on Reddit and X, the T-Mobile app was displaying other customers’ data instead of their own – including the strangers’ purchase history, credit card information, and address.

This being T-Mobile’s infamously leaky US operation, people immediately began leaping to the obvious conclusion: another cyber attack or breach.

“There was no cyber attack or breach at T-Mobile,” the telco assured us in an emailed statement. “This was a temporary system glitch related to a planned overnight technology update involving limited account information for fewer than 100 customers, which was quickly resolved.”

Note, as Reddit poster Jman100_JCMP did, T-Mobile means fewer than 100 customers had their data exposed – but far more appear to have been able to view those 100 customers’ data.

As for the breach, the appearance of exposed T-Mobile data was alleged by malware repository vx-underground’s X (Twitter) account. The Register understands T-Mobile examined the data and determined that independently owned T-Mobile dealer, Connectivity Source, was the source – resulting from a breach it suffered in April. We understand T-Mobile believes vx-underground misinterpreted a data dump.

Connectivity Source was indeed the subject of a breach in April, in which an unknown attacker made off with employee data including names and social security numbers – around 17,835 of them from across the US, where Connectivity appears to do business exclusively as a white-labelled T-Mobile US retailer.

Looks like the carrier really dodged the bullet on this one – there’s no way Connectivity Source employees could be mistaken for its own staff.

T-Mobile US has already experienced two prior breaches this year, but that hasn’t imperilled the biz much – its profits have soared recently and some accompanying sizable layoffs will probably keep things in the black for the foreseeable future.

Source: T-Mobile US exposes some customer data, but don’t say breach • The Register

Dutch privacy watchdog SDBN sues Twitter for collecting and selling data via MoPub (Wordfeud, Duolingo, etc.) without notifying users

The Dutch Data Protection Foundation (SDBN) wants to enforce a mass claim for 11 million people through the courts against social media company X, the former Twitter. Between 2013 and 2021, that company owned the advertising platform MoPub, which, according to the privacy foundation, illegally traded in data from users of more than 30,000 free apps such as Wordfeud, Buienradar and Duolingo.

SDBN has been trying to reach an agreement with X since November last year, but according to the foundation, without success. That is why SDBN is now starting a lawsuit at the Rotterdam court. Central to this is MoPub’s handling of personal data such as religious beliefs, sexual orientation and health. In addition to compensation, SDBN wants this data to be destroyed.

The foundation also believes that users are entitled to a share of the profits. A lot of money can be made by sharing personal data with thousands of companies, says SDBN chairman Anouk Ruhaak, although she says it is difficult to find out exactly which companies had access to the data. “By holding X Corp liable, we hope not only to obtain compensation for all victims, but also to put a stop to this type of practice,” said Ruhaak. “Unfortunately, these types of companies often only listen when it hurts financially.”

Source: De Ondernemer | Privacystichting SDBN wil via rechter massaclaim bij…

Join the claim here

Google Chrome’s Privacy Sandbox: any site can now query all your habits

[…]

Specifically, the web giant’s Privacy Sandbox APIs, a set of ad delivery and analysis technologies, now function in the latest version of the Chrome browser. Website developers can thus write code that calls those APIs to deliver and measure ads to visitors with compatible browsers.

That is to say, sites can ask Chrome directly what kinds of topics you’re interested in – topics automatically selected by Chrome from your browsing history – so that ads personalized to your activities can be served. This is supposed to be better than being tracked via third-party cookies, support for which is being phased out. There are other aspects to the sandbox that we’ll get to.

While Chrome is the main vehicle for Privacy Sandbox code, Microsoft Edge, based on the open source Chromium project, has also shown signs of supporting the technology. Apple and Mozilla have rejected at least the Topics API for interest-based ads on privacy grounds.

[…]

“The Privacy Sandbox technologies will offer sites and apps alternative ways to show you personalized ads while keeping your personal information more private and minimizing how much data is collected about you.”

These APIs include:

  • Topics: Locally track browsing history to generate ads based on demonstrated user interests without third-party cookies or identifiers that can track across websites.
  • Protected Audience (FLEDGE): Serve ads for remarketing (e.g. you visited a shoe website so we’ll show you a shoe ad elsewhere) while mitigating third-party tracking across websites.
  • Attribution Reporting: Data to link ad clicks or ad views to conversion events (e.g. sales).
  • Private Aggregation: Generate aggregate data reports using data from Protected Audience and cross-site data from Shared Storage.
  • Shared Storage: Allow unlimited, cross-site storage write access with privacy-preserving read access. In other words, you graciously provide local storage via Chrome for ad-related data or anti-abuse code.
  • Fenced Frames: Securely embed content onto a page without sharing cross-site data. Or iframes without the security and privacy risks.
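
To illustrate what “sites can ask Chrome directly” looks like in practice, here is a minimal sketch of a page querying the Topics API; it assumes a Chrome version that exposes document.browsingTopics() and a user who has the Privacy Sandbox features enabled (other browsers simply won’t have the function):

```typescript
// Minimal sketch of a site querying the Topics API in Chrome (assumes the
// browser exposes document.browsingTopics(); Firefox and Safari do not).
async function fetchAdTopics(): Promise<void> {
  const doc = document as any; // browsingTopics() is not in standard TS DOM types
  if (typeof doc.browsingTopics !== "function") {
    console.log("Topics API not available in this browser");
    return;
  }
  // Returns coarse interest topics Chrome inferred from recent browsing history.
  const topics = await doc.browsingTopics();
  console.log(topics); // e.g. [{ topic: 123, taxonomyVersion: "1", ... }]
}

fetchAdTopics();
```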

These technologies, Google and industry allies believe, will allow the super-corporation to drop support for third-party cookies in Chrome next year without seeing a drop in targeted advertising revenue.

[…]

“Privacy Sandbox removes the ability of website owners, agencies and marketers to target and measure their campaigns using their own combination of technologies in favor of a Google-provided solution,” James Rosewell, co-founder of MOW, told The Register at the time.

[…]

Controversially, in the US, where the lack of coherent privacy rules suits ad companies just fine, the popup merely informs the user that these APIs are now present and active in the browser but requires visiting Chrome’s Settings page to actually manage them – you have to opt out, if you haven’t already. In the EU, as required by law, the notification is an invitation to opt in to interest-based ads via Topics.

Source: How Google Chrome’s Privacy Sandbox works and what it means • The Register

Google taken to court in NL for large scale privacy breaches

The Foundation for the Protection of Privacy Interests and the Consumers’ Association are taking the next step in their fight against Google. The tech company is being taken to court today for ‘large-scale privacy violations’.

The proceedings demand, among other things, that Google stop its constant surveillance and sharing of personal data through online advertising auctions and also pay damages to consumers. Since the announcement of this action on May 23, 2023, more than 82,000 Dutch people have already joined the mass claim.

According to the organizations, Google is acting in violation of Dutch and European privacy legislation. The tech giant collects users’ online behavior and location data on an immense scale through its services and products, without providing enough information or obtaining permission. Google then shares that data, including highly sensitive personal data about health, ethnicity and political preference, for example, with hundreds of parties via its online advertising platform.

Google is constantly monitoring everyone. Through third-party cookies – which are invisible – Google also collects data via other parties’ websites and apps, even when someone is not using its products or services. This enables Google to monitor almost the entire internet behavior of its users.

All these matters have been discussed with Google, to no avail.

The Foundation for the Protection of Privacy Interests represents the interests of users of Google’s products and services living in the Netherlands who have been harmed by privacy violations. The foundation is working together with the Consumers’ Association in the case against Google. Consumers’ Association Claimservice, a partnership between the Consumers’ Association and ConsumersClaim, processes the registrations of affiliated victims.

More than 82,000 consumers have already registered for the Google claim. They demand compensation of 750 euros per participant.

A lawsuit by the American government against Google also starts today in the US. Ten weeks have been set aside for it. That case mainly revolves around the power of Google’s search engine.

Essentially, Google is accused of entering into exclusive agreements to guarantee the use of its search engine. These are agreements that prevent alternative search engines from being pre-installed, or from Google’s search app being removed.

Source: Google voor de rechter gedaagd wegens ‘grootschalige privacyschendingen’ – Emerce (NL)

Mozilla investigates 25 major car brands and finds privacy is shocking

[…]

The foundation, the Firefox browser maker’s netizen-rights org, assessed the privacy policies and practices of 25 automakers and found all failed its consumer privacy tests and thereby earned its Privacy Not Included (PNI) warning label.

If you care even a little about privacy, stay as far away from Nissan’s cars as you possibly can

In research published Tuesday, the org warned that manufacturers may collect and commercially exploit much more than location history, driving habits, in-car browser histories, and music preferences from today’s internet-connected vehicles. Instead, some makers may handle deeply personal data, such as – depending on the privacy policy – sexual activity, immigration status, race, facial expressions, weight, health, and even genetic information, the Mozilla team found.

Cars may collect at least some of that info about drivers and passengers using sensors, microphones, cameras, phones, and other devices people connect to their network-connected cars, according to Mozilla. And they collect even more info from car apps – such as Sirius XM or Google Maps – plus dealerships, and vehicle telematics.

Some car brands may then share or sell this information to third parties. Mozilla found 21 of the 25 automakers it considered say they may share customer info with service providers, data brokers, and the like, and 19 of the 25 say they can sell personal data.

More than half (56 percent) also say they share customer information with the government or law enforcement in response to a “request.” This isn’t necessarily a court-ordered warrant, and can also be a more informal request.

And some – like Nissan – may also use this private data to develop customer profiles that describe drivers’ “preferences, characteristics, psychological trends, predispositions, behavior, attitudes, intelligence, abilities, and aptitudes.”

Yes, you read that correctly. According to Mozilla’s privacy researchers, Nissan says it can infer how smart you are, then sell that assessment to third parties.

[…]

Nissan isn’t the only brand to collect information that seems completely irrelevant to the vehicle itself or the driver’s transportation habits.

“Kia mentions sex life,” Caltrider said. “General Motors and Ford both mentioned race and sexual orientation. Hyundai said that they could share data with government and law enforcement based on formal or informal requests. Car companies can collect even more information than reproductive health apps in a lot of ways.”

[…]

The Privacy Not Included team contacted Nissan and all of the other brands listed in the research: that’s Lincoln, Mercedes-Benz, Acura, Buick, GMC, Cadillac, Fiat, Jeep, Chrysler, BMW, Subaru, Dacia, Hyundai, Dodge, Lexus, Chevrolet, Tesla, Ford, Honda, Kia, Audi, Volkswagen, Toyota and Renault.

Only three – Mercedes-Benz, Honda, and Ford – responded, we’re told.

“Mercedes-Benz did answer a few of our questions, which we appreciate,” Caltrider said. “Honda pointed us continually to their public privacy documentation to answer our questions, but they didn’t clarify anything. And Ford said they discussed our request internally and made the decision not to participate.”

This makes Mercedes’ response to The Register a little puzzling. “We are committed to using data responsibly,” a spokesperson told us. “We have not received or reviewed the study you are referring to yet and therefore decline to comment to this specifically.”

A spokesperson for the four Fiat-Chrysler-owned brands (Fiat, Chrysler, Jeep, and Dodge) told us: “We are reviewing accordingly. Data privacy is a key consideration as we continually seek to serve our customers better.”

[…]

The Mozilla Foundation also called out consent as an issue some automakers have placed in a blind spot.

“I call this out in the Subaru review, but it’s not limited to Subaru: it’s the idea that anybody that is a user of the services of a connected car, anybody that’s in a car that uses services is considered a user, and any user is considered to have consented to the privacy policy,” Caltrider said.

Opting out of data collection is another concern.

Tesla, for example, appears to give users the choice between protecting their data or protecting their car. Its privacy policy does allow users to opt out of data collection but, as Mozilla points out, Tesla warns customers: “If you choose to opt out of vehicle data collection (with the exception of in-car Data Sharing preferences), we will not be able to know or notify you of issues applicable to your vehicle in real time. This may result in your vehicle suffering from reduced functionality, serious damage, or inoperability.”

While technically this does give users a choice, it also essentially says if you opt out, “your car might become inoperable and not work,” Caltrider said. “Well, that’s not much of a choice.”

[…]

Source: Mozilla flunks 25 major car brands for data privacy fails • The Register

Australian Government, Of All Places, Says Age Verification Is A Privacy & Security Nightmare

In the past I’ve sometimes described Australia as the land where internet policy is completely upside down. Rather than having a system that protects intermediaries from liability for third party content, Australia went the opposite direction. Rather than recognizing that a search engine merely links to content and isn’t responsible for the content at those links, Australia has said that search engines can be held liable for what they link to. Rather than protect the free expression of people on the internet who criticize the rich and powerful, Australia has extremely problematic defamation laws that result in regular SLAPP suits and suppression of speech. Rather than embrace encryption that protects everyone’s privacy and security, Australia requires companies to break encryption, insisting only criminals use it.

It’s basically been “bad internet policy central,” or the place where good internet policy goes to die.

And, yet, there are some lines that even Australia won’t cross. Specifically, the Australian eSafety Commission says that it will not require adult websites to use age verification tools, because doing so would put the privacy and security of Australians’ data at risk. (For unclear reasons, the Guardian does not provide the underlying documents, so we’re fixing that and providing both the original roadmap and the Australian government’s response.)

[…]

Of course, in France, the Data Protection authority released a paper similarly noting that age verification was a privacy and security nightmare… and the French government just went right on mandating the use of the technology. In Australia, the eSafety Commission pointed to the French concerns as a reason not to rush into the tech, meaning that Australia took the lessons from French data protection experts more seriously than the French government did.

And, of course, here in the US, the Congressional Research Service similarly found serious problems with age verification technology, but it hasn’t stopped Congress from releasing a whole bunch of “save the children” bills that are built on a foundation of age verification.

[…]

Source: Australian Government, Of All Places, Says Age Verification Is A Privacy & Security Nightmare | Techdirt

Companies are recording your conversations whilst you are on hold with them

Are you on hold with Achmea or Bol.com customer service? Everything you say can still be heard by some of their employees, according to research by Radar.

When you call customer service, you often hear: “Please note: this conversation may be recorded for training purposes.” Nothing special. But if you call the insurer Zilveren Kruis, you will also hear: “Note: Even if you are on hold, our quality employees can hear what you are saying.”

That is striking, because the Dutch Data Protection Authority states that recording customers while they are on hold is not allowed. Companies are allowed to record the conversation itself, for example to conclude a contract or to improve their service.

Both mortgage provider Woonfonds and insurers Zilveren Kruis, De Friesland and Interpolis confirm that the recording keeps running while you are on hold with them, even though this violates privacy rules.

Bol.com also continues to eavesdrop on you while you are on hold, the webshop confirms. It gives the same reason: “It is technically not possible to temporarily stop the recording and start it again when the conversation starts again.”

KLM, Ziggo, Eneco, Vattenfall, T-Mobile, Nationale Nederlanden, ASR, ING and Rabobank say they do not record their customers while they are on hold.

Source: Diverse bedrijven waaronder bol.com nemen gesprekken ‘in de wacht’ op – Emerce

China floats rules for facial recognition technology – they are good, and would be great if the govt was bound by them too!

China has released draft regulations to govern the country’s facial recognition technology that include prohibitions on its use to analyze race or ethnicity.

According to the Cyberspace Administration of China (CAC), the purpose is to “regulate the application of face recognition technology, protect the rights and interests of personal information and other personal and property rights, and maintain social order and public safety” as outlined by a smattering of data security, personal information, and network laws.

The draft rules, which are open for comments until September 7, include some vague directives not to use face recognition technology to disrupt social order, endanger national security, or infringe on the rights of individuals and organizations.

The rules also state that facial recognition tech must be used only when there is a specific purpose and sufficient necessity, strict protection measures are taken, and only when non-biometric measures won’t do.

The rules also require consent to be obtained before processing face information, except in cases where it’s not required – which The Reg assumes means individuals such as prisoners and matters of national security. Parental or guardian consent is needed for those under the age of 14.

Building managers can’t require its use to enter and exit a property – they must provide alternative means of verifying personal identity for those who want them.

It also can’t be relied on for “major personal interests” such as social assistance and real estate disposal. For those, identity must be verified manually, with facial recognition used only as an auxiliary means.

And collecting images for internal management should only be done in a reasonably sized area.

In businesses like hotels, banks, airports, art galleries, and more, the tech should not be used to verify personal identity. If the individual chooses to link their identity to the image, they should be informed either verbally or in writing and provide consent.

Collecting images is also not allowed in private spaces like hotel rooms, public bathrooms, and changing rooms.

Furthermore, those using facial surveillance techniques must display reminder signs; personal images and identification information must be kept confidential, and only anonymized data may be saved.

Under the draft regs, those that store face information of more than 10,000 people must register with a local branch of the CAC within 30 working days.

Most interesting, however, is Article 11, which, when translated from Chinese via automated tools, reads:

No organization or individual shall use face recognition technology to analyze personal race, ethnicity, religion, sensitive personal information such as beliefs, health status, social class, etc.

The CAC does not say if the Chinese Communist Party counts as an “organization.”

Human rights groups have credibly asserted that Uyghurs are routinely surveilled using facial recognition technology, in addition to being incarcerated, required to perform forced labor, re-educated to abandon their beliefs and cultural practices, and may even be subjected to sterilization campaigns.

Just last month, physical security monitoring org IPVM reported it came into possession of a contract between China-based Hikvision and Hainan Province’s Chengmai County for $6 million worth of cameras that could detect whether a person was ethnically Uyghur using minority recognition technology.

Hikvision denied the report and said it last provided such functionality in 2018.

Beyond facilitating identification of Uyghurs, it’s clear the cat is out of the bag when it comes to facial recognition in China, used by government and businesses alike. Local police use it to track down criminals, and its use feeds into China’s social credit system.

“‘Sky Net,’ a facial recognition system that can scan China’s population of about 1.4 billion people in a second, is being used in 16 Chinese cities and provinces to help police crackdown on criminals and improve security,” said state-sponsored media in 2018.

Regardless, the CAC said those violating the new rules, once they are passed, would face criminal and civil liability.

Source: China floats rules for facial recognition technology • The Register

Reddit Wins, Doesn’t Have to NARC on Users Who Discussed Torrenting

This weekend, a federal court in a case against the internet service provider Grande tossed a subpoena that would have required Reddit to reveal the identities of anonymous users who discussed torrenting movies.

The case was originally filed in 2021 by 20 movie producers against Grande Communications in the Western District of Texas federal court. The lawsuit claims that Grande is committing copyright infringement against the producers for allegedly ignoring the torrenting of 45 of their movies that occurred on its networks. As part of the case, the plaintiffs attempted to subpoena Reddit for IP addresses and user data for accounts that openly discussed torrenting on the platform. This weekend, Magistrate Judge Laurel Beeler denied the subpoena—meaning Reddit is off the hook.

“The plaintiffs thus move to compel Reddit to produce the identities of its users who are the subject of the plaintiffs’ subpoena,” Magistrate Judge Beeler wrote in her decision. “The issue is whether that discovery is permissible despite the users’ right to speak anonymously under the First Amendment. The court denies the motion because the plaintiffs have not demonstrated a compelling need for the discovery that outweighs the users’ First Amendment right to anonymous speech.”

Reddit was previously cleared of a similar subpoena in a related lawsuit by the same judge back in May, as reported by Ars Technica. In that case, Reddit was asked to unmask eight users who were active in piracy threads on the platform, but the social media site raised the same First Amendment defense.

Source: Reddit Wins, Doesn’t Have to NARC on Users Who Discussed Torrenting