After the UK, online age verification is landing in the EU

Denmark, Greece, Spain, France, and Italy are the first to test the technical solution unveiled by the European Commission on July 14, 2025.

The announcement came less than two weeks before the UK enforced mandatory age verification checks on July 25. These have so far sparked concerns about the privacy and security of British users, fueling a spike in the usage of VPN apps.

[…]

The introduction of this technical solution is a key step in implementing children’s online safety rules under the Digital Services Act (DSA).

Lawmakers say this solution seeks to set “a new benchmark for privacy protection” in age verification.

That’s because online services will only receive proof that the user is 18+, without any personal details attached.

Further work on integrating zero-knowledge proofs is also ongoing, with mandatory checks across the EU expected to be enforced in 2026.
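The privacy promise here (the service learns only that a user is 18+, with no personal details attached) can be illustrated with a toy attestation flow. This is a hedged sketch, not a real zero-knowledge proof: every name below is hypothetical, and a production system would use asymmetric signatures or actual ZKPs rather than the shared HMAC key used here for brevity.

```python
import hmac
import hashlib
import secrets

# Toy model of a privacy-preserving age attestation (NOT a real ZKP):
# the issuer learns the user's age once; the service only ever sees
# a signed "over_18" bit plus a random token it cannot link to an identity.

ISSUER_KEY = secrets.token_bytes(32)  # held by the issuer (and, in this toy, the verifier)

def issue_attestation(birth_year: int, current_year: int = 2025) -> dict:
    """Issuer checks the age and signs only the boolean result."""
    over_18 = (current_year - birth_year) >= 18
    nonce = secrets.token_hex(16)              # unlinkable random token
    msg = f"{nonce}:over_18={over_18}".encode()
    tag = hmac.new(ISSUER_KEY, msg, hashlib.sha256).hexdigest()
    return {"nonce": nonce, "over_18": over_18, "tag": tag}

def service_verifies(att: dict) -> bool:
    """The service sees no name and no birth date, only the signed bit."""
    msg = f"{att['nonce']}:over_18={att['over_18']}".encode()
    expected = hmac.new(ISSUER_KEY, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, att["tag"]) and att["over_18"]

att = issue_attestation(birth_year=1990)
print(service_verifies(att))  # True: access granted, no personal details revealed
```

The design point the Commission is gesturing at is exactly the split above: the party that knows your identity (the issuer) and the party that gates the content (the service) never exchange anything beyond the signed yes/no.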

[…]

Starting from Friday, July 25, millions of Britons will need to be ready to prove their age before accessing certain websites or content.

Under the Online Safety Act, sites displaying adult-only content must prevent minors from accessing their services via robust age checks.

Social media, dating apps, and gaming platforms are also expected to verify their users’ age before showing them so-called harmful content.

[…]

The vagueness of what constitutes harmful content, as well as the privacy and security risks linked to some of these age verification methods, has attracted criticism from experts, politicians, and privacy-conscious citizens who fear a negative impact on people’s digital rights.

While the EU approach seems better on paper, it remains to be seen how the age verification scheme will ultimately be enforced.

[…]

Source: After the UK, online age verification is landing in the EU | TechRadar

And so comes the EU spying on our browsing habits, telling us what is and isn’t good for us to see. I can make my own mind up, thank you. How annoying that I will be rate limited to the VPN I get.

Echelon Exercise Bikes Lose Features, must phone home to work at all after Firmware Update

[…] It seems like a simple concept that everyone should be able to agree to: if I buy a product from you that does x, y, and z, you don’t get to remove x, y, or z remotely after I’ve made that purchase. How we’ve gotten to a place where companies can simply remove, or paywall, product features without recourse for the customer they essentially bait and switched is beyond me.

But it keeps happening. The most recent example of this is with Echelon exercise bikes. Those bikes previously shipped to paying customers with all kinds of features for ride metrics and connections to third-party apps and services without anything further needed from the user. That all changed recently when a firmware update suddenly forced an internet connection and a subscription to a paid app to make any of that work.

As explained in a Tuesday blog post by Roberto Viola, who develops the “QZ (qdomyos-zwift)” app that connects Echelon machines to third-party fitness platforms, like Peloton, Strava, and Apple HealthKit, the firmware update forces Echelon machines to connect to Echelon’s servers in order to work properly. A user online reported that as a result of updating his machine, it is no longer syncing with apps like QZ, and he is unable to view his machine’s exercise metrics in the Echelon app without an Internet connection.

Affected Echelon machines reportedly only have full functionality, including the ability to share real-time metrics, if a user has the Echelon app active and if the machine is able to reach Echelon’s servers.

Want to know how fast you’re going on the bike you’re sitting upon? That requires an internet connection. Want to get a sense of how you performed on your ride on the bike? That requires an internet connection. And if Echelon were to go out of business? Then your bike just no longer works beyond the basic function of pedaling it.

And the ability to use third-party apps is reportedly just, well, gone.

For some owners of Echelon equipment, QZ, which is currently rated as the No. 9 sports app on Apple’s App Store, has been central to their workouts. QZ connects the equipment to platforms like Zwift, which shows people virtual, scenic worlds while they’re exercising. It has also enabled new features for some machines, like automatic resistance adjustments. Because of this, Viola argued in his blog that QZ has “helped companies grow.”

“A large reason I got the [E]chelon was because of your app and I have put thousands of miles on the bike since 2021,” a Reddit user told the developer on the social media platform on Wednesday.

Instead of happily accepting that someone out there is making its product more attractive and valuable, Echelon is going for some combination of overt control and the desire for customer data. Data which will be used, of course, for marketing purposes.

There’s also value in customer data. Getting more customers to exercise with its app means Echelon may gather more data for things like feature development and marketing.

What you won’t hear anywhere, at least that I can find, is any discussion of returns or refunds for customers who bought these bikes when they still did things they no longer do. That’s about as clear a bait-and-switch scenario as you’re likely to find.

Unfortunately, with the FTC’s Bureau of Consumer Protection being run by just another Federalist Society imp, it’s unlikely that anything material will be done to stop this sort of thing.

Source: Exercise Bike Company Yanks Features Away From Purchased Bikes Via Firmware Update | Techdirt

WhoFi: Unique ‘fingerprint’ based on Wi-Fi interactions allows reidentification of people being observed

Researchers in Italy have developed a way to create a biometric identifier for people based on the way the human body interferes with Wi-Fi signal propagation.

The scientists claim this identifier, a pattern derived from Wi-Fi Channel State Information, can re-identify a person in other locations most of the time when a Wi-Fi signal can be measured. Observers could therefore track a person as they pass through signals sent by different Wi-Fi networks – even if they’re not carrying a phone.

In the past decade or so, scientists have found that Wi-Fi signals can be used for various sensing applications, such as seeing through walls, detecting falls, sensing the presence of humans, and recognizing gestures including sign language.

Following the approval of the IEEE 802.11bf specification in 2020, the Wi-Fi Alliance began promoting Wi-Fi Sensing, positioning Wi-Fi as something more than a data transit mechanism.

The researchers – Danilo Avola, Daniele Pannone, Dario Montagnini, and Emad Emam, from La Sapienza University of Rome – call their approach “WhoFi”, as described in a preprint paper titled, “WhoFi: Deep Person Re-Identification via Wi-Fi Channel Signal Encoding.”

(The authors presumably didn’t bother checking whether the WhoFi name was taken. But an Oklahoma-based provider of online community spaces shares the same name.)

Who are you, really?

Re-identification, the researchers explain, is a common challenge in video surveillance. It’s not always clear when a subject captured on video is the same person recorded at another time and/or place.

Re-identification doesn’t necessarily reveal a person’s identity. Instead, it is just an assertion that the same surveilled subject appears in different settings. In video surveillance, this might be done by matching the subject’s clothes or other distinct features in different recordings. But that’s not always possible.

The Sapienza computer scientists say Wi-Fi signals offer superior surveillance potential compared to cameras because they’re not affected by light conditions, can penetrate walls and other obstacles, and they’re more privacy-preserving than visual images.

“The core insight is that as a Wi-Fi signal propagates through an environment, its waveform is altered by the presence and physical characteristics of objects and people along its path,” the authors state in their paper. “These alterations, captured in the form of Channel State Information (CSI), contain rich biometric information.”

CSI in the context of Wi-Fi devices refers to information about the amplitude and phase of electromagnetic transmissions. These measurements, the researchers say, interact with the human body in a way that results in person-specific distortions. When processed by a deep neural network, the result is a unique data signature.
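To make the re-identification step concrete: the paper's deep-network signature can be thought of as a fixed-length vector, and matching as a similarity search over a gallery of known signatures. The sketch below is purely illustrative; the hard-coded "embeddings" stand in for the output of the researchers' actual CSI encoder, and the threshold is an assumed parameter.

```python
import math

# Illustrative sketch of signature-based re-identification. The paper uses a
# transformer encoder over CSI amplitude/phase; the vectors here are stand-ins
# for that network's output embeddings.

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def reidentify(query, gallery, threshold=0.9):
    """Match a new CSI signature against a gallery of known signatures."""
    best_id, best_sim = None, -1.0
    for person_id, signature in gallery.items():
        sim = cosine_similarity(query, signature)
        if sim > best_sim:
            best_id, best_sim = person_id, sim
    return best_id if best_sim >= threshold else None

# Toy gallery: embeddings captured at location A
gallery = {"person_1": [0.9, 0.1, 0.3], "person_2": [0.1, 0.8, 0.5]}
# A slightly noisy capture of the same body at location B
query = [0.88, 0.12, 0.31]
print(reidentify(query, gallery))  # person_1
```

This is also why the technique enables cross-network tracking: the gallery can be built from one Wi-Fi network's measurements and queried against another's, with no phone or tag required.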

Researchers proposed a similar technique, dubbed EyeFi, in 2020, and asserted it was accurate about 75 percent of the time.

The Rome-based researchers who proposed WhoFi claim their technique makes accurate matches on the public NTU-Fi dataset up to 95.5 percent of the time when the deep neural network uses the transformer encoding architecture.

“The encouraging results achieved confirm the viability of Wi-Fi signals as a robust and privacy-preserving biometric modality, and position this study as a meaningful step forward in the development of signal-based Re-ID systems,” the authors say.

Source: WhoFi: Unique ‘fingerprint’ based on Wi-Fi interactions • The Register

Google to Gemini Users: We’re Going to Look at Your Texts Whether You Like It or Not

[…]As highlighted in a Reddit post, Google recently sent out an email to some Android users informing them that Gemini will now be able to “help you use Phone, Messages, WhatsApp, and Utilities on your phone whether your Gemini Apps Activity is on or off.” That change, according to the email, will take place on July 7. In short, that sounds—at least on the surface—like whether you have opted in or out, Gemini has access to all of those very critical apps on your device.

[Screenshot: Google email about Gemini privacy, via Reddit / Gizmodo]

Google continues in the email, which was screenshotted by Android Police, by stating that “if you don’t want to use these features, you can turn them off in Apps settings page,” but doesn’t elaborate on where to find that page or what exactly will be disabled if you avail yourself of that setting option. Notably, when App Activity is enabled, Google stores information on your Gemini usage (inputs and responses, for example) for up to 72 hours, and some of that data may actually be reviewed by a human. That’s all to say that enabling Gemini access to those critical apps by default may be a bridge too far for some who are worried about protecting their privacy or wary of AI in general.

[…]

The worst part is, if we’re not careful, all of that information might end up being collected without our consent, or at least without our knowledge. I don’t know about you, but as much as I want AI to order me a cab, I think keeping my text messages private is a higher priority.

Source: Google to Gemini Users: We’re Going to Look at Your Texts Whether You Like It or Not

Judge Denies Creating ‘Mass Surveillance Program’ Harming All ChatGPT Users after ordering all chats (including “deleted” ones) be kept indefinitely

An anonymous reader quotes a report from Ars Technica: After a court ordered OpenAI to “indefinitely” retain all ChatGPT logs, including deleted chats, of millions of users, two panicked users tried and failed to intervene. The order sought to preserve potential evidence in a copyright infringement lawsuit raised by news organizations. In May, Judge Ona Wang, who drafted the order, rejected the first user’s request (PDF) on behalf of his company simply because the company should have hired a lawyer to draft the filing. But more recently, Wang rejected (PDF) a second claim from another ChatGPT user, and that order went into greater detail, revealing how the judge is considering opposition to the order ahead of oral arguments this week, which were urgently requested by OpenAI.

The second request (PDF) to intervene came from a ChatGPT user named Aidan Hunt, who said that he uses ChatGPT “from time to time,” occasionally sending OpenAI “highly sensitive personal and commercial information in the course of using the service.” In his filing, Hunt alleged that Wang’s preservation order created a “nationwide mass surveillance program” affecting and potentially harming “all ChatGPT users,” who received no warning that their deleted and anonymous chats were suddenly being retained. He warned that the order limiting retention to just ChatGPT outputs carried the same risks as including user inputs, since outputs “inherently reveal, and often explicitly restate, the input questions or topics input.”

Hunt claimed that he only learned that ChatGPT was retaining this information — despite policies specifying they would not — by stumbling upon the news in an online forum. Feeling that his Fourth Amendment and due process rights were being infringed, Hunt sought to influence the court’s decision and proposed a motion to vacate the order that said Wang’s “order effectively requires Defendants to implement a mass surveillance program affecting all ChatGPT users.” […] OpenAI will have a chance to defend panicked users on June 26, when Wang hears oral arguments over the ChatGPT maker’s concerns about the preservation order. In his filing, Hunt explained that among his worst fears is that the order will not be blocked and that chat data will be disclosed to news plaintiffs who may be motivated to publicly disseminate the deleted chats. That could happen if news organizations find evidence of deleted chats they say are likely to contain user attempts to generate full news articles.

Wang suggested that there is no risk at this time since no chat data has yet been disclosed to the news organizations. That could mean that ChatGPT users may have better luck intervening after chat data is shared, should OpenAI’s fight to block the order this week fail. But that’s likely no comfort to users like Hunt, who worry that OpenAI merely retaining the data — even if it’s never shared with news organizations — could cause severe and irreparable harms. Some users appear to be questioning how hard OpenAI will fight. In particular, Hunt is worried that OpenAI may not prioritize defending users’ privacy if other concerns — like “financial costs of the case, desire for a quick resolution, and avoiding reputational damage” — are deemed more important, his filing said.

Source: Judge Denies Creating ‘Mass Surveillance Program’ Harming All ChatGPT Users

NB you would be pretty dense to think that anything you put into an externally hosted GPT would not be kept and used by that company for AI training and other analysis, so it’s not surprising that this data could be (and will be) requisitioned by other corporations and of course governments.

Makers of air fryers and smart speakers told to respect users’ right to privacy in UK

Makers of air fryers, smart speakers, fertility trackers and smart TVs have been told to respect people’s rights to privacy by the UK Information Commissioner’s Office (ICO).

People have reported feeling powerless to control how data is gathered, used and shared in their own homes and on their bodies.

After reports of air fryers designed to listen in to their surroundings and public concerns that digitised devices collect an excessive amount of personal information, the data protection regulator has issued its first guidance on how people’s personal information should be handled.

It is demanding that manufacturers and data handlers ensure data security, be transparent with consumers, and regularly delete collected information.

Stephen Almond, the executive director for regulatory risk at the ICO, said: “Smart products know a lot about us: who we live with, what music we like, what medication we are taking and much more.

“They are designed to make our lives easier, but that doesn’t mean they should be collecting an excessive amount of information … we shouldn’t have to choose between enjoying the benefits of smart products and our own privacy.

“We all rightly have a greater expectation of privacy in our own homes, so we must be able to trust smart products are respecting our privacy, using our personal information responsibly and only in ways we would expect.”

The new guidance cites a wide range of devices that are broadly known as part of the “internet of things”, which collect data that needs to be carefully handled. These include smart fertility trackers that record the dates of users’ periods and their body temperature, send this data back to the manufacturer’s servers, and infer fertile days from it.

Smart speakers that listen in not only to their owner but also to other members of their family and visitors to their home should be designed so users can configure product settings to minimise the personal information they collect.

[…]

Source: Makers of air fryers and smart speakers told to respect users’ right to privacy | Technology | The Guardian

Wouldn’t it be nice if they benefited from the same privacy laws as exist in the EU?

Pornhub Back Online in France After Court Ruling About Age Verification

Many porn sites, including Pornhub, YouPorn, and RedTube, went dark earlier this month in France to protest a new age verification law that would have required the websites to collect ID from users. But those sites went back online Friday after a new ruling from a French court suspended enforcement of the law until it can be determined whether it conflicts with existing European Union rules, according to France24.

Aylo, the company that owns Pornhub, has previously said that requiring age verification “creates an unacceptable security risk” and warned that setting up that kind of process makes people vulnerable to hacks and leaks of sensitive information. The French law would’ve required Aylo to verify user ages with a government-issued ID or a credit card.

[…]

Age verification laws for porn websites have been a controversial issue globally, with the U.S. seeing a dramatic uptick in states passing such laws in recent years. Nineteen states now have laws that require age verification for porn sites, meaning that anyone who wants to access Pornhub in places like Florida and Texas needs to use a VPN.

Australia recently passed a law banning social media use for anyone under the age of 16, regardless of explicit content, which is now making its way through the expected challenges. The law had a 12-month buffer built in to allow the country’s internet safety regulator to figure out how to implement it. Tech giants like Meta and TikTok were dealt a blow on Friday after the regulator issued a report stating that age verification “can be private, robust and effective,” though trials of how best to make the law work are ongoing, according to ABC News in Australia.

Source: Pornhub Back Online in France After Court Ruling About Age Verification

Nope. Age verification is easily broken and is a huge security / privacy risk.

Nintendo will record your GameChat audio and video

Last month, ahead of the launch of the Switch 2 and its GameChat communication features, Nintendo updated its privacy policy to note that the company “may also monitor and record your video and audio interactions with other users.” Now that the Switch 2 has officially launched, we have a clearer understanding of how the console handles audio and video recorded during GameChat sessions, as well as when that footage may be sent to Nintendo or shared with partners, including law enforcement. Before using GameChat on Switch 2 for the first time, you must consent to a set of GameChat Terms displayed on the system itself. These terms warn that chat content is “recorded and stored temporarily” both on your system and the system of those you chat with. But those stored recordings are only shared with Nintendo if a user reports a violation of Nintendo’s Community Guidelines, the company writes.

That reporting feature lets a user “review a recording of the last three minutes of the latest three GameChat sessions” to highlight a particular section for review, suggesting that chat sessions are not being captured and stored in full. The terms also lay out that “these recordings are available only if the report is submitted within 24 hours,” suggesting that recordings are deleted from local storage after a full day. If a report is submitted to Nintendo, the company warns that it “may disclose certain information to third parties, such as authorities, courts, lawyers, or subcontractors reviewing the reported chats.” If you don’t consent to the potential for such recording and sharing, you’re prevented from using GameChat altogether.
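Taking the terms at face value (last three minutes of each of the latest three sessions, reports usable only within 24 hours of a session), the retention window can be modeled as below. This is a minimal sketch of the rules as stated, not Nintendo's actual implementation; all names are my own.

```python
from datetime import datetime, timedelta

# Models the GameChat retention rules as Nintendo's terms describe them
# (illustrative only, not Nintendo's actual implementation).
REPORT_WINDOW = timedelta(hours=24)   # recordings usable only this long after a session
CLIP_LENGTH = timedelta(minutes=3)    # last three minutes per session
MAX_SESSIONS = 3                      # only the latest three sessions are retained

def reviewable_clips(session_end_times, report_time):
    """Return the (start, end) clips a moderator could still review for a report."""
    latest = sorted(session_end_times, reverse=True)[:MAX_SESSIONS]
    return [(end - CLIP_LENGTH, end) for end in latest
            if report_time - end <= REPORT_WINDOW]

now = datetime(2025, 6, 10, 12, 0)
sessions = [now - timedelta(hours=30),     # too old: past the 24-hour window
            now - timedelta(hours=2),
            now - timedelta(minutes=10)]
print(len(reviewable_clips(sessions, now)))  # 2
```

The point of the model is that both limits compound: even a prompt report can surface at most nine minutes of audio/video, and a late one surfaces nothing.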

Nintendo is extremely clear that the purpose of its recording and review system is “to protect GameChat users, especially minors” and “to support our ability to uphold our Community Guidelines.” This kind of human moderator review of chats is pretty common in the gaming world and can even apply to voice recordings made by various smart home assistants. […] Overall, the time-limited, local-unless-reported recordings Nintendo makes here seem like a minimal intrusion on the average GameChat user’s privacy. Still, if you’re paranoid about Nintendo potentially seeing and hearing what’s going on in your living room, it’s good to at least be aware of it.

Source: Nintendo Warns Switch 2 GameChat Users: ‘Your Chat Is Recorded’ (arstechnica.com)

The US Is Storing Migrant Children’s DNA in a Criminal Database

The United States government has collected DNA samples from upwards of 133,000 migrant children and teenagers—including at least one 4-year-old—and uploaded their genetic data into a national criminal database used by local, state, and federal law enforcement, according to documents reviewed by WIRED. The records, quietly released by the US Customs and Border Protection earlier this year, offer the most detailed look to date at the scale of CBP’s controversial DNA collection program. They reveal for the first time just how deeply the government’s biometric surveillance reaches into the lives of migrant children, some of whom may still be learning to read or tie their shoes—yet whose DNA is now stored in a system originally built for convicted sex offenders and violent criminals.

[…]

Spanning from October 2020 through the end of 2024, the records show that CBP swabbed the cheeks of between 829,000 and 2.8 million people, with experts estimating that the true figure, excluding duplicates, is likely well over 1.5 million. That number includes as many as 133,539 children and teenagers. These figures mark a sweeping expansion of biometric surveillance—one that explicitly targets migrant populations, including children.

[…]

Under current rules, DNA is generally collected from anyone who is also fingerprinted. According to DHS policy, 14 is the minimum age at which fingerprinting becomes routine.

[…]

“Taking DNA from a 4-year old and adding it into CODIS flies in the face of any immigration purpose,” she says, adding, “That’s not immigration enforcement. That’s genetic surveillance.”

Multiple studies show no link between immigration and increased crime.

In 2024, Glaberson coauthored a report called “Raiding the Genome” that was the first to try to quantify DHS’s 2020 expansion of DNA collection. It found that if DHS continues to collect DNA at the rate the agency itself projects, one-third of the DNA profiles in CODIS by 2034 will have been taken by DHS, and seemingly without any real due process—the protections that are supposed to be in place before law enforcement compels a person to hand over their most sensitive information.

Regeneron to Acquire all 23andMe genetic data for $256m

23andMe Holding Co. (“23andMe” or the “Company”) (OTC: MEHCQ), a leading human genetics and biotechnology company, today announced that it has entered into a definitive agreement for the sale of 23andMe to Regeneron Pharmaceuticals, Inc. (“Regeneron”) (NASDAQ: REGN), a leading U.S.-based, NASDAQ-listed biotechnology company that invents, develops and commercializes life-transforming medicines for people with serious diseases. The agreement includes Regeneron’s commitment to comply with the Company’s privacy policies and applicable law, process all customer personal data in accordance with the consents, privacy policies and statements, terms of service, and notices currently in effect and have security controls in place designed to protect such data.

[…]

Under the terms of the agreement, Regeneron will acquire substantially all of the assets of the Company, including the Personal Genome Service (PGS), Total Health and Research Services business lines, for a purchase price of $256 million. The agreement does not include the purchase of the Company’s Lemonaid Health subsidiary, which the Company plans to wind down in an orderly manner, subject to and in accordance with the agreement.

[…]

 

Source: Regeneron, A Leading U.S. Biotechnology Company, to Acquire

New Orleans police secretly used facial recognition on over 200 live camera feeds

New Orleans’ police force secretly used constant facial recognition to seek out suspects for two years. An investigation by The Washington Post discovered that the city’s police department was using facial recognition technology on a privately owned camera network to continually look for suspects. This application seems to violate a city ordinance passed in 2022 that required facial recognition be used by the NOLA police only to search for specific suspects of violent crimes, and that details about the scans’ use be provided to the city council. However, WaPo found that officers did not reveal their reliance on the technology in the paperwork for several arrests where facial recognition was used, and none of those cases were included in mandatory city council reports.

“This is the facial recognition technology nightmare scenario that we have been worried about,” said Nathan Freed Wessler, an ACLU deputy director. “This is the government giving itself the power to track anyone — for that matter, everyone — as we go about our lives walking around in public.” Wessler added that this is the first known case in a major US city where police used AI-powered automated facial recognition to identify people in live camera feeds for the purpose of making immediate arrests.

Police use and misuse of surveillance technology has been thoroughly documented over the years. Although several US cities and states have placed restrictions on how law enforcement can use facial recognition, those limits won’t do anything to protect privacy if they’re routinely ignored by officers.

Read the full story on the New Orleans PD’s surveillance program at The Washington Post.

Source: New Orleans police secretly used facial recognition on over 200 live camera feeds

FBI Director Kash Patel Abruptly Closes Internal Watchdog Office Overseeing Surveillance Compliance

If there’s one thing the Federal Bureau of Investigation does well, it’s mass surveillance. Several years ago, then attorney general William Barr established an internal office to curb the FBI’s abuse of one controversial surveillance law. But recently, the FBI’s long-time hater (and, ironically, current director) Kash Patel shut down the watchdog group with no explanation.

On Tuesday, the New York Times reported that Patel suddenly closed the Office of Internal Auditing that Barr created in 2020. The office’s leader, Cindy Hall, abruptly retired. People familiar with the matter told the outlet that the closure of the aforementioned watchdog group alongside the Office of Integrity and Compliance are part of internal reorganization. Sources also reportedly said that Hall was trying to expand the office’s work, but her attempts to onboard new employees were stopped by the Trump administration’s hiring freezes.

The Office of Internal Auditing was a response to controversy surrounding the FBI’s use of Section 702 of the Foreign Intelligence Surveillance Act. The 2008 law primarily addresses surveillance of non-Americans abroad. However, Jeramie Scott, senior counselor at the Electronic Privacy Information Center, told Gizmodo via email that the FBI “has repeatedly abused its ability to search Americans’ communications ‘incidentally’ collected under Section 702” to conduct warrantless spying.

Patel has not released any official comment regarding his decision to close the office. But Elizabeth Goitein, senior director at the Brennan Center for Justice, told Gizmodo via email, “It is hard to square this move with Mr. Patel’s own stated concerns about the FBI’s use of Section 702.”

Last year, Congress reauthorized Section 702 despite mounting concerns over its misuses. Although Congress introduced some reforms, the updated legislation actually expanded the government’s surveillance capabilities. At the time, Patel slammed the law’s passage, stating that former FBI director Christopher Wray, who Patel once tried to sue, “was caught last year illegally using 702 collection methods against Americans 274,000 times.” (Per the New York Times, Patel is likely referencing a declassified 2023 opinion by the FISA court that used the Office of Internal Auditing’s findings to determine the FBI made 278,000 bad queries over several years.)

According to Goitein, the office has “played a key role in exposing FBI abuses of Section 702, including warrantless searches for the communication of members of Congress, judges, and protesters.” And ironically, Patel inadvertently drove its creation after attacking the FBI’s FISA applications to wiretap a former Trump campaign advisor in 2018 while investigating potential Russian election interference. Trump and his supporters used Patel’s attacks to push their own narrative dismissing any concerns. Last year, former representative Devin Nunes, who is now CEO of Truth Social, said Patel was “instrumental” to uncovering the “hoax and finding evidence of government malfeasance.”

Although Patel mostly peddled conspiracies, the Justice Department conducted a probe into the FBI’s investigation that raised concerns over “basic and fundamental errors” it committed. In response, Barr created the Office of Internal Auditing, stating, “What happened to the Trump presidential campaign and his subsequent Administration after the President was duly elected by the American people must never happen again.”

But since taking office, Patel has changed his tune about FISA. During his confirmation hearing, Patel referred to Section 702 as a “critical tool” and said, “I’m proud of the reforms that have been implemented and I’m proud to work with Congress moving forward to implement more.” However, reforms don’t mean much by themselves. As Goitein noted, “Without a separate office dedicated to surveillance compliance, [the FBI’s] abuses could go unreported and unchecked.”

[…]

Source: FBI Director Kash Patel Abruptly Closes Internal Watchdog Office Overseeing Surveillance Compliance

Russia to enforce location tracking app on all foreigners in Moscow

The Russian government has introduced a new law that makes installing a tracking app mandatory for all foreign nationals in the Moscow region.

The new proposal was announced by the chairman of the State Duma, Vyacheslav Volodin, who presented it as a measure to tackle migrant crimes.

“The adopted mechanism will allow us, using modern technologies, to strengthen control in the field of migration, and will also contribute to reducing the number of violations and crimes in this area,” stated Volodin.

Using a mobile application that all foreigners will have to install on their smartphones, the Russian state will receive the following information:

  • Residence location
  • Fingerprint
  • Face photograph
  • Real-time geo-location monitoring

“If migrants change their actual place of residence, they will be required to inform the Ministry of Internal Affairs (MVD) within three working days,” the high-ranking politician explained.

The measures will not apply to diplomats of foreign countries or citizens of Belarus.

Foreigners who fail to comply with the new law will be added to a registry of monitored individuals and deported from Russia.

Russian internet freedom observatory Roskomsvoboda’s reactions to this proposal reflect skepticism and concern.

Lawyer Anna Minushkina noted that the proposal violates Articles 23 and 24 of the Russian Constitution, guaranteeing the right to privacy.

President of the Uzbek Community in Moscow, Viktor Teplyankov, characterized the initiative as “ill-conceived and difficult to implement,” expressing doubts about its feasibility.

Finally, PSP Foundation’s Andrey Yakimov warned that such aggressive measures are bound to deter potential labor migrants, creating a different problem in the country.

The proposal hasn’t reached its final form yet; specifics, such as what happens if a device is lost or stolen, are to be addressed in upcoming meetings between the Ministry and regional authorities.

The mass-surveillance experiment will run until September 2029, and if deemed successful, the mechanism will extend to cover more parts of the country.

Source: Russia to enforce location tracking app on all foreigners in Moscow

Google found not compliant with GDPR when registering new accounts – sends the data to 70 services without user knowledge

According to a ruling by the Berlin Regional Court, Google must disclose to its users which of its more than 70 services process their data when they register for an account. The civil chamber thereby upheld a lawsuit filed by the Federation of German Consumer Organisations (vzbv). The consumer advocates had complained that neither the “express personalization” nor the alternative “manual personalization” complied with the legal requirements of the European General Data Protection Regulation (GDPR).

The ruling against Google Ireland Ltd. was handed down on March 25, 2025, but was only published on Friday (case number 15 O 472/22). The decision is not yet legally binding because the internet company has appealed. Google stated that it disagrees with the Regional Court’s decision.

What does Google process data for?

The consumer advocates argued that consumers must know what Google processes their data for when registering, and must be able to decide freely how their data is processed. The judges at the Berlin Regional Court confirmed this legal opinion. The ruling states: “In this case, transparency is lacking simply because the defendant does not provide information about the individual Google services, Google apps, Google websites, or Google partners for which the data is to be used.” For this reason, the scope of consent is completely unknown to the user.

Google: Account creation has changed

Google stated that the ruling concerned an old account-creation process that has since been changed. “What hasn’t changed is our commitment to enabling our users to use Google on their terms, with clear choices and control options based on extensive research, testing, and guidelines from European data protection authorities,” it stated.

In the proceedings, Google argued that listing all services would result in excessively long text and harm transparency. The court rejected this argument: in its view, information about the scope of consent is among the minimum details required by law. The regional court was particularly concerned that with “express personalization,” users could only consent to all data usage or cancel the process entirely; a differentiated refusal was not possible. Even with “manual personalization,” consumers could not refuse the use of their location data.

Source: Landgericht Berlin: Google-Accounterstellung verletzte DSGVO | heise online

Google will pay Texas $1.4B to settle claims the company collected users’ data without permission

[…] “For years, Google secretly tracked people’s movements, private searches, and even their voiceprints and facial geometry through their products and services. I fought back and won.”

The agreement settles several claims Texas made against the search giant in 2022 related to geolocation, incognito searches and biometric data. The state argued Google was “unlawfully tracking and collecting users’ private data.”

Paxton claimed, for example, that Google collected millions of biometric identifiers, including voiceprints and records of face geometry, through such products and services as Google Photos and Google Assistant.

Google spokesperson José Castañeda said the agreement settles an array of “old claims,” some of which relate to product policies the company has already changed.

[…]

Texas previously reached two other key settlements with Google within the last two years, including one in December 2023 in which the company agreed to pay $700 million and make several other concessions to settle allegations that it had been stifling competition against its Android app store.

Meta has also agreed to a $1.4 billion settlement with Texas in a privacy lawsuit over allegations that the tech giant used users’ biometric data without their permission.

Source: Google will pay Texas $1.4B to settle claims the company collected users’ data without permission | AP News

US senator introduces bill calling for location-tracking on AI chips to limit China access

A U.S. senator introduced a bill on Friday that would direct the Commerce Department to require location verification mechanisms for export-controlled AI chips, in an effort to curb China’s access to advanced semiconductor technology.
Called the “Chip Security Act,” the bill calls for AI chips under export regulations, and products containing those chips, to be fitted with location-tracking systems to help detect diversion, smuggling or other unauthorized use of the product.
“With these enhanced security measures, we can continue to expand access to U.S. technology without compromising our national security,” Republican Senator Tom Cotton of Arkansas said.
The bill also calls for companies exporting the AI chips to report to the Bureau of Industry and Security if their products have been diverted away from their intended location or subject to tampering attempts.
The move comes days after U.S. President Donald Trump said he would rescind and modify a Biden-era rule that curbed the export of sophisticated AI chips with the goal of protecting U.S. leadership in AI and blocking China’s access.
U.S. Representative Bill Foster, a Democrat from Illinois, also plans to introduce a bill on similar lines in the coming weeks, Reuters reported on Monday.
Restricting China’s access to AI technology that could enhance its military capabilities has been a key focus for U.S. lawmakers, and reports of widespread smuggling of Nvidia’s (NVDA.O) […]

Source: US senator introduces bill calling for location-tracking on AI chips to limit China access | Reuters

Of course, it also adds another layer of US government surveillance if you want to buy a graphics card. I’m not sure how letting anyone track all of your PCs doesn’t itself compromise national security.

Neurotech companies are selling your brain data, senators warn

Three Democratic senators are sounding the alarm over brain-computer interface (BCI) technologies’ ability to collect — and potentially sell — our neural data. In a letter to the Federal Trade Commission (FTC), Sens. Chuck Schumer (D-NY), Maria Cantwell (D-WA), and Ed Markey (D-MA) called for an investigation into neurotechnology companies’ handling of user data, and for tighter regulations on their data-sharing policies.

“Unlike other personal data, neural data — captured directly from the human brain — can reveal mental health conditions, emotional states, and cognitive patterns, even when anonymized,” the letter reads. “This information is not only deeply personal; it is also strategically sensitive.”

While the concept of neural technologies may conjure up images of brain implants like Elon Musk’s Neuralink, there are far less invasive — and less regulated — neurotech products on the market, including headsets that help people meditate, purportedly trigger lucid dreaming, and promise to help users with online dating by helping them swipe through apps “based on your instinctive reaction.” These consumer products gobble up insights about users’ neurological data — and since they aren’t categorized as medical devices, the companies behind them aren’t barred from sharing that data with third parties.

“Neural data is the most private, personal, and powerful information we have—and no company should be allowed to harvest it without transparency, ironclad consent, and strict guardrails. Yet companies are collecting it with vague policies and zero transparency,” Schumer told The Verge via email.

The letter cites a 2024 report by the Neurorights Foundation, which found that most neurotech companies not only have few safeguards on user data but also have the ability to share sensitive information with third parties. The report looked at the data policies of 30 consumer-facing BCI companies and found that all but one “appear to have access to” users’ neural data, “and provide no meaningful limitations to this access.” The Neurorights Foundation only surveyed companies whose products are available to consumers without the help of a medical professional; implants like those made by Neuralink weren’t among them.

The companies surveyed by the Neurorights Foundation make it difficult for users to opt out of having their neurological data shared with third parties. Just over half the companies mentioned in the report explicitly let consumers revoke consent for data processing, and only 14 of the 30 give users the ability to delete their data. In some instances, user rights aren’t universal — for example, some companies only let users in the European Union delete their data but don’t grant the same rights to users elsewhere in the world.

To safeguard against potential abuses, the senators are calling on the FTC to:

  • investigate whether neurotech companies are engaging in unfair or deceptive practices that violate the FTC Act
  • compel companies to report on data handling, commercial practices, and third-party access
  • clarify how existing privacy standards apply to neural data
  • enforce the Children’s Online Privacy Protection Act as it relates to BCIs
  • begin a rulemaking process to establish safeguards for neural data, and setting limits on secondary uses like AI training and behavioral profiling
  • and ensure that both invasive and noninvasive neurotechnologies are subject to baseline disclosure and transparency standards, even when the data is anonymized

Though the senators’ letter calls out Neuralink by name, Musk’s brain implant tech is already subject to more regulations than other BCI technologies. Since Neuralink’s brain implant is considered a “medical” technology, it’s required to comply with the Health Insurance Portability and Accountability Act (HIPAA), which safeguards people’s medical data.

Stephen Damianos, the executive director of the Neurorights Foundation, said that HIPAA may not have entirely caught up to existing neurotechnologies, especially with regards to “informed consent” requirements.

“There are long-established and validated models for consent from the medical world, but I think there’s work to be done around understanding the extent to which informed consent is sufficient when it comes to neurotechnology,” Damianos told The Verge. “The analogy I like to give is, if you were going through my apartment, I would know what you would and wouldn’t find in my apartment, because I have a sense of what exactly is in there. But brain scans are overbroad, meaning they collect more data than what is required for the purpose of operating a device. It’s extremely hard — if not impossible — to communicate to a consumer or a patient exactly what can today and in the future be decoded from their neural data.”

Data collection becomes even trickier for “wellness” neurotechnology products, which don’t have to comply with HIPAA, even when they advertise themselves as helping with mental health conditions like depression and anxiety.

Damianos said there’s a “very hazy gray area” between medical devices and wellness devices.

“There’s this increasingly growing class of devices that are marketed for health and wellness as distinct from medical applications, but there can be a lot of overlap between those applications,” Damianos said. The dividing line is often whether a medical intermediary is required to help someone obtain a product, or whether they can “just go online, put in your credit card, and have it show up in a box a few days later.”

There are very few regulations on neurotechnologies advertised as being for “wellness.” In April 2024, Colorado passed the first-ever legislation protecting consumers’ neural data. The state updated its existing Consumer Protection Act, which protects users’ “sensitive data.” Under the updated legislation, “sensitive data” now includes “biological data” like biological, genetic, biochemical, physiological, and neural information. And in September 2024, California amended its Consumer Privacy Act to protect neural data.

“We believe in the transformative potential of these technologies, and I think sometimes there’s a lot of doom and gloom about them,” Damianos told The Verge. “We want to get this moment right. We think it’s a really profound moment that has the potential to reshape what it means to be human. Enormous risks come from that, but we also believe in leveraging the potential to improve people’s lives.”

Source: Neurotech companies are selling your brain data, senators warn | The Verge

Employee monitoring app exposes 21M work screens​ to internet

A surveillance tool meant to keep tabs on employees is leaking millions of real-time screenshots onto the open web.

Your boss watching your screen isn’t the end of the story. Everyone else might be watching, too. Researchers at Cybernews have uncovered a major privacy breach involving WorkComposer, a workplace surveillance app used by over 200,000 people across countless companies.

The app, designed to track productivity by logging activity and snapping regular screenshots of employees’ screens, left over 21 million images exposed in an unsecured Amazon S3 bucket, broadcasting how workers go about their day frame by frame.

[…]

WorkComposer is one of many time-tracking tools that have crept into modern work culture. Marketed as a way to keep teams “accountable,” the software logs keystrokes, tracks how long you spend on each app, and snaps desktop screenshots every few minutes.

The leak shows just how dangerous this setup becomes when basic security hygiene is ignored. A leak of this magnitude turns everyday work activity into a goldmine for cybercriminals.

[…]

The leak’s real-time nature only amplifies the danger, as threat actors could monitor unfolding business operations as they happen, giving them access to otherwise locked-down environments.

[…]

Source: Employee monitoring app exposes 21M work screens​ | Cybernews

Can you imagine the time lost by employers going over all this data?

Microsoft’s new built-in Windows spying tool, Recall, is a really great idea too. Not.

A Data Breach at Yale New Haven Health Compromised 5.5 Million Patients’ Information

[…] Yale New Haven Health (YNHHS), a massive nonprofit healthcare network in Connecticut. Hackers stole the data of more than 5.5 million individuals during an attack in March 2025.

[…]

According to a public notice on the YNHHS website, the organization discovered “unusual activity” on its system on March 8, 2025, which was later identified as unauthorized third-party access that allowed bad actors to copy certain patient data. While the information stolen varies by individual, it may include the following:

  • Name
  • Date of birth
  • Address
  • Phone number
  • Email address
  • Race
  • Ethnicity
  • Social Security number
  • Patient type
  • Medical record number

YNHHS says the breach did not include access to medical records, treatment information, or financial data (such as account and payment information).

[…]

Source: A Data Breach at Yale New Haven Health Compromised 5.5 Million Patients’ Information | Lifehacker

Wait – race and ethnicity?!

Perplexity CEO says its browser will track everything users do online to sell ‘hyper personalized’ ads

CEO Aravind Srinivas said this week on the TBPN podcast that one reason Perplexity is building its own browser is to collect data on everything users do outside of its own app. This is so it can sell premium ads.

“That’s kind of one of the other reasons we wanted to build a browser, is we want to get data even outside the app to better understand you,” Srinivas said. “Because some of the prompts that people do in these AIs is purely work-related. It’s not like that’s personal.”

And work-related queries won’t help the AI company build an accurate-enough dossier.

“On the other hand, what are the things you’re buying; which hotels are you going [to]; which restaurants are you going to; what are you spending time browsing, tells us so much more about you,” he explained.

Srinivas believes that Perplexity’s browser users will be fine with such tracking because the ads should be more relevant to them.

“We plan to use all the context to build a better user profile and, maybe you know, through our discover feed we could show some ads there,” he said.

The browser, named Comet, suffered setbacks but is on track to be launched in May, Srinivas said.

He’s not wrong, of course. Quietly following users around the internet helped Google become the roughly $2 trillion market cap company it is today.

[…]

Meta’s ad tracking technology, Pixels, which is embedded on websites across the internet, is how Meta gathers data, even on people that don’t have Facebook or Instagram accounts. Even Apple, which has marketed itself as a privacy protector, can’t resist tracking users’ locations to sell advertising in some of its apps by default.

On the other hand, this kind of thing has led people across the political spectrum in the U.S. and in Europe to distrust big tech.

The irony of Srinivas openly explaining his browser-tracking ad-selling ambitions this week also can’t be overstated.

Google is currently in court fighting the U.S. Department of Justice, which has alleged Google behaved in monopolistic ways to dominate search and online advertising. The DOJ wants the judge to order Google to divest Chrome.

Both OpenAI and Perplexity — not surprisingly, given Srinivas’ reasons — said they would buy the Chrome browser business if Google was forced to sell.

Source: Perplexity CEO says its browser will track everything users do online to sell ‘hyper personalized’ ads | TechCrunch

Yup, so even if Mozilla is making Firefox more invasive, it’s still better than these guys.

Blue Shield of California Exposed the Data of 4.7 Million People to Google for targeted advertising

Blue Shield of California shared the protected health information of 4.7 million individuals with Google over a nearly three-year period, a data breach that impacts the majority of its nearly 6 million members, according to reporting from Bleeping Computer.

This isn’t the only large data breach to affect a healthcare organization in the last year alone. Community Health Center records were hacked in October 2024, compromising more than a million individuals’ data, along with an attack on lab testing company Lab Services Cooperative, which affected records of 1.6 million Planned Parenthood patients. UnitedHealth Group suffered a breach in February 2024, resulting in the leak of more than 100 million people’s data.

What happened with Blue Shield of California?

According to an April 9 notice posted on Blue Shield of California’s website, the company allowed certain data, including protected health information, to be shared with Google Ads through Google Analytics, which may have allowed Google to serve targeted ads back to members. While not discovered until Feb. 11, 2025, the leak occurred for several years, from April 2021 to January 2024, when the connection between Google Analytics and Google Ads was severed on Blue Shield websites.

The following Blue Shield member information may have been compromised:

  • Insurance plan name, type, and group number
  • City and zip code
  • Gender
  • Family size
  • Blue Shield assigned identifiers for online accounts
  • Medical claim service date and provider
  • Patient name
  • Patient financial responsibility
  • “Find a Doctor” search criteria and results

According to the notice, no additional personal data, such as Social Security numbers, driver’s license numbers, or banking and credit card information, was disclosed. Blue Shield also states that no bad actor was involved, nor has it confirmed that the information has been used maliciously.

[…]

Source: Blue Shield of California Exposed the Data of 4.7 Million People to Google | Lifehacker

Discord Wants Your Face: Begins Testing Facial Scans for Age Verification

Discord has begun requiring some users in the United Kingdom and Australia to verify their age through a facial scan before being permitted to access sensitive content. The chat app’s new process has been described as an “experiment,” and comes in response to laws passed in those countries that place guardrails on youth access to online platforms. Discord has also been the target of concerns that it does not sufficiently protect minors from sexual content.

Users may be asked to verify their age when encountering content that has been flagged by Discord’s systems as being sensitive in nature, or when they change their settings to enable access to sensitive content. The app will ask users to scan their face through a computer or smartphone webcam; alternatively, they can scan a driver’s license or other form of ID.

[…]

Source: Discord Begins Testing Facial Scans for Age Verification

Age verification is impossible to do correctly, incredibly privacy-invasive, and a tempting target for hackers. The UK, Australia, and every other country considering age verification are seriously endangering their citizens.

Fortunately you can always hold up a picture from a magazine in front of the webcam.

Your TV is watching you better: LG TVs’ integrated ads get more personal with tech that analyzes viewer emotions

LG TVs will soon leverage an AI model built for showing advertisements that more closely align with viewers’ personal beliefs and emotions. The company plans to incorporate a partner company’s AI tech into its TV software in order to interpret psychological factors impacting a viewer, such as personal interests, personality traits, and lifestyle choices. The aim is to show LG webOS users ads that will emotionally impact them.

The upcoming advertising approach comes via a multi-year licensing deal with Zenapse, a company describing itself as a software-as-a-service marketing platform that can drive advertiser sales “with AI-powered emotional intelligence.” LG will use Zenapse’s technology to divide webOS users into hyper-specific market segments that are supposed to be more informative to advertisers. LG Ad Solutions, LG’s advertising business, announced the partnership on Tuesday.

The technology will be used to inform ads shown on LG smart TVs’ homescreens, free ad-supported TV (FAST) channels, and elsewhere throughout webOS, per StreamTV Insider. LG will also use Zenapse’s tech to “expand new software development and go-to-market products,” it said. LG didn’t specify the duration of its licensing deal with Zenapse.

[…]

With all this information, ZenVision will group LG TV viewers into highly specified market segments, such as “goal-driven achievers,” “social connectors,” or “emotionally engaged planners,” an LG spokesperson told StreamTV Insider. Zenapse’s website for ZenVision points to other potential market segments, including “digital adopters,” “wellness seekers,” “positive impact & environment,” and “money matters.”

Companies paying to advertise on LG TVs can then target viewers based on the ZenVision-specified market segments and deliver an “emotionally intelligent ad,” as Zenapse’s website puts it.

This type of targeted advertising aims to bring advertisers more in-depth information about TV viewers than demographic data or even contextual advertising (which shows ads based on what the viewer is watching) via psychographic data. Demographic data gives advertisers viewer information, like location, age, gender, ethnicity, marital status, and income. Psychographic data is supposed to go deeper and allow advertisers to target people based on so-called psychological factors, like personal beliefs, values, and attitudes. As Salesforce explains, “psychographic segmentation delves deeper into their psyche” than relying on demographic data.

[…]

With their ability to track TV viewers’ behavior, including what they watch and search for on their TVs, smart TVs are a growing obsession for advertisers. As LG’s announcement pointed out, connected TVs (CTVs) represent “one of the fastest-growing ad segments in the US, expected to reach over $40 billion by 2027, up from $24.6 billion in 2023.”

However, as advertisers’ interest in appealing to streamers grows, so do their efforts to track and understand viewers for more targeted advertising. Both efforts could end up pushing the limits of user comfort and privacy.

[…]


Source: LG TVs’ integrated ads get more personal with tech that analyzes viewer emotions – Ars Technica

An LG TV is not exactly a cheap thing. I am paying for the whole product, not for a service. I bought a TV, not a marketing department.

Apple to Spy on User Emails and other Data on Devices to Bolster AI Technology

Apple Inc. will begin analyzing data on customers’ devices in a bid to improve its artificial intelligence platform, a move designed to safeguard user information while still helping it catch up with AI rivals.

Today, Apple typically trains AI models using synthetic data — information that’s meant to mimic real-world inputs without any personal details. But that synthetic information isn’t always representative of actual customer data, making it harder for its AI systems to work properly.

The new approach will address that problem while ensuring that user data remains on customers’ devices and isn’t directly used to train AI models. The idea is to help Apple catch up with competitors such as OpenAI and Alphabet Inc., which have fewer privacy restrictions.

The technology works like this: It takes the synthetic data that Apple has created and compares it to a recent sample of user emails within the iPhone, iPad and Mac email app. By using actual emails to check the fake inputs, Apple can then determine which items within its synthetic dataset are most in line with real-world messages.
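Apple hasn’t published code for this, but the core idea — privately selecting which synthetic samples best resemble on-device data, so only a selection leaves the device rather than the emails themselves — can be sketched roughly as follows. The bag-of-words “embedding” and the scoring are illustrative assumptions, not Apple’s actual implementation:

```python
from collections import Counter
import math

def embed(text):
    """Toy embedding: a bag-of-words frequency vector (a stand-in for a real sentence embedding)."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[t] * b[t] for t in a.keys() & b.keys())
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def best_synthetic_match(local_emails, synthetic_candidates):
    """On-device step: find which synthetic candidate most resembles the user's real emails.
    Only the winning candidate's index would ever be reported back, never the emails."""
    local_vecs = [embed(e) for e in local_emails]
    scores = [max(cosine(embed(c), lv) for lv in local_vecs) for c in synthetic_candidates]
    return scores.index(max(scores))

# Hypothetical data: the user's real messages stay local; the candidates are server-generated.
local = ["lunch tomorrow at noon?", "quarterly report attached for review"]
candidates = ["let's schedule lunch tomorrow", "your package has shipped", "win a free cruise now"]
idx = best_synthetic_match(local, candidates)
print(idx)  # → 0, the candidate closest to the user's real messages
```

In Apple’s described scheme, the reported selections are additionally protected with differential privacy, so the server only learns which synthetic messages are representative in aggregate.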

These insights will help the company improve text-related features in its Apple Intelligence platform, such as summaries in notifications, the ability to synthesize thoughts in its Writing Tools, and recaps of user messages.

[…]

The company will roll out the new system in an upcoming beta version of iOS and iPadOS 18.5 and macOS 15.5. A second beta test of those upcoming releases was provided to developers earlier on Monday.

[…]

Already, the company has relied on a technology called differential privacy to help improve its Genmoji feature, which lets users create a custom emoji. It uses that system to “identify popular prompts and prompt patterns, while providing a mathematical guarantee that unique or rare prompts aren’t discovered,” the company said in the blog post.

The idea is to track how the model responds in situations where multiple users have made the same request — say, asking for a dinosaur carrying a briefcase — and improving the results in those cases.
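Apple’s post doesn’t spell out the exact mechanism, but this kind of “popular prompts, not rare ones” guarantee is commonly built on local differential privacy via randomized response: each device sends a deliberately noisy bit, and the server unbiases the aggregate. A minimal sketch, with illustrative numbers and parameters:

```python
import math
import random

def randomize(has_prompt, epsilon, rng):
    """Randomized response: answer truthfully with probability e^eps / (e^eps + 1), else flip.
    Each device sends only this noisy bit, never the prompt itself."""
    p_truth = math.exp(epsilon) / (math.exp(epsilon) + 1)
    return has_prompt if rng.random() < p_truth else not has_prompt

def estimate_count(reports, epsilon):
    """Server side: unbias the noisy reports to estimate how many users really used the prompt."""
    p = math.exp(epsilon) / (math.exp(epsilon) + 1)
    n = len(reports)
    observed = sum(reports)
    return (observed - n * (1 - p)) / (2 * p - 1)

rng = random.Random(0)
epsilon = 2.0  # privacy budget: smaller means noisier reports and stronger deniability
true_users = 3000   # users who really asked for, say, a dinosaur carrying a briefcase
other_users = 7000
reports = [randomize(True, epsilon, rng) for _ in range(true_users)]
reports += [randomize(False, epsilon, rng) for _ in range(other_users)]
print(round(estimate_count(reports, epsilon)))  # close to the true 3000
```

A popular prompt used by thousands of people surfaces clearly in the estimate, while any single user’s noisy bit proves nothing about what they typed — which is the “mathematical guarantee” the quote refers to.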

The features are only for users who are opted in to device analytics and product improvement capabilities. Those options are managed in the Privacy and Security tab within the Settings app on the company’s devices.

[…]

Source: Apple to Analyze User Data on Devices to Bolster AI Technology

UK Effort to Keep Apple Encryption Fight Secret Is Blocked

A court has blocked a British government attempt to keep secret a legal case over its demand to access Apple Inc. user data in a victory for privacy advocates.

The UK Investigatory Powers Tribunal, a special court that handles cases related to government surveillance, said the authorities’ efforts were a “fundamental interference with the principle of open justice” in a ruling issued on Monday.

The development comes after it emerged in January that the British government had served Apple with a demand to circumvent encryption that the company uses to secure user data stored in its cloud services.

Apple challenged the request, while taking the unprecedented step of removing its advanced data protection feature for its British users. The government had sought to keep details about the demand — and Apple’s challenge of it — from being publicly disclosed.

[…]

Source: UK Effort to Keep Apple Encryption Fight Secret Is Blocked