Pornhub Back Online in France After Court Ruling About Age Verification

Many porn sites, including Pornhub, YouPorn, and RedTube, all went dark earlier this month in France to protest a new age verification law that would have required the websites to collect ID from users. But those sites went back online Friday after a new ruling from a French court suspended enforcement of the law until it can be determined whether it conflicts with existing European Union rules, according to France24.

Aylo, the company that owns Pornhub, has previously said that requiring age verification “creates an unacceptable security risk” and warned that setting up that kind of process makes people vulnerable to hacks and leaks of sensitive information. The French law would’ve required Aylo to verify user ages with a government-issued ID or a credit card.

[…]

Age verification laws for porn websites have been controversial globally, with the U.S. seeing a dramatic uptick in states passing such laws in recent years. Nineteen states now have laws that require age verification for porn sites, meaning that anyone who wants to access Pornhub in places like Florida and Texas needs to use a VPN.

Australia recently passed a law banning social media use for anyone under the age of 16, regardless of explicit content; the law is currently working its way through the expected challenges. It had a 12-month buffer built in to allow the country’s internet safety regulator to figure out how to implement it. Tech giants like Meta and TikTok were dealt a blow on Friday after the commission issued a report stating that age verification “can be private, robust and effective,” though trials of how best to make the law work are ongoing, according to ABC News in Australia.

Source: Pornhub Back Online in France After Court Ruling About Age Verification

Nope. Age verification is easily broken and is a huge security / privacy risk.

Nintendo will record your GameChat audio and video

Last month, ahead of the launch of the Switch 2 and its GameChat communication features, Nintendo updated its privacy policy to note that the company “may also monitor and record your video and audio interactions with other users.” Now that the Switch 2 has officially launched, we have a clearer understanding of how the console handles audio and video recorded during GameChat sessions, as well as when that footage may be sent to Nintendo or shared with partners, including law enforcement. Before using GameChat on Switch 2 for the first time, you must consent to a set of GameChat Terms displayed on the system itself. These terms warn that chat content is “recorded and stored temporarily” both on your system and the system of those you chat with. But those stored recordings are only shared with Nintendo if a user reports a violation of Nintendo’s Community Guidelines, the company writes.

That reporting feature lets a user “review a recording of the last three minutes of the latest three GameChat sessions” to highlight a particular section for review, suggesting that chat sessions are not being captured and stored in full. The terms also lay out that “these recordings are available only if the report is submitted within 24 hours,” suggesting that recordings are deleted from local storage after a full day. If a report is submitted to Nintendo, the company warns that it “may disclose certain information to third parties, such as authorities, courts, lawyers, or subcontractors reviewing the reported chats.” If you don’t consent to the potential for such recording and sharing, you’re prevented from using GameChat altogether.
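Nintendo hasn’t published implementation details, but the behavior described in the terms (a trailing few minutes per session, only the most recent sessions, reportable for 24 hours) maps onto a simple local retention policy. A minimal sketch in Python, with every constant and name assumed from the wording above rather than taken from Nintendo’s code:

```python
import time
from collections import deque

# Illustrative retention policy only; the constants mirror the terms quoted
# above (last 3 minutes of the last 3 sessions, reportable within 24 hours),
# not any actual Nintendo implementation.
WINDOW_SECONDS = 3 * 60           # keep only the trailing 3 minutes per session
MAX_SESSIONS = 3                  # keep only the latest 3 sessions
REPORT_DEADLINE = 24 * 60 * 60    # recordings become unreportable after 24 h

class ChatSession:
    def __init__(self):
        self.ended_at = None
        self.chunks = deque()     # (timestamp, audio/video chunk) pairs

    def add_chunk(self, chunk, now=None):
        now = now or time.time()
        self.chunks.append((now, chunk))
        # Drop anything older than the trailing window.
        while self.chunks and now - self.chunks[0][0] > WINDOW_SECONDS:
            self.chunks.popleft()

    def end(self, now=None):
        self.ended_at = now or time.time()

class LocalChatStore:
    def __init__(self):
        self.sessions = deque(maxlen=MAX_SESSIONS)  # older sessions fall off

    def finish_session(self, session):
        session.end()
        self.sessions.append(session)

    def reportable_sessions(self, now=None):
        """Sessions still eligible to attach to an abuse report."""
        now = now or time.time()
        return [s for s in self.sessions
                if now - s.ended_at <= REPORT_DEADLINE]
```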

Nintendo is extremely clear that the purpose of its recording and review system is “to protect GameChat users, especially minors” and “to support our ability to uphold our Community Guidelines.” This kind of human moderator review of chats is pretty common in the gaming world and can even apply to voice recordings made by various smart home assistants. […] Overall, the time-limited, local-unless-reported recordings Nintendo makes here seem like a minimal intrusion on the average GameChat user’s privacy. Still, if you’re paranoid about Nintendo potentially seeing and hearing what’s going on in your living room, it’s good to at least be aware of it.

Source: Nintendo Warns Switch 2 GameChat Users: ‘Your Chat Is Recorded’ (arstechnica.com)

The US Is Storing Migrant Children’s DNA in a Criminal Database

The United States government has collected DNA samples from upwards of 133,000 migrant children and teenagers—including at least one 4-year-old—and uploaded their genetic data into a national criminal database used by local, state, and federal law enforcement, according to documents reviewed by WIRED. The records, quietly released by the US Customs and Border Protection earlier this year, offer the most detailed look to date at the scale of CBP’s controversial DNA collection program. They reveal for the first time just how deeply the government’s biometric surveillance reaches into the lives of migrant children, some of whom may still be learning to read or tie their shoes—yet whose DNA is now stored in a system originally built for convicted sex offenders and violent criminals.

[…]

Spanning from October 2020 through the end of 2024, the records show that CBP swabbed the cheeks of between 829,000 and 2.8 million people, with experts estimating that the true figure, excluding duplicates, is likely well over 1.5 million. That number includes as many as 133,539 children and teenagers. These figures mark a sweeping expansion of biometric surveillance—one that explicitly targets migrant populations, including children.

[…]

Under current rules, DNA is generally collected from anyone who is also fingerprinted. According to DHS policy, 14 is the minimum age at which fingerprinting becomes routine.

[…]

“Taking DNA from a 4-year old and adding it into CODIS flies in the face of any immigration purpose,” she says, adding, “That’s not immigration enforcement. That’s genetic surveillance.”

Multiple studies show no link between immigration and increased crime.

In 2024, Glaberson coauthored a report called “Raiding the Genome” that was the first to try to quantify DHS’s 2020 expansion of DNA collection. It found that if DHS continues to collect DNA at the rate the agency itself projects, one-third of the DNA profiles in CODIS by 2034 will have been taken by DHS, and seemingly without any real due process—the protections that are supposed to be in place before law enforcement compels a person to hand over their most sensitive information.

Regeneron to Acquire all 23andMe genetic data for $256m

23andMe Holding Co. (“23andMe” or the “Company”) (OTC: MEHCQ), a leading human genetics and biotechnology company, today announced that it has entered into a definitive agreement for the sale of 23andMe to Regeneron Pharmaceuticals, Inc. (“Regeneron”) (NASDAQ: REGN), a leading U.S.-based, NASDAQ-listed biotechnology company that invents, develops and commercializes life-transforming medicines for people with serious diseases. The agreement includes Regeneron’s commitment to comply with the Company’s privacy policies and applicable law, process all customer personal data in accordance with the consents, privacy policies and statements, terms of service, and notices currently in effect and have security controls in place designed to protect such data.

[…]

Under the terms of the agreement, Regeneron will acquire substantially all of the assets of the Company, including the Personal Genome Service (PGS), Total Health and Research Services business lines, for a purchase price of $256 million. The agreement does not include the purchase of the Company’s Lemonaid Health subsidiary, which the Company plans to wind down in an orderly manner, subject to and in accordance with the agreement.

[…]

 

Source: Regeneron, A Leading U.S. Biotechnology Company, to Acquire

New Orleans police secretly used facial recognition on over 200 live camera feeds

New Orleans’ police force secretly used constant facial recognition to seek out suspects for two years. An investigation by The Washington Post discovered that the city’s police department was using facial recognition technology on a privately owned camera network to continually look for suspects. This appears to violate a city ordinance passed in 2022, which allowed the NOLA police to use facial recognition only to search for specific suspects of violent crimes and required them to provide details about the scans’ use to the city council. However, WaPo found that officers did not reveal their reliance on the technology in the paperwork for several arrests where facial recognition was used, and none of those cases were included in mandatory city council reports.

“This is the facial recognition technology nightmare scenario that we have been worried about,” said Nathan Freed Wessler, an ACLU deputy director. “This is the government giving itself the power to track anyone — for that matter, everyone — as we go about our lives walking around in public.” Wessler added that this is the first known case in a major US city where police used AI-powered automated facial recognition to identify people in live camera feeds for the purpose of making immediate arrests.

Police use and misuse of surveillance technology have been thoroughly documented over the years. Although several US cities and states have placed restrictions on how law enforcement can use facial recognition, those limits won’t do anything to protect privacy if they’re routinely ignored by officers.

Read the full story on the New Orleans PD’s surveillance program at The Washington Post.

Source: New Orleans police secretly used facial recognition on over 200 live camera feeds

FBI Director Kash Patel Abruptly Closes Internal Watchdog Office Overseeing Surveillance Compliance

If there’s one thing the Federal Bureau of Investigation does well, it’s mass surveillance. Several years ago, then attorney general William Barr established an internal office to curb the FBI’s abuse of one controversial surveillance law. But recently, the FBI’s long-time hater (and, ironically, current director) Kash Patel shut down the watchdog group with no explanation.

On Tuesday, the New York Times reported that Patel suddenly closed the Office of Internal Auditing that Barr created in 2020. The office’s leader, Cindy Hall, abruptly retired. People familiar with the matter told the outlet that the closure of the watchdog group, along with that of the Office of Integrity and Compliance, is part of an internal reorganization. Sources also reportedly said that Hall was trying to expand the office’s work, but her attempts to onboard new employees were stopped by the Trump administration’s hiring freezes.

The Office of Internal Auditing was a response to controversy surrounding the FBI’s use of Section 702 of the Foreign Intelligence Surveillance Act. The 2008 law primarily addresses surveillance of non-Americans abroad. However, Jeramie Scott, senior counselor at the Electronic Privacy Information Center, told Gizmodo via email that the FBI “has repeatedly abused its ability to search Americans’ communications ‘incidentally’ collected under Section 702” to conduct warrantless spying.

Patel has not released any official comment regarding his decision to close the office. But Elizabeth Goitein, senior director at the Brennan Center for Justice, told Gizmodo via email, “It is hard to square this move with Mr. Patel’s own stated concerns about the FBI’s use of Section 702.”

Last year, Congress reauthorized Section 702 despite mounting concerns over its misuses. Although Congress introduced some reforms, the updated legislation actually expanded the government’s surveillance capabilities. At the time, Patel slammed the law’s passage, stating that former FBI director Christopher Wray, who Patel once tried to sue, “was caught last year illegally using 702 collection methods against Americans 274,000 times.” (Per the New York Times, Patel is likely referencing a declassified 2023 opinion by the FISA court that used the Office of Internal Auditing’s findings to determine the FBI made 278,000 bad queries over several years.)

According to Goitein, the office has “played a key role in exposing FBI abuses of Section 702, including warrantless searches for the communication of members of Congress, judges, and protesters.” And ironically, Patel inadvertently drove its creation after attacking the FBI’s FISA applications to wiretap a former Trump campaign advisor in 2018 while investigating potential Russian election interference. Trump and his supporters used Patel’s attacks to push their own narrative dismissing any concerns. Last year, former representative Devin Nunes, who is now CEO of Truth Social, said Patel was “instrumental” to uncovering the “hoax and finding evidence of government malfeasance.”

Although Patel mostly peddled conspiracies, the Justice Department conducted a probe into the FBI’s investigation that raised concerns over “basic and fundamental errors” it committed. In response, Barr created the Office of Internal Auditing, stating, “What happened to the Trump presidential campaign and his subsequent Administration after the President was duly elected by the American people must never happen again.”

But since taking office, Patel has changed his tune about FISA. During his confirmation hearing, Patel referred to Section 702 as a “critical tool” and said, “I’m proud of the reforms that have been implemented and I’m proud to work with Congress moving forward to implement more.” However, reforms don’t mean much by themselves. As Goitein noted, “Without a separate office dedicated to surveillance compliance, [the FBI’s] abuses could go unreported and unchecked.”

[…]

Source: FBI Director Kash Patel Abruptly Closes Internal Watchdog Office Overseeing Surveillance Compliance

Russia to enforce location tracking app on all foreigners in Moscow

The Russian government has introduced a new law that makes installing a tracking app mandatory for all foreign nationals in the Moscow region.

The new proposal was announced by the chairman of the State Duma, Vyacheslav Volodin, who presented it as a measure to tackle migrant crimes.

“The adopted mechanism will allow, using modern technologies, to strengthen control in the field of migration and will also contribute to reducing the number of violations and crimes in this area,” stated Volodin.

Using a mobile application that all foreigners will have to install on their smartphones, the Russian state will receive the following information:

  • Residence location
  • Fingerprint
  • Face photograph
  • Real-time geo-location monitoring

“If migrants change their actual place of residence, they will be required to inform the Ministry of Internal Affairs (MVD) within three working days,” the high-ranking politician explained.

The measures will not apply to diplomats of foreign countries or citizens of Belarus.

Foreigners who attempt to evade the new law’s requirements will be added to a registry of monitored individuals and deported from Russia.

Russian internet freedom observatory Roskomsvoboda’s reactions to this proposal reflect skepticism and concern.

Lawyer Anna Minushkina noted that the proposal violates Articles 23 and 24 of the Russian Constitution, guaranteeing the right to privacy.

President of the Uzbek Community in Moscow, Viktor Teplyankov, characterized the initiative as “ill-conceived and difficult to implement,” expressing doubts about its feasibility.

Finally, PSP Foundation’s Andrey Yakimov warned that such aggressive measures are bound to deter potential labor migrants, creating a different problem in the country.

The proposal hasn’t reached its final form yet; specifics, such as what happens if a device is lost or stolen and other technical or practical obstacles, will be addressed in upcoming meetings between the Ministry and regional authorities.

The mass-surveillance experiment will run until September 2029, and if deemed successful, the mechanism will extend to cover more parts of the country.

Source: Russia to enforce location tracking app on all foreigners in Moscow

Google found not compliant with GDPR when registering new accounts – shares data with more than 70 of its services without user knowledge

According to a ruling by the Berlin Regional Court, Google must disclose to its users which of its more than 70 services process their data when they register for an account. The civil chamber thus upheld a lawsuit filed by the German Association of Consumer Organizations (vzbv). The consumer advocates had complained that neither the “express personalization” nor the alternative “manual personalization” complied with the legal requirements of the European General Data Protection Regulation (GDPR).

The ruling against Google Ireland Ltd. was handed down on March 25, 2025, but was only published on Friday (case number 15 O 472/22). The decision is not yet legally binding because the internet company has appealed the ruling. Google stated that it disagrees with the Regional Court’s decision.

What does Google process data for?

The consumer advocates argued that consumers must know what Google processes their data for when registering. Users must be able to freely decide how their data is processed. The judges at the Berlin Regional Court confirmed this legal opinion. The ruling states: “In this case, transparency is lacking simply because the defendant does not provide information about the individual Google services, Google apps, Google websites, or Google partners for which the data is to be used.” For this reason, the scope of consent is completely unknown to the user.

Google: Account creation has changed

Google stated that the ruling concerned an old account creation process that had since been changed. “What hasn’t changed is our commitment to enabling our users to use Google on their terms, with clear choices and control options based on extensive research, testing, and guidelines from European data protection authorities,” it stated. In the proceedings, Google argued that listing all services would result in excessively long text and harm transparency. This argument was rejected by the court. In the court’s view, information about the scope of consent is among the minimum details required by law. The regional court was particularly concerned that with “Express Personalization,” users only had the option of consenting to all data usage or canceling the process. A differentiated refusal was not possible. Even with “Manual Personalization,” consumers could not refuse the use of the German location.

Source: Landgericht Berlin: Google-Accounterstellung verletzte DSGVO | heise online

Google will pay Texas $1.4B to settle claims the company collected users’ data without permission

[…] “For years, Google secretly tracked people’s movements, private searches, and even their voiceprints and facial geometry through their products and services. I fought back and won.”

The agreement settles several claims Texas made against the search giant in 2022 related to geolocation, incognito searches and biometric data. The state argued Google was “unlawfully tracking and collecting users’ private data.”

Paxton claimed, for example, that Google collected millions of biometric identifiers, including voiceprints and records of face geometry, through such products and services as Google Photos and Google Assistant.

Google spokesperson José Castañeda said the agreement settles an array of “old claims,” some of which relate to product policies the company has already changed.

[…]

Texas previously reached two other key settlements with Google within the last two years, including one in December 2023 in which the company agreed to pay $700 million and make several other concessions to settle allegations that it had been stifling competition against its Android app store.

Meta has also agreed to a $1.4 billion settlement with Texas in a privacy lawsuit over allegations that the tech giant used users’ biometric data without their permission.

Source: Google will pay Texas $1.4B to settle claims the company collected users’ data without permission | AP News

US senator introduces bill calling for location-tracking on AI chips to limit China access

A U.S. senator introduced a bill on Friday that would direct the Commerce Department to require location verification mechanisms for export-controlled AI chips, in an effort to curb China’s access to advanced semiconductor technology.
Called the “Chip Security Act,” the bill calls for AI chips under export regulations, and products containing those chips, to be fitted with location-tracking systems to help detect diversion, smuggling or other unauthorized use of the product.
“With these enhanced security measures, we can continue to expand access to U.S. technology without compromising our national security,” Republican Senator Tom Cotton of Arkansas said.
The bill also calls for companies exporting the AI chips to report to the Bureau of Industry and Security if their products have been diverted away from their intended location or subject to tampering attempts.
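Neither the bill summary nor the article spells out how “location verification” would actually work. One mechanism researchers have floated for export-controlled hardware is delay-based verification: a trusted server sends a signed challenge to the chip, and because the reply cannot travel faster than light, the round-trip time puts a hard ceiling on how far away the chip can be. A back-of-the-envelope sketch, with all numbers illustrative:

```python
# Delay-based location bounding: a signed challenge/response round trip
# cannot travel faster than light, so the measured RTT caps the distance
# between the verification server and the chip. Purely illustrative; the
# Chip Security Act does not prescribe this (or any specific) mechanism.

SPEED_OF_LIGHT_KM_PER_MS = 299.792  # ~300 km per millisecond in vacuum

def max_distance_km(rtt_ms: float, processing_ms: float = 0.0) -> float:
    """Upper bound on one-way distance implied by a round-trip time."""
    one_way_ms = max(rtt_ms - processing_ms, 0.0) / 2.0
    return one_way_ms * SPEED_OF_LIGHT_KM_PER_MS

# Example: a 40 ms round trip (minus 5 ms of assumed on-chip signing time)
# means the chip can be at most ~5,250 km from the server -- enough to tell
# "somewhere in North America" from "somewhere in East Asia", but nowhere
# near enough to pinpoint a street address.
if __name__ == "__main__":
    print(round(max_distance_km(40, processing_ms=5)))  # ~5246
```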
The move comes days after U.S. President Donald Trump said he would rescind and modify a Biden-era rule that curbed the export of sophisticated AI chips with the goal of protecting U.S. leadership in AI and blocking China’s access.
U.S. Representative Bill Foster, a Democrat from Illinois, also plans to introduce a bill on similar lines in the coming weeks, Reuters reported on Monday.
Restricting China’s access to AI technology that could enhance its military capabilities has been a key focus for U.S. lawmakers, amid reports of widespread smuggling of Nvidia’s (NVDA.O) chips. […]

Source: US senator introduces bill calling for location-tracking on AI chips to limit China access | Reuters

Of course it adds another layer of the US government spying on you if you want to buy a graphics card too. I’m not sure how anyone being able to track all your PCs does not compromise national security.

Neurotech companies are selling your brain data, senators warn

Three Democratic senators are sounding the alarm over brain-computer interface (BCI) technologies’ ability to collect — and potentially sell — our neural data. In a letter to the Federal Trade Commission (FTC), Sens. Chuck Schumer (D-NY), Maria Cantwell (D-WA), and Ed Markey (D-MA) called for an investigation into neurotechnology companies’ handling of user data, and for tighter regulations on their data-sharing policies.

“Unlike other personal data, neural data — captured directly from the human brain — can reveal mental health conditions, emotional states, and cognitive patterns, even when anonymized,” the letter reads. “This information is not only deeply personal; it is also strategically sensitive.”

While the concept of neural technologies may conjure up images of brain implants like Elon Musk’s Neuralink, there are far less invasive — and less regulated — neurotech products on the market, including headsets that help people meditate, purportedly trigger lucid dreaming, and promise to help users with online dating by helping them swipe through apps “based on your instinctive reaction.” These consumer products gobble up insights about users’ neurological data — and since they aren’t categorized as medical devices, the companies behind them aren’t barred from sharing that data with third parties.

“Neural data is the most private, personal, and powerful information we have—and no company should be allowed to harvest it without transparency, ironclad consent, and strict guardrails. Yet companies are collecting it with vague policies and zero transparency,” Schumer told The Verge via email.

The letter cites a 2024 report by the Neurorights Foundation, which found that most neurotech companies not only have few safeguards on user data but also have the ability to share sensitive information with third parties. The report looked at the data policies of 30 consumer-facing BCI companies and found that all but one “appear to have access to” users’ neural data, “and provide no meaningful limitations to this access.” The Neurorights Foundation only surveyed companies whose products are available to consumers without the help of a medical professional; implants like those made by Neuralink weren’t among them.

The companies surveyed by the Neurorights Foundation make it difficult for users to opt out of having their neurological data shared with third parties. Just over half the companies mentioned in the report explicitly let consumers revoke consent for data processing, and only 14 of the 30 give users the ability to delete their data. In some instances, user rights aren’t universal — for example, some companies only let users in the European Union delete their data but don’t grant the same rights to users elsewhere in the world.

To safeguard against potential abuses, the senators are calling on the FTC to:

  • investigate whether neurotech companies are engaging in unfair or deceptive practices that violate the FTC Act
  • compel companies to report on data handling, commercial practices, and third-party access
  • clarify how existing privacy standards apply to neural data
  • enforce the Children’s Online Privacy Protection Act as it relates to BCIs
  • begin a rulemaking process to establish safeguards for neural data and set limits on secondary uses like AI training and behavioral profiling
  • and ensure that both invasive and noninvasive neurotechnologies are subject to baseline disclosure and transparency standards, even when the data is anonymized

Though the senators’ letter calls out Neuralink by name, Musk’s brain implant tech is already subject to more regulations than other BCI technologies. Since Neuralink’s brain implant is considered a “medical” technology, it’s required to comply with the Health Insurance Portability and Accountability Act (HIPAA), which safeguards people’s medical data.

Stephen Damianos, the executive director of the Neurorights Foundation, said that HIPAA may not have entirely caught up to existing neurotechnologies, especially with regards to “informed consent” requirements.

“There are long-established and validated models for consent from the medical world, but I think there’s work to be done around understanding the extent to which informed consent is sufficient when it comes to neurotechnology,” Damianos told The Verge. “The analogy I like to give is, if you were going through my apartment, I would know what you would and wouldn’t find in my apartment, because I have a sense of what exactly is in there. But brain scans are overbroad, meaning they collect more data than what is required for the purpose of operating a device. It’s extremely hard — if not impossible — to communicate to a consumer or a patient exactly what can today and in the future be decoded from their neural data.”

Data collection becomes even trickier for “wellness” neurotechnology products, which don’t have to comply with HIPAA, even when they advertise themselves as helping with mental health conditions like depression and anxiety.

Damianos said there’s a “very hazy gray area” between medical devices and wellness devices.

“There’s this increasingly growing class of devices that are marketed for health and wellness as distinct from medical applications, but there can be a lot of overlap between those applications,” Damianos said. The dividing line is often whether a medical intermediary is required to help someone obtain a product, or whether they can “just go online, put in your credit card, and have it show up in a box a few days later.”

There are very few regulations on neurotechnologies advertised as being for “wellness.” In April 2024, Colorado passed the first-ever legislation protecting consumers’ neural data. The state updated its existing Consumer Protection Act, which protects users’ “sensitive data.” Under the updated legislation, “sensitive data” now includes “biological data” like biological, genetic, biochemical, physiological, and neural information. And in September, California amended its Consumer Privacy Act to protect neural data.

“We believe in the transformative potential of these technologies, and I think sometimes there’s a lot of doom and gloom about them,” Damianos told The Verge. “We want to get this moment right. We think it’s a really profound moment that has the potential to reshape what it means to be human. Enormous risks come from that, but we also believe in leveraging the potential to improve people’s lives.”

Source: Neurotech companies are selling your brain data, senators warn | The Verge

Employee monitoring app exposes 21M work screens to the internet

A surveillance tool meant to keep tabs on employees is leaking millions of real-time screenshots onto the open web.

Your boss watching your screen isn’t the end of the story. Everyone else might be watching, too. Researchers at Cybernews have uncovered a major privacy breach involving WorkComposer, a workplace surveillance app used by over 200,000 people across countless companies.

The app, designed to track productivity by logging activity and snapping regular screenshots of employees’ screens, left over 21 million images exposed in an unsecured Amazon S3 bucket, broadcasting how workers go about their day frame by frame.

[…]

WorkComposer is one of many time-tracking tools that have crept into modern work culture. Marketed as a way to keep teams “accountable,” the software logs keystrokes, tracks how long you spend on each app, and snaps desktop screenshots every few minutes.

The leak shows just how dangerous this setup becomes when basic security hygiene is ignored. A leak of this magnitude turns everyday work activity into a goldmine for cybercriminals.
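The root cause is an S3 bucket that answered requests from anyone on the internet. If you operate this kind of tooling, checking whether one of your own buckets is anonymously listable takes a few lines; a minimal sketch using boto3 (the bucket name is a placeholder):

```python
# Check whether an S3 bucket responds to unauthenticated listing requests.
# Run this only against buckets you own or are authorized to test.
import boto3
from botocore import UNSIGNED
from botocore.config import Config
from botocore.exceptions import ClientError

def is_publicly_listable(bucket: str) -> bool:
    # An unsigned client sends no credentials, so any successful response
    # means anonymous users can enumerate the bucket's contents.
    anon = boto3.client("s3", config=Config(signature_version=UNSIGNED))
    try:
        anon.list_objects_v2(Bucket=bucket, MaxKeys=1)
        return True
    except ClientError:
        # AccessDenied (or similar) means anonymous listing is blocked.
        return False

if __name__ == "__main__":
    print(is_publicly_listable("example-screenshot-bucket"))  # placeholder name
```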

[…]

The leak’s real-time nature only amplifies the danger, as threat actors could monitor unfolding business operations as they happen, giving them access to otherwise locked-down environments.

[…]

Source: Employee monitoring app exposes 21M work screens | Cybernews

Can you imagine the time lost by employers going over all this data?

Microsoft Windows’ new built-in spying tool, Microsoft Recall, is a really great idea too. Not.

A Data Breach at Yale New Haven Health Compromised 5.5 Million Patients’ Information

[…] Yale New Haven Health (YNHHS), a massive nonprofit healthcare network in Connecticut. Hackers stole the data of more than 5.5 million individuals during an attack in March 2025.

[…]

According to a public notice on the YNHHS website, the organization discovered “unusual activity” on its system on March 8, 2025, which was later identified as unauthorized third-party access that allowed bad actors to copy certain patient data. While the information stolen varies by individual, it may include the following:

  • Name
  • Date of birth
  • Address
  • Phone number
  • Email address
  • Race
  • Ethnicity
  • Social Security number
  • Patient type
  • Medical record number

YNHHS says the breach did not include access to medical records, treatment information, or financial data (such as account and payment information).

[…]

Source: A Data Breach at Yale New Haven Health Compromised 5.5 Million Patients’ Information | Lifehacker

Wait – race and ethnicity?!

Perplexity CEO says its browser will track everything users do online to sell ‘hyper personalized’ ads

CEO Aravind Srinivas said this week on the TBPN podcast that one reason Perplexity is building its own browser is to collect data on everything users do outside of its own app. This is so it can sell premium ads.

“That’s kind of one of the other reasons we wanted to build a browser, is we want to get data even outside the app to better understand you,” Srinivas said. “Because some of the prompts that people do in these AIs is purely work-related. It’s not like that’s personal.”

And work-related queries won’t help the AI company build an accurate-enough dossier.

“On the other hand, what are the things you’re buying; which hotels are you going [to]; which restaurants are you going to; what are you spending time browsing, tells us so much more about you,” he explained.

Srinivas believes that Perplexity’s browser users will be fine with such tracking because the ads should be more relevant to them.

“We plan to use all the context to build a better user profile and, maybe you know, through our discover feed we could show some ads there,” he said.

The browser, named Comet, suffered setbacks but is on track to be launched in May, Srinivas said.

He’s not wrong, of course. Quietly following users around the internet helped Google become the roughly $2 trillion market cap company it is today.

[…]

Meta’s ad tracking technology, Pixels, which is embedded on websites across the internet, is how Meta gathers data, even on people that don’t have Facebook or Instagram accounts. Even Apple, which has marketed itself as a privacy protector, can’t resist tracking users’ locations to sell advertising in some of its apps by default.

On the other hand, this kind of thing has led people across the political spectrum in the U.S. and in Europe to distrust big tech.

The irony of Srinivas openly explaining his browser-tracking ad-selling ambitions this week also can’t be overstated.

Google is currently in court fighting the U.S. Department of Justice, which has alleged Google behaved in monopolistic ways to dominate search and online advertising. The DOJ wants the judge to order Google to divest Chrome.

Both OpenAI and Perplexity — not surprisingly, given Srinivas’ reasons — said they would buy the Chrome browser business if Google was forced to sell.

Source: Perplexity CEO says its browser will track everything users do online to sell ‘hyper personalized’ ads | TechCrunch

Yup, so even if Mozilla is making Firefox more invasive, it’s still better than these guys.

Blue Shield of California Exposed the Data of 4.7 Million People to Google for targeted advertising

Blue Shield of California shared the protected health information of 4.7 million individuals with Google over a nearly three-year period, a data breach that impacts the majority of its nearly 6 million members, according to reporting from Bleeping Computer.

This isn’t the only large data breach to affect a healthcare organization in the last year alone. Community Health Center records were hacked in October 2024, compromising more than a million individuals’ data, along with an attack on lab testing company Lab Services Cooperative, which affected records of 1.6 million Planned Parenthood patients. UnitedHealth Group suffered a breach in February 2024, resulting in the leak of more than 100 million people’s data.

What happened with Blue Shield of California?

According to an April 9 notice posted on Blue Shield of California’s website, the company allowed certain data, including protected health information, to be shared with Google Ads through Google Analytics, which may have allowed Google to serve targeted ads back to members. While not discovered until Feb. 11, 2025, the leak occurred for several years, from April 2021 to January 2024, when the connection between Google Analytics and Google Ads was severed on Blue Shield websites.
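The mechanism here is mundane: web analytics tags typically forward the full URL of every page view, and on a health portal the URL itself often carries the sensitive part, such as a “Find a Doctor” search. A generic illustration in Python; this is not Blue Shield’s or Google’s actual integration, and the endpoint and field names are invented:

```python
from urllib.parse import urlparse, parse_qs

# Hypothetical analytics "hit" built the way many tag integrations work:
# the full page URL, query string included, is forwarded to the analytics
# or ad backend alongside a persistent client identifier.
def build_analytics_hit(page_url: str, client_id: str) -> dict:
    return {
        "endpoint": "https://analytics.example.com/collect",  # placeholder
        "client_id": client_id,        # links this hit to an ad profile
        "page_location": page_url,     # leaks whatever the URL contains
    }

member_page = ("https://member.example-healthplan.com/find-a-doctor"
               "?specialty=oncology&zip=94110&plan=SilverPPO")
hit = build_analytics_hit(member_page, client_id="123456789.987654321")

# The "page_location" field now carries the member's search for an
# oncologist near a specific zip code, which is exactly the kind of data
# the Blue Shield notice says reached Google Ads.
print(parse_qs(urlparse(hit["page_location"]).query))
# {'specialty': ['oncology'], 'zip': ['94110'], 'plan': ['SilverPPO']}
```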

The following Blue Shield member information may have been compromised:

  • Insurance plan name, type, and group number
  • City and zip code
  • Gender
  • Family size
  • Blue Shield assigned identifiers for online accounts
  • Medical claim service date and provider
  • Patient name
  • Patient financial responsibility
  • “Find a Doctor” search criteria and results

According to the notice, no additional personal data—Social Security numbers, driver’s license numbers, and banking and credit card information—were disclosed. Blue Shield also states that no bad actor was involved, nor have they confirmed that the information has been used maliciously.

[…]

Source: Blue Shield of California Exposed the Data of 4.7 Million People to Google | Lifehacker

Discord Wants Your Face: Begins Testing Facial Scans for Age Verification

Discord has begun requiring some users in the United Kingdom and Australia to verify their age through a facial scan before being permitted to access sensitive content. The chat app’s new process has been described as an “experiment,” and comes in response to laws passed in those countries that place guardrails on youth access to online platforms. Discord has also been the target of concerns that it does not sufficiently protect minors from sexual content.

Users may be asked to verify their age when encountering content that has been flagged by Discord’s systems as being sensitive in nature, or when they change their settings to enable access to sensitive content. The app will ask users to scan their face through a computer or smartphone webcam; alternatively, they can scan a driver’s license or other form of ID.

[…]

Source: Discord Begins Testing Facial Scans for Age Verification

Age verification is impossible to do correctly, incredibly privacy-invasive, and a really tempting target for hackers. The UK, Australia, and every other country considering age verification are seriously endangering their citizens.

Fortunately you can always hold up a picture from a magazine in front of the webcam.

Your TV is watching you better: LG TVs’ integrated ads get more personal with tech that analyzes viewer emotions

LG TVs will soon leverage an AI model built for showing advertisements that more closely align with viewers’ personal beliefs and emotions. The company plans to incorporate a partner company’s AI tech into its TV software in order to interpret psychological factors impacting a viewer, such as personal interests, personality traits, and lifestyle choices. The aim is to show LG webOS users ads that will emotionally impact them.

The upcoming advertising approach comes via a multi-year licensing deal with Zenapse, a company describing itself as a software-as-a-service marketing platform that can drive advertiser sales “with AI-powered emotional intelligence.” LG will use Zenapse’s technology to divide webOS users into hyper-specific market segments that are supposed to be more informative to advertisers. LG Ad Solutions, LG’s advertising business, announced the partnership on Tuesday.

The technology will be used to inform ads shown on LG smart TVs’ homescreens, free ad-supported TV (FAST) channels, and elsewhere throughout webOS, per StreamTV Insider. LG will also use Zenapse’s tech to “expand new software development and go-to-market products,” it said. LG didn’t specify the duration of its licensing deal with Zenapse.

[…]

With all this information, ZenVision will group LG TV viewers into highly specified market segments, such as “goal-driven achievers,” “social connectors,” or “emotionally engaged planners,” an LG spokesperson told StreamTV Insider. Zenapse’s website for ZenVision points to other potential market segments, including “digital adopters,” “wellness seekers,” “positive impact & environment,” and “money matters.”

Companies paying to advertise on LG TVs can then target viewers based on the ZenVision-specified market segments and deliver an “emotionally intelligent ad,” as Zenapse’s website puts it.

This type of targeted advertising aims to bring advertisers more in-depth information about TV viewers than demographic data or even contextual advertising (which shows ads based on what the viewer is watching) via psychographic data. Demographic data gives advertisers viewer information, like location, age, gender, ethnicity, marital status, and income. Psychographic data is supposed to go deeper and allow advertisers to target people based on so-called psychological factors, like personal beliefs, values, and attitudes. As Salesforce explains, “psychographic segmentation delves deeper into their psyche” than relying on demographic data.

[…]

With their ability to track TV viewers’ behavior, including what they watch and search for on their TVs, smart TVs are a growing obsession for advertisers. As LG’s announcement pointed out, CTVs represent “one of the fastest-growing ad segments in the US, expected to reach over $40 billion by 2027, up from $24.6 billion in 2023.”

However, as advertisers’ interest in appealing to streamers grows, so do their efforts to track and understand viewers for more targeted advertising. Both efforts could end up pushing the limits of user comfort and privacy.

[…]

 

Source: LG TVs’ integrated ads get more personal with tech that analyzes viewer emotions – Ars Technica

An LG TV is not exactly a cheap thing. I am paying for the whole product, not for a service. I bought a TV, not a marketing department.

Apple to Spy on User Emails and other Data on Devices to Bolster AI Technology

Apple Inc. will begin analyzing data on customers’ devices in a bid to improve its artificial intelligence platform, a move designed to safeguard user information while still helping it catch up with AI rivals.

Today, Apple typically trains AI models using synthetic data — information that’s meant to mimic real-world inputs without any personal details. But that synthetic information isn’t always representative of actual customer data, making it harder for its AI systems to work properly.

The new approach will address that problem while ensuring that user data remains on customers’ devices and isn’t directly used to train AI models. The idea is to help Apple catch up with competitors such as OpenAI and Alphabet Inc., which have fewer privacy restrictions.

The technology works like this: It takes the synthetic data that Apple has created and compares it to a recent sample of user emails within the iPhone, iPad and Mac email app. By using actual emails to check the fake inputs, Apple can then determine which items within its synthetic dataset are most in line with real-world messages.

These insights will help the company improve text-related features in its Apple Intelligence platform, such as summaries in notifications, the ability to synthesize thoughts in its Writing Tools, and recaps of user messages.
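Put differently, the email never leaves the device; only a signal about which synthetic candidate it most resembles does. A rough sketch of that selection step, assuming a placeholder embed() function in place of whatever text encoder Apple actually uses:

```python
import math
from collections import Counter

def embed(text: str) -> list[float]:
    # Placeholder embedding: a real system would use a learned text encoder.
    # Here we just hash words into a small fixed-size bag-of-words vector.
    vec = [0.0] * 64
    for word in text.lower().split():
        vec[hash(word) % 64] += 1.0
    return vec

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def pick_closest_synthetic(local_emails: list[str],
                           synthetic_candidates: list[str]) -> Counter:
    """For each on-device email, vote for the most similar synthetic message.
    Only the votes (candidate indices) would ever leave the device, and in
    Apple's description those are further protected with differential privacy.
    """
    votes = Counter()
    cand_vecs = [embed(c) for c in synthetic_candidates]
    for email in local_emails:
        e = embed(email)
        best = max(range(len(cand_vecs)), key=lambda i: cosine(e, cand_vecs[i]))
        votes[best] += 1
    return votes
```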

[…]

The company will roll out the new system in an upcoming beta version of iOS and iPadOS 18.5 and macOS 15.5. A second beta test of those upcoming releases was provided to developers earlier on Monday.

[…]

Already, the company has relied on a technology called differential privacy to help improve its Genmoji feature, which lets users create a custom emoji. It uses that system to “identify popular prompts and prompt patterns, while providing a mathematical guarantee that unique or rare prompts aren’t discovered,” the company said in the blog post.

The idea is to track how the model responds in situations where multiple users have made the same request — say, asking for a dinosaur carrying a briefcase — and improving the results in those cases.
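Differential privacy in this setting is essentially noisy counting: each device passes its report through a randomizer, so the server can estimate which prompts are popular without being able to trust, or reconstruct, any single report. A toy sketch of k-ary randomized response and the matching debiasing step (the parameters are illustrative, not Apple’s):

```python
import random
from collections import Counter

def randomize_report(true_bucket: int, k: int, p_truth: float = 0.75) -> int:
    """Report the true prompt bucket with probability p_truth, otherwise a
    uniformly random bucket. No single report reveals the true value."""
    if random.random() < p_truth:
        return true_bucket
    return random.randrange(k)

def estimate_counts(reports: list[int], k: int,
                    p_truth: float = 0.75) -> list[float]:
    """Unbiased estimate of the true bucket counts from randomized reports:
    E[observed_j] = true_j * p_truth + n * (1 - p_truth) / k."""
    n = len(reports)
    observed = Counter(reports)
    noise_per_bucket = n * (1 - p_truth) / k
    return [(observed[j] - noise_per_bucket) / p_truth for j in range(k)]

# Example: 10,000 devices, 5 prompt buckets, bucket 2 genuinely popular.
random.seed(0)
true = [2] * 6000 + [0] * 1000 + [1] * 1000 + [3] * 1000 + [4] * 1000
reports = [randomize_report(t, k=5) for t in true]
print([round(c) for c in estimate_counts(reports, k=5)])
# Roughly recovers [1000, 1000, 6000, 1000, 1000] despite the per-report noise.
```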

The features are only for users who have opted in to device analytics and product improvement capabilities. Those options are managed in the Privacy and Security tab within the Settings app on the company’s devices.

[…]

Source: Apple to Analyze User Data on Devices to Bolster AI Technology

UK Effort to Keep Apple Encryption Fight Secret Is Blocked

A court has blocked a British government attempt to keep secret a legal case over its demand to access Apple Inc. user data in a victory for privacy advocates.

The UK Investigatory Powers Tribunal, a special court that handles cases related to government surveillance, said the authorities’ efforts were a “fundamental interference with the principle of open justice” in a ruling issued on Monday.

The development comes after it emerged in January that the British government had served Apple with a demand to circumvent encryption that the company uses to secure user data stored in its cloud services.

Apple challenged the request, while taking the unprecedented step of removing its advanced data protection feature for its British users. The government had sought to keep details about the demand — and Apple’s challenge of it — from being publicly disclosed.

[…]

Source: UK Effort to Keep Apple Encryption Fight Secret Is Blocked

EU: These are scary times – let’s backdoor encryption and make everyone unsafe!

The EU has shared its plans to ostensibly keep the continent’s denizens secure – and among the pages of bureaucratese are a few worrying sections that indicate the political union wants to backdoor encryption by 2026, or even sooner.

While the superstate has made noises about backdooring encryption before, the ProtectEU plan [PDF], launched on Monday, says the European Commission wants to develop a roadmap to allow “lawful and effective access to data for law enforcement in 2025” and a technology roadmap to do so by the following year.

“We are working on a roadmap now, and we will look at what is technically also possible,” said Henna Virkkunen, executive vice-president of the EC for tech sovereignty, security and democracy. “The problem is now that our law enforcement, they have been losing ground on criminals because our police investigators, they don’t have access to data,” she added.

“Of course, we want to protect the privacy and cyber security at the same time; and that’s why we have said here that now we have to prepare a technical roadmap to watch for that, but it’s something that we can’t tolerate, that we can’t take care of the security because we don’t have tools to work in this digital world.”

She claimed that in “85 percent” of police cases, law enforcement couldn’t access the data it needed. The proposal is to amend the existing Cybersecurity Act to allow these changes.

According to the document, the EC will set up a Security Research & Innovation Campus at its Joint Research Centre in 2026 to, somehow, work out the technical details. Since it’s impossible to backdoor encryption in a way that can’t be potentially exploited by others, it seems a very odd move to make if security’s your goal.

China, Russia, and the US certainly would spend a huge amount of time and money to find the backdoor. Even American law enforcement has given up on the cause of backdooring, although the UK still seems to be wedded to the idea.

In the meantime, for critical infrastructure (and presumably government communications), the EC wants to deploy quantum cryptography across the state. They want to get this in place by 2030 at the latest.

[…]

Source: EU: These are scary times – let’s backdoor encryption! • The Register

Proton may roll away from the Swiss

The EC’s not alone in proposing changes to privacy – new laws outlined in Switzerland could force privacy-focused groups such as Proton out of the country.

Under today’s laws, police can obtain data from services like Proton if they can get a court order for some crimes. But under the proposed laws a court order would not be required and that means Proton would leave the country, said cofounder Andy Yen.

“Swiss surveillance would be significantly stricter than in the US and the EU, and Switzerland would lose its competitiveness as a business location,” Proton’s cofounder told Swiss title Der Bund. “We feel compelled to leave Switzerland if the partial revision of the surveillance law planned by the Federal Council comes into force.”

The EU keeps banging away at this. They tried in 2018, 2020, 2021, 2023, 2024. And fortunately they keep getting stopped by people with enough brains to realise that you cannot have a safe backdoor. For security to be secure it needs to be unbreakable.

https://www.linkielist.com/?s=eu+encryption

 

T-Mobile SyncUP Bug Reveals Names, Images, and Locations of Random Children

T-Mobile sells a little-known GPS service called SyncUP, which allows users who are parents to monitor the locations of their children. This week, an apparent glitch in the service’s system obscured the locations of users’ own children while sending them detailed information and the locations of other, random children.

404 Media first reported on the extremely creepy bug, which appears to have impacted a large number of users. The outlet notes an outpouring of consternation and concern from web users on social platforms like Reddit and X, many of whom claimed to have been impacted. 404 also interviewed one specific user, “Jenna,” who explained her ordeal with the bug:

Jenna, a parent who uses SyncUP to keep track of her three-year-old and six-year-old children, logged in Tuesday and instead of seeing if her kids had left school yet, was shown the exact, real-time locations of eight random children around the country, but not the locations of her own kids. 404 Media agreed to use a pseudonym for Jenna to protect the privacy of her kids.

“I’m not comfortable giving my six-year-old a phone, but he takes a school bus and I just want to be able to see where he is in real time,” Jenna said. “I had put a 500 meter boundary around his school, so I get an alert when he’s leaving.”

Jenna sent 404 Media a series of screenshots that show her logged into the app, as well as the locations of children located in other states. In the screenshots, the address-level location of the children are available, as is their name and the last time the location was updated.

Even more alarmingly, the woman interviewed by 404 claims that the company didn’t show much concern for the bug. “Jenna” says she called the company and was referred to an employee who told her that a ticket had been filed in the system on the issue’s behalf. A follow-up email from the concerned mother produced no response, she said.

[…]

When reached for comment by Gizmodo, a T-Mobile spokesperson told us: “Yesterday we fully resolved a temporary system issue with our SyncUP products that resulted from a planned technology update. We are in the process of understanding potential impacts to a small number of customers and will reach out to any as needed. We apologize for any inconvenience.”

The privacy implications of such a glitch are obvious and not really worth elaborating on. That said, it’s also a good reminder that the more digital access you give a company, the more potential there is for that access to fall into the wrong hands.
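For what it’s worth, the geofence feature Jenna describes (a 500-meter boundary around a school) is conceptually simple; the part T-Mobile apparently got wrong is authorization, not geometry. A minimal sketch of the boundary check, with placeholder coordinates:

```python
import math

EARTH_RADIUS_M = 6_371_000  # mean Earth radius in meters

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in meters."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2)
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

def left_geofence(fix, center, radius_m=500):
    """True when the latest location fix falls outside the boundary."""
    return haversine_m(fix[0], fix[1], center[0], center[1]) > radius_m

# Placeholder coordinates: a school and a location fix a few blocks away.
school = (40.7410, -73.9896)
latest_fix = (40.7455, -73.9840)
print(left_geofence(latest_fix, school))  # True -> send the "left school" alert
```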

Source: T-Mobile Bug Reveals Names, Images, and Locations of Random Children

Your TV is watching you watch and selling that data

[…] Your TV wants your data

The TV business traditionally included three distinct entities. There’s the hardware, namely the TV itself; the entertainment, like movies and shows; and the ads, usually just commercials that interrupt your movies and shows. In the streaming era, tech companies want to control all three, a setup also known as vertical integration. If, say, Roku makes the TV, supplies the content, and sells the ads, then it stands to control the experience, set the rates, and make the most money. That’s business!

Roku has done this very well. Although it was founded in 2002, Roku broke into the market in 2008 after Netflix invested $6 million in the company to make a set-top box that enabled any TV to stream Netflix content. It was literally called the Netflix Player by Roku. Over the course of the next 15 years, Roku would grow its hardware business to include streaming sticks, which are basically just smaller set-top boxes; wireless soundbars, speakers, and subwoofers; and after licensing its operating system to third-party TV makers, its own affordable, Roku-branded smart TVs.

[…]

The shift toward ad-supported everything has been happening across the TV landscape. People buy new TVs less frequently these days, so TV makers want to make money off the TVs they’ve already sold. Samsung has Samsung Ads, LG has LG Ad Solutions, Vizio has Vizio Ads, and so on and so forth. Tech companies, notably Amazon and Google, have gotten into the mix too, not only making software and hardware for TVs but also leveraging the massive amount of data they have on their users to sell ads on their TV platforms. These companies also sell data to advertisers and data brokers, all in the interest of knowing as much about you as possible so they can target you more effectively. That data could even be used to train AI.

[…]

Is it possible to escape the ads?

Breaking free from this ad prison is tough. Most TVs on the market today come with a technology called automatic content recognition (ACR) built in. This is basically Shazam for TV — Shazam itself helped popularize the tech — and gives smart TV platforms the ability to monitor what you’re watching by either taking screenshots or capturing audio snippets while you’re watching. (This happens at the signal level, not from actual microphone recordings from the TV.)
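Under the hood, ACR is fingerprint matching: the TV reduces frames or audio snippets to compact hashes and compares them against a reference database of known content. A toy version of the video side using an 8×8 average hash; real systems are far more robust, but the shape of the pipeline is the same:

```python
import numpy as np

def average_hash(frame_8x8: np.ndarray) -> int:
    """64-bit perceptual hash: 1 for each pixel brighter than the frame mean.
    `frame_8x8` is a downscaled 8x8 grayscale frame (values 0-255)."""
    bits = (frame_8x8 > frame_8x8.mean()).flatten()
    return int("".join("1" if b else "0" for b in bits), 2)

def hamming(a: int, b: int) -> int:
    return bin(a ^ b).count("1")

def identify(frame_8x8: np.ndarray, reference: dict[str, int], max_dist: int = 10):
    """Return the best-matching known title, or None if nothing is close."""
    h = average_hash(frame_8x8)
    title, dist = min(((t, hamming(h, rh)) for t, rh in reference.items()),
                      key=lambda x: x[1])
    return title if dist <= max_dist else None

# Toy reference database of fingerprints for known content (placeholder data).
rng = np.random.default_rng(0)
known_frame = rng.integers(0, 256, (8, 8))
reference = {"Some Movie (placeholder)": average_hash(known_frame)}

# A slightly noisy capture of the same frame should still match.
captured = np.clip(known_frame + rng.integers(-5, 6, (8, 8)), 0, 255)
print(identify(captured, reference))  # expected: "Some Movie (placeholder)"
```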

Advertisers and TV companies use ACR tech to collect data about your habits that are otherwise hard to track, like if you watch live TV with an antenna. They use that data to build out a profile of you in order to better target ads. ACR also works with devices, like gaming consoles, that you plug into your TV through HDMI cables.

Yash Vekaria, a PhD candidate at UC Davis, called the HDMI spying “the most egregious thing we found” in his research for a paper published last year on how ACR technology works. And I have to admit that I had not heard of ACR until I came across Vekaria’s research.

[…]

Unfortunately, you don’t have much of a choice when it comes to ACR on your TV. You probably enabled the technology when you first set up your TV and accepted its privacy policy. If you refuse to do this, a lot of the functions on your TV won’t work. You can also accept the policy and then disable ACR on your TV’s settings, but that could disable certain features too. In 2017, Vizio settled a class-action lawsuit for tracking users by default. If you want to turn off this tracking technology, here’s a good guide from Consumer Reports that explains how for most types of smart TVs.

[…]

it does bug me, just on principle, that I have to let a tech company wiretap my TV in order to enjoy all of the device’s features.

[…]

Source: Roku’s Moana 2 controversy is part of a bigger ad problem | Vox

A Win for human rights: France Rejects Backdoor Mandate

In a moment of clarity after initially moving forward a deeply flawed piece of legislation, the French National Assembly has done the right thing: it rejected a dangerous proposal that would have gutted end-to-end encryption in the name of fighting drug trafficking. Despite heavy pressure from the Interior Ministry, lawmakers voted Thursday night (article in French) to strike down a provision that would have forced messaging platforms like Signal and WhatsApp to allow hidden access to private conversations.

The vote is a victory for digital rights, for privacy and security, and for common sense.

The proposed law was a surveillance wishlist disguised as anti-drug legislation. Tucked into its text was a resurrection of the widely discredited “ghost” participant model—a backdoor that pretends not to be one. Under this scheme, law enforcement could silently join encrypted chats, undermining the very idea of private communication. Security experts have condemned the approach, warning it would introduce systemic vulnerabilities, damage trust in secure communication platforms, and create tools ripe for abuse.

The French lawmakers who voted this provision down deserve credit. They listened—not only to French digital rights organizations and technologists, but also to basic principles of cybersecurity and civil liberties. They understood that encryption protects everyone, not just activists and dissidents, but also journalists, medical professionals, abuse survivors, and ordinary citizens trying to live private lives in an increasingly surveilled world.

A Global Signal

France’s rejection of the backdoor provision should send a message to legislatures around the world: you don’t have to sacrifice fundamental rights in the name of public safety. Encryption is not the enemy of justice; it’s a tool that supports our fundamental human rights, including the right to have a private conversation. It is a pillar of modern democracy and cybersecurity.

As governments in the U.S., U.K., Australia, and elsewhere continue to flirt with anti-encryption laws, this decision should serve as a model—and a warning. Undermining encryption doesn’t make society safer. It makes everyone more vulnerable.

[…]

Source: A Win for Encryption: France Rejects Backdoor Mandate | Electronic Frontier Foundation

China bans facial recognition without consent and in all public places. And it needs to be encrypted.

China’s Cyberspace Administration and Ministry of Public Security have outlawed the use of facial recognition without consent.

The two orgs last Friday published new rules on facial recognition and an explainer that spell out how orgs that want to use facial recognition must first conduct a “personal information protection impact assessment” that considers whether using the tech is necessary, impacts on individuals’ privacy, and risks of data leakage.

Organizations that decide to use facial recognition must encrypt biometric data, and audit the information security techniques and practices they use to protect facial scans.
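The encryption requirement itself is the straightforward part. A minimal sketch of encrypting a stored face template with a symmetric key, using Python’s cryptography library; key management, the genuinely hard part, is out of scope here:

```python
from cryptography.fernet import Fernet

# In practice the key would live in a KMS or HSM, never alongside the data.
key = Fernet.generate_key()
fernet = Fernet(key)

# A face template is typically a fixed-length feature vector; these bytes
# stand in for whatever the recognition model actually produces.
face_template = bytes(128)  # placeholder template

ciphertext = fernet.encrypt(face_template)   # store this at rest
restored = fernet.decrypt(ciphertext)        # decrypt only when matching
assert restored == face_template
```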

Chinese organizations that go through that process and decide they want to use facial recognition can only do so after securing individuals’ consent.

The rules also ban the use of facial recognition equipment in public places such as hotel rooms, public bathrooms, public dressing rooms, and public toilets.

The measures don’t apply to researchers or to what machine translation of the rules describes as “algorithm training activities” – suggesting images of citizens’ faces are fair game when used to train AI models.

The documents don’t mention whether government agencies are exempt from the new rules. The Register fancies Beijing will keep using facial recognition whenever it wants, as it has previously expressed interest in a national identity scheme that uses the tech, and has used it to identify members of ethnic minorities.

Source: China bans facial recognition in hotels, bathrooms • The Register

23andMe files for bankruptcy: How to delete your data before it’s sold off

23andMe has capped off a challenging few years by filing for Chapter 11 bankruptcy today. Given the uncertainty around the future of the DNA testing company and what will happen to all of the genetic data it has collected, now is a critical time for customers to protect their privacy. California Attorney General Rob Bonta has recommended that past customers of the genetic testing business delete their information as a precautionary measure. Here are the steps to deleting your records with 23andMe.

  1. Log into your 23andMe account.
  2. Go to the “Settings” tab of your profile.
  3. Click View on the section called “23andMe Data.”
  4. If you want to retain a copy for your own records, download your data now.
  5. Go to the “Delete Data” section.
  6. Click “Permanently Delete Data.”
  7. You will receive an email from 23andMe confirming the action. Click the link in that email to complete the process.

While the majority of an individual’s personal information will be deleted, 23andMe does keep some information for legal compliance. The details are in the company’s privacy policy.

There are a few other privacy-minded actions customers can take. First, anyone who opted to have 23andMe store their saliva and DNA can request that the sample be destroyed. That choice can be made from the Preferences tab of the account settings menu. Second, you can review whether you granted permission for your genetic data and sample to be used in scientific research. The allowance can also be checked, and revoked if you wish, from the account settings page; it’s listed under Research and Product Consents.

Source: How to delete your 23andMe data