Windows 11 is now automatically enabling OneDrive folder backup without asking permission

Microsoft has made OneDrive slightly more annoying for Windows 11 users. Quietly and without any announcement, the company changed Windows 11’s initial setup so that it turns on automatic folder backup without asking permission.

Now, those setting up a new Windows computer the way Microsoft wants them to (in other words, connected to the internet and signed into a Microsoft account) will get to their desktops with OneDrive already syncing the contents of folders like Desktop, Pictures, Documents, Music, and Videos. Depending on how much is stored there, you might end up with a desktop and other folders filled to the brim with shortcuts to various files right after finishing a clean Windows installation.

Automatic folder backup in OneDrive is a very useful feature when used properly and enabled deliberately by the user. However, Microsoft decided that sending a few notification prompts to enable folder backup was not enough, so it just turned the feature on without asking anybody or even letting users know, resulting in a flood of Reddit posts from users demanding to know what the hell those green checkmarks next to the files and shortcuts on their desktops are.

If you do not want your computer to back up everything on your desktop or other folders, here is how to turn the feature off (you can also set up Windows 11 in offline mode):

  1. Right-click the OneDrive icon in the system tray, click the gear icon, and then select Settings.
  2. Go to the “Sync and Backup” tab and click “Manage backup.”
  3. Turn off all the folders you do not want to back up in OneDrive and confirm the changes.
  4. If you have an older OneDrive version with the classic tabbed interface, go to the Backup tab and click Manage Backup > Stop backup > Stop backup.

Microsoft is no stranger to shady tricks with its software and operating system. Several months ago, we noticed that OneDrive would not let you close it without explaining the reason first (Microsoft later reverted that stupid change). A similar thing was also spotted in the Edge browser, with Microsoft asking users why they were downloading Chrome.

As a reminder, you can always just uninstall OneDrive and call it a day.

Source: Windows 11 is now automatically enabling OneDrive folder backup without asking permission – Neowin

Microsoft Account to local account conversion guide erased from official Windows 11 guide

Microsoft has been pushing hard for its users to sign into Windows with a Microsoft Account. The newest Windows 11 installer removed the easy bypass for the requirement that you create an account or log in with an existing one. If you installed Windows 11 with a Microsoft Account and now want to stop sending the company your data, you can still switch to a local account after the fact. Microsoft even had instructions on how to do this on its official support website – or at least it used to…

Microsoft’s ‘Change from a local account to a Microsoft Account’ guide shows users how they can change their Windows 11 PC login credentials to use their Microsoft Account. The company also supplied instructions on how to ‘Change from a Microsoft account to a local account’ on the same page. However, when we checked the page using the Wayback Machine, the instructions for the latter were still present on June 12, 2024, but had disappeared by June 17, 2024 – and they still haven’t returned.

Converting your Windows 11 PC’s login from a Microsoft Account to a local account is a pretty simple process. All you have to do is go to the Settings app, proceed to Accounts > Your info, and select “Sign in with a local account instead.” Follow the instructions on the screen, and you should be good to go.

[…]

It’s apparent that Microsoft really wants users to sign up for and use its services, much like how Google and Apple make you create an account so you can make full use of your Android or iDevice. While Windows 11 still lets you use the OS with a local account, these developments show that Microsoft wants this option to be as inaccessible as possible, at least for the average consumer.

Source: Microsoft Account to local account conversion guide erased from official Windows 11 guide — instructions redacted earlier this week | Tom’s Hardware

EFF: New License Plate Reader Vulnerabilities Prove The Tech Itself is a Public Safety Threat

Automated license plate readers “pose risks to public safety,” argues the EFF, “that may outweigh the crimes they are attempting to address in the first place.” When law enforcement uses automated license plate readers (ALPRs) to document the comings and goings of every driver on the road, regardless of a nexus to a crime, it results in gargantuan databases of sensitive information, and few agencies are equipped, staffed, or trained to harden their systems against quickly evolving cybersecurity threats. The Cybersecurity and Infrastructure Security Agency (CISA), a component of the U.S. Department of Homeland Security, released an advisory last week that should be a wake-up call to the thousands of local government agencies around the country that use ALPRs to surveil the travel patterns of their residents by scanning their license plates and “fingerprinting” their vehicles. The bulletin outlines seven vulnerabilities in Motorola Solutions’ Vigilant ALPRs, including missing encryption and insufficiently protected credentials…

Unlike location data a person shares with, say, GPS-based navigation app Waze, ALPRs collect and store this information without consent and there is very little a person can do to have this information purged from these systems… Because drivers don’t have control over ALPR data, the onus for protecting the data lies with the police and sheriffs who operate the surveillance and the vendors that provide the technology. It’s a general tenet of cybersecurity that you should not collect and retain more personal data than you are capable of protecting. Perhaps ironically, a Motorola Solutions cybersecurity specialist wrote in Police Chief magazine this month that public safety agencies “are often challenged when it comes to recruiting and retaining experienced cybersecurity personnel,” even though “the potential for harm from external factors is substantial.” That partially explains why more than 125 law enforcement agencies reported data breaches or cyberattacks between 2012 and 2020, according to research by former EFF intern Madison Vialpando. The Motorola Solutions article claims that ransomware attacks “targeting U.S. public safety organizations increased by 142 percent” in 2023.

Yet, the temptation to “collect it all” continues to overshadow the responsibility to “protect it all.” What makes the latest CISA disclosure even more outrageous is that it is at least the third time in the last decade that major security vulnerabilities have been found in ALPRs… If there’s one positive thing we can say about the latest Vigilant vulnerability disclosures, it’s that for once a government agency identified and reported the vulnerabilities before they could do damage… The Michigan Cyber Command Center found a total of seven vulnerabilities in Vigilant devices, two of which were medium severity and five of which were high severity…

But a data breach isn’t the only way that ALPR data can be leaked or abused. In 2022, an officer in the Kechi (Kansas) Police Department accessed ALPR data shared with his department by the Wichita Police Department to stalk his wife.

The article concludes that public safety agencies should “collect only the data they need for actual criminal investigations.

“They must never store more data than they adequately protect within their limited resources, or they must keep the public safe from data breaches by not collecting the data at all.”

Source: EFF: New License Plate Reader Vulnerabilities Prove The Tech Itself is a Public Safety Threat

EU delays decision over continuous spying on all your devices *cough* scanning encrypted messages for kiddie porn

European Union officials have delayed talks over proposed legislation that could lead to messaging services having to scan photos and links to detect possible child sexual abuse material (CSAM). Were the proposal to become law, it may require the likes of WhatsApp, Messenger and Signal to scan all images that users upload — which would essentially force them to break encryption.

For the measure to pass, it would need to have the backing of at least 15 of the member states representing at least 65 percent of the bloc’s entire population. However, countries including Germany, Austria, Poland, the Netherlands and the Czech Republic were expected to abstain from the vote or oppose the plan due to cybersecurity and privacy concerns, Politico reports. If EU members come to an agreement on a joint position, they’ll have to hash out a final version of the law with the European Commission and European Parliament.

The legislation was first proposed in 2022 and it could result in messaging services having to scan all images and links with the aim of detecting CSAM and communications between minors and potential offenders. Under the proposal, users would be informed about the link and image scans in services’ terms and conditions. If they refused, they would be blocked from sharing links and images on those platforms. However, as Politico notes, the draft proposal includes an exemption for “accounts used by the State for national security purposes.”

[…]

Patrick Breyer, a digital rights activist who was a member of the previous European Parliament before this month’s elections, has argued that proponents of the so-called “chat control” plan aimed to take advantage of a power vacuum before the next parliament is constituted. Breyer says that the delay of the vote, prompted in part by campaigners, “should be celebrated,” but warned that “surveillance extremists among the EU governments” could again attempt to advance chat control in the coming days.

Other critics and privacy advocates have slammed the proposal. Signal president Meredith Whittaker said in a statement that “mass scanning of private communications fundamentally undermines encryption,” while Edward Snowden described it as a “terrifying mass surveillance measure.”

[…]

The EU is not the only entity to attempt such a move. In 2021, Apple revealed a plan to scan iCloud Photos for known CSAM. However, it scrapped that controversial effort following criticism from the likes of customers, advocacy groups and researchers.

Source: EU delays decision over scanning encrypted messages for CSAM

Watch out very, very carefully as soon as people start taking your freedoms in the name of “protecting children”.

FedEx’s Secretive Police Force Is Helping Cops Build An AI Car Surveillance Network

[…] Forbes has learned the shipping and business services company is using AI tools made by Flock Safety, a $4 billion car surveillance startup, to monitor its distribution and cargo facilities across the United States. As part of the deal, FedEx is providing its Flock surveillance feeds to law enforcement, an arrangement that Flock has with at least four multi-billion dollar private companies. But publicly available documents reveal that some local police departments are also sharing their Flock feeds with FedEx — a rare instance of a private company availing itself of a police surveillance apparatus.

To civil rights activists, such close collaboration has the potential to dramatically expand Flock’s car surveillance network, which already spans 4,000 cities across over 40 states and some 40,000 cameras that track vehicles by license plate, make, model, color and other identifying characteristics, like dents or bumper stickers. Lisa Femia, staff attorney at the Electronic Frontier Foundation, said because private entities aren’t subject to the same transparency laws as police, this sort of arrangement could “[leave] the public in the dark, while at the same time expanding a sort of mass surveillance network.”

[…]

It’s unclear just how widely law enforcement is sharing Flock data with FedEx. According to publicly available lists of data sharing partners, two police departments have granted the FedEx Air Carrier Police Department access to their Flock cameras: Shelby County Sheriff’s Office in Tennessee and Pittsboro Police Department in Indiana.

Shelby County Sheriff’s Office public information officer John Morris confirmed the collaboration. “We share reads from our Flock license plate readers with FedEx in the same manner we share the data with other law enforcement agencies, locally, regionally, and nationally,” he told Forbes via email.

[…]

FedEx is also sharing its Flock camera feeds with other police departments, including the Greenwood Police Department in Indiana, according to Matthew Fillenwarth, assistant chief at the agency. Morris at Shelby County Sheriff’s Office confirmed his department had access to FedEx’s Flock feeds too. Memphis Police Department said it received surveillance camera feeds from FedEx through its Connect Memphis system.

[…]

Flock, which was founded in 2017, has raised more than $482 million in venture capital investment from the likes of Andreessen Horowitz, helping it expand its vast network of cameras across America through both public police department contracts and through more secretive agreements with private businesses.

Forbes has now uncovered at least four corporate giants using Flock, none of which had publicly disclosed contracts with the surveillance startup. As Forbes previously reported, $50 billion-valued Simon Property, the country’s biggest mall owner, and home improvement giant Lowe’s, are two of the biggest clients. Like FedEx, Simon Property also has provided its mall feeds to local cops.

[…]

Kaiser Permanente, the largest health insurance company in America, has shared Flock data with the Northern California Regional Intelligence Center, an intelligence hub that provides support to local and federal police investigating major crimes across California’s west coast.

[…]

Flock’s senior vice president of policy and communications Joshua Thomas declined to comment on private customers. “Flock’s technology and tools help our customers bolster their public safety efforts by helping to deter and solve crime efficiently and objectively,” Thomas said. “Objective video evidence is crucial to solving crime and we support our customers sharing that evidence with those that they are legally allowed to do so with.”

He said Flock was helping to solve “thousands of crimes nationwide” and is working toward its “goal of leveraging technology to eliminate crime.” Forbes previously found that Flock’s marketing data had exaggerated its impact on crime rates and that the company had itself likely broken the law across various states by installing cameras without the right permits.

Source: FedEx’s Secretive Police Force Is Helping Cops Build An AI Car Surveillance Network

Signal, MEPs urge EU Council to drop law that puts a spy on everyone’s devices

On Thursday, the EU Council is scheduled to vote on a legislative proposal that would attempt to protect children online by disallowing confidential communication.

The vote had been set for Wednesday but got pushed back [PDF].

Known to detractors as Chat Control, the proposal seeks to prevent the online dissemination of child sexual abuse material (CSAM) by requiring internet service providers to scan digital communication – private chats, emails, social media messages, and photos – for unlawful content.

The proposal [PDF], recognizing the difficulty of explicitly outlawing encryption, calls for “client-side scanning” or “upload moderation” – analyzing content on people’s mobile devices and computers for certain wrongdoing before it gets encrypted and transmitted.

The idea is that algorithms running locally on people’s devices will reliably recognize CSAM (and whatever else is deemed sufficiently awful), block it, and/or report it to authorities. This act of automatically policing and reporting people’s stuff before it’s even had a chance to be securely transferred rather undermines the point of encryption in the first place.
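
As a rough sketch of the mechanism being debated (everything here is hypothetical: real proposals envision perceptual hashes or on-device classifiers rather than exact SHA-256 matching, and the function names are invented), a client-side scanner sits between the user and the encryption step:

```python
import hashlib

# Hypothetical blocklist of hashes of known-unlawful content.
BLOCKLIST = {hashlib.sha256(b"known-bad-image-bytes").hexdigest()}

def scan_before_send(payload: bytes) -> bool:
    """Return True if the payload is allowed to proceed to encryption."""
    return hashlib.sha256(payload).hexdigest() not in BLOCKLIST

def send(payload, encrypt):
    # The scan runs on the *plaintext*, before encryption -- which is
    # exactly why critics argue it undermines end-to-end encryption:
    # the content is inspected while it is still readable.
    if not scan_before_send(payload):
        return None  # blocked, and under the proposal, reported
    return encrypt(payload)
```

Whatever the label – “client-side scanning” or “upload moderation” – the structure is the same: content is inspected in the clear on the device before the encrypted channel ever sees it.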

We’ve been here before. Apple announced plans to implement a client-side scanning scheme back in August 2021, only to face withering criticism from the security community and civil society groups. In late 2021, the iGiant essentially abandoned the idea.

Europe’s planned “regulation laying down rules to prevent and combat child sexual abuse” is not the only legislative proposal that contemplates client-side scanning as a way to front-run the application of encryption. The US Earn-It Act imagines something similar.

In the UK, the Online Safety Act of 2023 includes a content scanning requirement, though with the government’s acknowledgement that enforcement isn’t presently feasible. While it does allow telecoms regulator Ofcom to require online platforms to adopt an “accredited technology” to identify unlawful content, there is currently no such technology and it’s unclear how accreditation would work.

With the EU proposal vote approaching, opponents of the plan have renewed their calls to shelve the pre-crime surveillance regime.

In an open letter [PDF] on Monday, Meredith Whittaker, CEO of Signal, which threatened to withdraw its app from the UK if the Online Safety Act disallowed encryption, reiterated why the EU client-side scanning plan is unworkable and dangerous.

“There is no way to implement such proposals in the context of end-to-end encrypted communications without fundamentally undermining encryption and creating a dangerous vulnerability in core infrastructure that would have global implications well beyond Europe,” wrote Whittaker.


“Instead of accepting this fundamental mathematical reality, some European countries continue to play rhetorical games.

“They’ve come back to the table with the same idea under a new label. Instead of using the previous term ‘client-side scanning,’ they’ve rebranded and are now calling it ‘upload moderation.’

“Some are claiming that ‘upload moderation’ does not undermine encryption because it happens before your message or video is encrypted. This is untrue.”

The Internet Architecture Board, part of the Internet Engineering Task Force, offered a similar assessment of client-side scanning in December.

Encrypted comms service Threema published its own variation on this theme on Monday, arguing that mass surveillance is incompatible with democracy, is ineffective, and undermines data security.

“Should it pass, the consequences would be devastating: Under the pretext of child protection, EU citizens would no longer be able to communicate in a safe and private manner on the internet,” the biz wrote.


“The European market’s location advantage would suffer a massive hit due to a substantial decrease in data security. And EU professionals like lawyers, journalists, and physicians could no longer uphold their duty to confidentiality online. All while children wouldn’t be better protected in the least bit.”

Threema said if it isn’t allowed to offer encryption, it will leave the EU.

And on Tuesday, 37 Members of the European Parliament signed an open letter to the EU Council urging legislators to reject Chat Control.

“We explicitly warn that the obligation to systematically scan encrypted communication, whether called ‘upload-moderation’ or ‘client-side scanning,’ would not only break secure end-to-end encryption, but will to a high probability also not withstand the case law of the European Court of Justice,” the MEPs said. “Rather, such an attack would be in complete contrast to the European commitment to secure communication and digital privacy, as well as human rights in the digital space.”

Source: Signal, MEPs urge EU Council to drop encryption-eroding law • The Register

Hey, EU, stop spying on us! We are supposed to be the free ones here.

Sonos draws more customer anger — this time for its privacy policy. Now they will sell your customer data, apparently

It’s been a rocky couple of months for Sonos — so much so that CEO Patrick Spence now has a canned autoreply for customers emailing him to vent about the redesigned app. But as the company works to right the ship, restore trust, and get the new Sonos Ace headphones off to a strong start, it finds itself in the middle of yet another controversy.

As highlighted by repair technician and consumer privacy advocate Louis Rossmann, Sonos has made a significant change to its privacy policy, at least in the United States, with the removal of one key line. The updated policy no longer contains a sentence that previously said, “Sonos does not and will not sell personal information about our customers.” That pledge is still present in other countries, but it’s nowhere to be found in the updated US policy, which went into effect earlier this month.

Now, some customers, already feeling burned by the new Sonos app’s unsteady performance, are sounding off about what they view as another poor decision from the company’s leadership. For them, it’s been one unforced error after another from a brand they once recommended without hesitation.

[…]

As part of its reworked app platform, Sonos rolled out web-based access for all customer systems — giving the cloud an even bigger role in the company’s architecture. Unfortunately, the web app currently lacks any kind of two-factor authentication, which has also irked users; all it takes is an email address and password to remotely control Sonos devices.

[…]

Source: Sonos draws more customer anger — this time for its privacy policy – The Verge

If I had an “idiocy” tag, I would have used it for these bozos.

Google Leak Reveals Thousands of Privacy Incidents

Google has accidentally collected children’s voice data, leaked the trips and home addresses of carpool users, and made YouTube recommendations based on users’ deleted watch history, among thousands of other employee-reported privacy incidents, according to a copy of an internal Google database tracking six years’ worth of potential privacy and security issues, obtained by 404 Media. From the report: Individually the incidents, most of which have not been previously publicly reported, may only each impact a relatively small number of people, or were fixed quickly. Taken as a whole, though, the internal database shows how one of the most powerful and important companies in the world manages, and often mismanages, a staggering amount of personal, sensitive data on people’s lives.

The data obtained by 404 Media includes privacy and security issues that Google’s own employees reported internally. These include issues with Google’s own products or data collection practices; vulnerabilities in third party vendors that Google uses; or mistakes made by Google staff, contractors, or other people that have impacted Google systems or data. The incidents include everything from a single errant email containing some PII, through to substantial leaks of data, right up to impending raids on Google offices. When reporting an incident, employees give the incident a priority rating, P0 being the highest, P1 being a step below that. The database contains thousands of reports over the course of six years, from 2013 to 2018. In one 2016 case, a Google employee reported that Google Street View’s systems were transcribing and storing license plate numbers from photos. They explained that Google uses an algorithm to detect text in Street View imagery.

Source: https://tech.slashdot.org/story/24/06/03/1655212/google-leak-reveals-thousands-of-privacy-incidents

Top EU court says there is no right to online anonymity, because copyright is more important

A year ago, Walled Culture wrote about an extremely important case that was being considered by the Court of Justice of the European Union (CJEU), the EU’s top court. The central question was whether the judges considered that copyright was more important than privacy. The bad news is that the CJEU has just decided that it is:

The Court, sitting as the Full Court, holds that the general and indiscriminate retention of IP addresses does not necessarily constitute a serious interference with fundamental rights.

IP addresses refer to the identifying Internet number assigned to a user’s system when it is online. That may change each time someone uses the Internet, but if Internet Service Providers are required by law to retain information about who was assigned a particular address at a given time, then it is possible to carry out routine surveillance of people’s online activities. The CJEU has decided this is acceptable:

EU law does not preclude national legislation authorising the competent public authority, for the sole purpose of identifying the person suspected of having committed a criminal offence, to access the civil identity data associated with an IP address
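
Concretely, “accessing the civil identity data associated with an IP address” amounts to a lookup against retained ISP records. A toy sketch of that lookup (all records and names here are invented for illustration):

```python
from datetime import datetime

# Invented retention log: which subscriber held which dynamic IP, and when.
# Under a retention mandate, ISPs must keep records like these so that an IP
# address observed online (say, in a file-sharing swarm) can later be tied
# back to a named account holder.
LEASES = [
    ("203.0.113.7", datetime(2024, 6, 1, 8, 0), datetime(2024, 6, 1, 20, 0), "subscriber-A"),
    ("203.0.113.7", datetime(2024, 6, 1, 20, 0), datetime(2024, 6, 2, 8, 0), "subscriber-B"),
]

def identify(ip, seen_at):
    """Resolve an (IP, timestamp) observation to a subscriber, if a record was retained."""
    for lease_ip, start, end, subscriber in LEASES:
        if lease_ip == ip and start <= seen_at < end:
            return subscriber
    return None
```

The same IP resolves to different people at different times, which is precisely why the mandate covers *who held which address when* rather than the addresses alone.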

The key problem is that copyright infringement by a private individual is regarded by the court as something so serious that it negates the right to privacy. It’s a sign of the twisted values that copyright has succeeded in imposing on many legal systems. It equates the mere copying of a digital file with serious crimes that merit a prison sentence, an evident absurdity.

As one of the groups that brought the original case, La Quadrature du Net, writes, this latest decision also has serious negative consequences for human rights in the EU:

Whereas in 2020, the CJEU considered that the retention of IP addresses constituted a serious interference with fundamental rights and that they could only be accessed, together with the civil identity of the Internet user, for the purpose of fighting serious crime or safeguarding national security, this is no longer true. The CJEU has reversed its reasoning: it now considers that the retention of IP addresses is, by default, no longer a serious interference with fundamental rights, and that it is only in certain cases that such access constitutes a serious interference that must be safeguarded with appropriate protection measures.

As a result, La Quadrature du Net says:

While in 2020 [the CJEU] stated that there was a right to online anonymity enshrined in the ePrivacy Directive, it is now abandoning it. Unfortunately, by giving the police broad access to the civil identity associated with an IP address and to the content of a communication, it puts a de facto end to online anonymity.

This is a good example of how copyright’s continuing obsession with ownership and control of digital material is warping the entire legal system in the EU. What was supposed to be simply a fair way of rewarding creators has resulted in a monstrous system of routine government surveillance carried out on hundreds of millions of innocent people just in case they copy a digital file.

Source: Top EU court says there is no right to online anonymity, because copyright is more important – Walled Culture

FCC fines America’s largest wireless carriers $200 million for selling customer location data without permission

The Federal Communications Commission has slapped the largest mobile carriers in the US with fines totaling nearly $200 million for selling access to their customers’ location information without consent. AT&T was ordered to pay $57 million, while Verizon has to pay $47 million. Meanwhile, Sprint and T-Mobile are facing a combined penalty of $92 million, since the two companies have merged. The FCC conducted an in-depth investigation into the carriers’ unauthorized disclosure and sale of subscribers’ real-time location data after their activities came to light in 2018.

To sum up the practice in the words of FCC Commissioner Jessica Rosenworcel: The carriers sold “real-time location information to data aggregators, allowing this highly sensitive data to wind up in the hands of bail-bond companies, bounty hunters, and other shady actors.” According to the agency, the scheme started to unravel following public reports that a sheriff in Missouri was tracking numerous individuals by using location information a company called Securus gets from wireless carriers. Securus provides communications services to correctional facilities in the country.

While the carriers eventually ceased their activities, the agency said they continued operating their programs for a year after the practice was revealed and after they had promised the FCC that they would stop selling customer location data. Further, they carried on without reasonable safeguards in place to ensure that the legitimate services using their customers’ information, such as roadside assistance and medical emergency services, truly were obtaining users’ consent to track their locations.

Source: FCC fines America’s largest wireless carriers $200 million for selling customer location data

Nintendo wipes over 8,000 GitHub repositories hosting Yuzu emulator code with a single DMCA notice

Nintendo sent a Digital Millennium Copyright Act (DMCA) notice for over 8,000 GitHub repositories hosting code from the Yuzu Switch emulator, which the Zelda maker previously described as enabling “piracy at a colossal scale.” The sweeping takedown comes two months after Yuzu’s creators quickly settled a lawsuit with Nintendo and its notoriously trigger-happy legal team for $2.4 million.

GamesIndustry.biz first reported on the DMCA notice, affecting 8,535 GitHub repos. Redacted entities representing Nintendo assert that the Yuzu source code contained in the repos “illegally circumvents Nintendo’s technological protection measures and runs illegal copies of Switch games.”

GitHub wrote in the notice that developers will have time to change their content before it’s disabled. In keeping with its developer-friendly approach and branding, the Microsoft-owned platform also offered legal resources and guidance on submitting DMCA counter-notices.

Nintendo’s legal blitz, perhaps not coincidentally, comes as game emulators are enjoying a resurgence. Last month, Apple loosened its restrictions on retro game emulators in the App Store (likely in response to regulatory threats), leading to the Delta emulator establishing itself as the de facto choice and reaching the App Store’s top spot. Nintendo may have calculated that emulators’ moment in the sun threatened its bottom line and began by squashing those that most immediately imperiled its income stream.

Sadly, Nintendo’s largely undefended legal assault against emulators ignores a crucial use for them that isn’t about piracy. Game historians see the software as a linchpin of game preservation. Without emulators, Nintendo and other copyright holders could make a part of history inaccessible for future generations, as the corresponding hardware will eventually be harder to come by.

Helldivers 2 PC players suddenly have to link to a PSN account and they’re not being chill about it

[…]

This has royally pissed off PC players, though it’s worth noting that it’s free to make a PSN account. This has led to review bombing on Steam and many promises to abandon the game when the linking becomes a requirement, according to a report by Kotaku. The complaints range from frustration over adding yet another barrier to entry after downloading an 80GB game to fears that the PSN account would likely be hacked. While it is true that Sony was the target of a huge hack that impacted 77 million PSN accounts, that was back in 2011. Obama was still in his first term. Also worth noting? Steam was hacked in 2011, impacting 35 million accounts.

[…]

Source: Helldivers 2 PC players suddenly have to link to a PSN account and they’re not being chill about it

People Are Slowly Realizing Their Auto Insurance Rates Are Skyrocketing Because Their Car Is Covertly Spying On Them

Last month the New York Times’ Kashmir Hill published a major story on how GM collects driver behavior data, then sells access (through LexisNexis) to insurance companies, which then jack up your rates.

The absolute bare minimum you could expect from the auto industry here is that they’re doing this in a way that’s clear to car owners. But of course they aren’t; they’re burying “consent” deep in the mire of some hundred-page end user agreement nobody reads, usually attached not to the car purchase itself but to the apps consumers use to manage roadside assistance and other features.

Since Kashmir’s story was published, she says she’s been inundated with complaints from consumers about similar behavior. She even discovered that she’s one of the people GM spied on and tattled to insurers about. In a follow-up story, she recounts how she and her husband bought a Chevy Bolt, were auto-enrolled in a driver assistance program, then had their data (which they couldn’t access) sold to insurers.

GM’s now facing 10 different federal lawsuits from customers pissed off that they were surreptitiously tracked and then forced to pay significantly more for insurance:

“In 10 federal lawsuits filed in the last month, drivers from across the country say they did not knowingly sign up for Smart Driver but recently learned that G.M. had provided their driving data to LexisNexis. According to one of the complaints, a Florida owner of a 2019 Cadillac CTS-V who drove it around a racetrack for events saw his insurance premium nearly double, an increase of more than $5,000 per year.”

GM (and some apologists) will of course proclaim that it is only fair for reckless drivers to pay more, but that’s generally not how it works. Pressured to deliver ever-growing quarterly returns, insurance companies will use absolutely anything they find in the data to justify raising rates.

[…]

Automakers — which have long had some of the worst privacy reputations in all of tech — are one of countless industries that lobbied relentlessly for decades to ensure Congress never passed a federal privacy law or regulated dodgy data brokers. And that the FTC — the over-burdened regulator tasked with privacy oversight — lacks the staff, resources, or legal authority to police the problem at any real scale.

The end result is just a parade of scandals. And if Hill were so inclined, she could write a similar story about every tech sector in America, given everything from your smart TV and electricity meter to refrigerator and kids’ toys now monitor your behavior and sell access to those insights to a wide range of dodgy data broker middlemen, all with nothing remotely close to ethics or competent oversight.

And despite the fact that this free-for-all environment is resulting in no end of dangerous real-world harms, our Congress has been lobbied into gridlock by a cross-industry coalition of companies with near-unlimited budgets, all desperately hoping that their performative concerns about TikTok will distract everyone from the fact that we live in a country too corrupt to pass a real privacy law.

Source: People Are Slowly Realizing Their Auto Insurance Rates Are Skyrocketing Because Their Car Is Covertly Spying On Them | Techdirt

Ring Spy Doorbell customers get measly $5.6 million in refunds in privacy settlement

In a 2023 complaint, the FTC accused the doorbell camera and home security provider of allowing its employees and contractors to access customers’ private videos. Ring allegedly used such footage to train algorithms without consent, among other purposes.

Ring was also charged with failing to implement key security protections, which enabled hackers to take control of customers’ accounts, cameras and videos. This led to “egregious violations of users’ privacy,” the FTC noted.

The resulting settlement required Ring to delete content that was found to be unlawfully obtained, establish stronger security protections

[…]

the FTC is sending 117,044 PayPal payments to impacted consumers who had certain types of Ring devices — including indoor cameras — during the timeframes that the regulators allege unauthorized access took place.

[…]

Earlier this year, the California-based company separately announced that it would stop allowing police departments to request doorbell camera footage from users, marking an end to a feature that had drawn criticism from privacy advocates.

Source: Ring customers get $5.6 million in refunds in privacy settlement | AP News

Considering the size of Ring and its customer base, this is a very, very light tap on the wrist for delivering poor security in a product that watches everything on the street.

Europol asks tech firms, governments to unencrypt your private messages

In a joint declaration of European police chiefs published over the weekend, Europol said it needs lawful access to private messages, and said tech companies need to be able to scan them (ostensibly impossible with E2EE implemented) to protect users. Without such access, cops fear they won’t be able to prevent “the most heinous of crimes” like terrorism, human trafficking, child sexual abuse material (CSAM), murder, drug smuggling and other crimes.

“Our societies have not previously tolerated spaces that are beyond the reach of law enforcement, where criminals can communicate safely and child abuse can flourish,” the declaration said. “They should not now.”

Not exactly true – most EU countries do not tolerate anyone opening your private (snail) mail without a warrant.

The joint statement, which was agreed to in cooperation with the UK’s National Crime Agency, isn’t exactly making a novel claim. It’s nearly the same line of reasoning that the Virtual Global Taskforce, an international law enforcement group founded in 2003 to combat CSAM online, made last year when Meta first started talking about implementing E2EE on Messenger and Instagram.

While Meta is not named in this latest declaration itself [PDF], Europol said that its opposition to E2EE “comes as end-to-end encryption has started to be rolled out across Meta’s messenger platform.” The UK NCA made a similar statement in its comments on the Europol missive released over the weekend.

The declaration urges the tech industry not to see user privacy as a binary choice, but rather as something that can be assured without depriving law enforcement of access to private communications.

Not really though. And if law enforcement can get at it, then so can everyone else.

[…] Gail Kent, Meta’s global policy director for Messenger, said in December the E2EE debate is far more complicated than the child safety issue that law enforcement makes it out to be, and leaving an encryption back door in products for police to take advantage of would only hamper trust in its messaging products.

Kent said Meta’s E2EE implementation prevents client-side scanning of content, which has been one of the biggest complaints from law enforcement. Kent said even that technology would violate user trust, as it serves as a workaround to intrude on user privacy without compromising encryption – an approach Meta is unwilling to take, according to Kent’s blog post.

As was pointed out during previous attempts to undermine E2EE, an encryption back door (client-side scanning or otherwise) would not only provide an inroad for criminals to access secured information, it also wouldn’t stop criminals from finding some other way to send illicit content beyond the reach of law enforcement.

[…]

“We don’t think people want us reading their private messages, so have developed safety measures that prevent, detect and allow us to take action against this heinous abuse, while maintaining online privacy and security,” a Meta spokesperson told us last year. “It’s misleading and inaccurate to say that encryption would have prevented us from identifying and reporting accounts … to the authorities.”

In other words, don’t expect Meta to cave on this one when it can develop a fancy new detection algorithm instead.

Source: Europol asks tech firms, governments to get rid of E2EE • The Register

And every time they come for your freedom whilst quoting child safety – look out.

EDPS warns of EU plans to spy on personal chat messages

This week, during the presentation of the 2023 annual review (pdf), the European privacy supervisor EDPS again warned about European plans to monitor chat messages from European citizens. According to the watchdog, this leads to ‘irreversible surveillance’.

At the beginning of 2022, the European Commission came up with a proposal to inspect all chat messages and other communications from citizens for child abuse. In the case of end-to-end encrypted chat services, this should be done via client-side scanning.

The European Parliament voted against the proposal but put forward its own version. The European member states, however, have not yet agreed on a joint position.

Already in 2022, the EDPS raised the alarm about the European Commission’s proposal to monitor citizens’ communications, calling it a serious risk to the fundamental rights of 450 million Europeans.

Source: EDPS warns of European plans to monitor chat messages – Emerce

Sure, so the EU is not much of a democracy with the European Council (which is where the actual power is) not being elected at all, but that doesn’t mean it has to be a surveillance police state.

US Hospital Websites Almost All Give your Data to 3rd parties, but Many just don’t tell you about it

In this cross-sectional analysis of a nationally representative sample of 100 nonfederal acute care hospitals, 96.0% of hospital websites transmitted user information to third parties, whereas 71.0% of websites included a publicly accessible privacy policy. Of 71 privacy policies, 40 (56.3%) disclosed specific third-party companies receiving user information.

[…]

Of 100 hospital websites, 96 […] transferred user information to third parties. Privacy policies were found on 71 websites […] 70 […] addressed how collected information would be used, 66 […] addressed categories of third-party recipients of user information, and 40 […] named specific third-party companies or services receiving user information.

[…]

In this cross-sectional study of a nationally representative sample of 100 nonfederal acute care hospitals, we found that although 96.0% of hospital websites exposed users to third-party tracking, only 71.0% of websites had an available website privacy policy. Policies averaged more than 2,500 words in length and were written at a college reading level. Given estimates that more than one-half of adults in the US lack literacy proficiency and that the average patient in the US reads at a grade 8 level, the length and complexity of privacy policies likely pose substantial barriers to users’ ability to read and understand them.27,32

[…]

Only 56.3% of policies (and only 40 hospitals overall) identified specific third-party recipients. Named third-parties tended to be companies familiar to users, such as Google. This lack of detail regarding third-party data recipients may lead users to assume that they are being tracked only by a small number of companies that they know well, when, in fact, hospital websites included in this study transferred user data to a median of 9 domains.

[…]

In addition to presenting risks for users, inadequate privacy policies may pose risks for hospitals. Although hospitals are generally not required under federal law to have a website privacy policy that discloses their methods of collecting and transferring data from website visitors, hospitals that do publish website privacy policies may be subject to enforcement by regulatory authorities like the Federal Trade Commission (FTC).33 The FTC has taken the position that entities that publish privacy policies must ensure that these policies reflect their actual practices.34 For example, entities that promise they will delete personal information upon request but fail to do so in practice may be in violation of the FTC Act.34

[…]

Source: User Information Sharing and Hospital Website Privacy Policies | Ethics | JAMA Network Open | JAMA Network

Dutch investigation into Android smartphones leads to new lawsuit against Google over Play Services’ constant surveillance

The Mass Damage & Consumer Foundation today announced that it has initiated a class action lawsuit against Google over its Android operating system. The reason is a new study that shows how Dutch Android smartphones systematically transfer large amounts of information about device use to Google. Even with the most privacy-friendly options enabled, user data cannot be prevented from ending up on Google’s servers. According to the foundation, this is not clear to Android users, let alone whether they have given permission for this.

For the research, a team of scientists purchased several Android phones between 2022 and 2024 and captured, decrypted and analyzed the outgoing traffic on a Dutch server. This shows that a bundle of processes called ‘Google Play Services’ runs silently in the background and cannot be disabled or deleted. These processes continuously record what happens on and around the phone. For example, Google shares which apps someone uses, products they order and even whether users are sleeping.

More than nine million Dutch people

The Mass Damage & Consumer Foundation states that Google’s conduct violates a large number of Dutch and European rules that must protect consumers. The foundation wants to use a lawsuit to force Google to implement fundamental (privacy) changes to the Android platform and to offer an opt-out option for every form of data it collects, not just a few.

[…]

Identity can be easily traced

The research paid specific attention to the use of unique identifiers (UIDs). These are characteristics that Google can link to the collected data, such as an e-mail address or the Android ID, a unique serial number by which someone is known to Google. The use of these identifiers is sensitive. Google itself advises against the use of unique identifiers in its own guidelines for app developers: users could unintentionally be tracked across multiple apps. Yet one or more of these unique identifiers were found in every data transmission examined, without exception. The researchers point out that this makes it easy to tie someone’s identity to virtually everything that happens on and around an Android device.
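To make concrete why stable UIDs are considered so sensitive, here is a minimal sketch (with entirely made-up data, app names, and field names) of how two otherwise unrelated data streams tagged with the same device identifier can be joined into a single behavioural profile:

```python
# Illustration with fabricated data: two apps independently log events,
# each tagged with the same stable device identifier (e.g. an Android ID).
fitness_log = [
    {"device_id": "a1b2c3", "event": "sleep_start", "time": "23:10"},
    {"device_id": "ffee99", "event": "sleep_start", "time": "22:45"},
]
shopping_log = [
    {"device_id": "a1b2c3", "event": "order_placed", "item": "running shoes"},
]

# Joining on the shared identifier links behaviour across otherwise
# unrelated apps -- exactly the cross-app tracking the guidelines warn about.
profile = {}
for row in fitness_log + shopping_log:
    profile.setdefault(row["device_id"], []).append(row["event"])

print(profile["a1b2c3"])  # ['sleep_start', 'order_placed']
```

Once any one of those streams also carries an e-mail address, the whole joined profile becomes attributable to a named person.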

[…]

Source: Dutch investigation into Android smartphones leads to new lawsuit against Google – Mass Damage & Consumer Foundation

Academics Try to Figure Out Apple’s default apps Privacy Settings and Fail

A study has concluded that Apple’s privacy practices aren’t particularly effective, because default apps on the iPhone and Mac have limited privacy settings and confusing configuration options.

The research was conducted by Amel Bourdoucen and Janne Lindqvist of Aalto University in Finland. The pair noted that while many studies had examined privacy issues with third-party apps for Apple devices, very little literature investigates the issue in first-party apps – like Safari and Siri.

The aims of the study [PDF] were to investigate how much data Apple’s own apps collect and where it’s sent, and to see if users could figure out how to navigate the landscape of Apple’s privacy settings.

[…]

“Our work shows that users may disable default apps, only to discover later that the settings do not match their initial preference,” the paper states.

“Our results demonstrate users are not correctly able to configure the desired privacy settings of default apps. In addition, we discovered that some default app configurations can even reduce trust in family relationships.”

The researchers criticize data collection by Apple apps like Safari and Siri, where that data is sent, how users can (and can’t) disable that data tracking, and how Apple presents privacy options to users.

The paper illustrates these issues in a discussion of Apple’s Siri voice assistant. While users can ostensibly choose not to enable Siri in the initial setup on macOS-powered devices, it still collects data from other apps to provide suggestions. To fully disable Siri, Apple users must find privacy-related options across five different submenus in the Settings app.

Apple’s own documentation for how its privacy settings work isn’t good either. It doesn’t mention every privacy option, explain what is done with user data, or highlight whether settings are enabled or disabled. Also, it’s written in legalese, which almost guarantees no normal user will ever read it.

[…]

The authors also conducted a survey of Apple users and quizzed them on whether they really understood how privacy options worked on iOS and macOS, and what apps were doing with their data.

While the survey was very small – it covered just 15 respondents – the results indicated that Apple’s privacy settings could be hard to navigate.

Eleven of the surveyed users were well aware of data tracking and that it was mostly on by default. However, when informed about how privacy options work in iOS and macOS, nine of the surveyed users were surprised by the scope of data collection.

[…]

Users were also tested on their knowledge of privacy settings for eight default apps – including Siri, Family Sharing, Safari, and iMessage. According to the study, none could confidently figure out how to work their way around the Settings menu to completely disable default apps. When confused, users relied on searching the internet for answers, rather than Apple’s privacy documentation.

[…]

Assuming Apple has any interest in fixing these shortcomings, the team made a few suggestions. Since many users first went to operating system settings instead of app-specific settings when attempting to disable data tracking, centralizing these options would assist users and keep them from getting frustrated and giving up on finding the settings they’re looking for.

Informing users what specific settings do would also be an improvement – many settings are labelled with just a name, but no further details. The researchers suggest replacing Apple’s jargon-filled privacy policy with plain-language descriptions embedded in the settings menu itself, and maybe even providing some infographic illustrations as well. Anything would be better than legalese.

While this study probably won’t convince Apple to change its ways, lawsuits might have better luck. Apple has been sued multiple times for not transparently disclosing its data tracking. One of the latest suits calls out Apple’s broken promises about privacy, claiming that “Apple does not honor users’ requests to restrict data sharing.”

[…]

Reminder: Apple has a multi-billion-dollar online ads business that it built while strongly criticizing Facebook and others for their privacy practices.

Source: Academics reckon Apple’s default apps have privacy pitfalls • The Register

Roku’s New Idea to Show You Ads When You Pause Your Video Game, and to Spy on the Content on Your HDMI Cable, Is Horrifying

[…]

Roku describes its idea in a patent application, which largely flew under the radar when it was filed in November, and was recently spotted by the streaming newsletter Lowpass. In the application, Roku describes a system that’s able to detect when users pause third-party hardware and software and show them ads during that time.

According to the company, its new system works via an HDMI connection. This suggests that it’s designed to target users who play video games or watch content from other streaming services on their Roku TVs. Lowpass described Roku’s conundrum perfectly:

“Roku’s ability to monetize moments when the TV is on but not actively being used goes away when consumers switch to an external device, be it a game console or an attached streaming adapter from a competing manufacturer,” Janko Roettgers, the newsletter’s author, wrote. “Effectively, HDMI inputs have been a bit of a black box for Roku.”

In addition, Roku wouldn’t just show you any old ads. The company states that its innovation can recognize the content that users have paused and deliver customized related ads. Roku’s system would do this by using audio or video-recognition technologies to analyze what the user is watching or analyze the content’s metadata, among other methods.

[…]

In the case of gaming, there’s also the danger of Roku mistaking a long moment of pondering for a pause and sticking an ad in right when you’re getting ready to face the final boss. The company is aware of this potential failure and points out that its system will monitor the frames of the content being watched to ensure there really is a pause. It also plans to use other methods, such as analyzing the audio feed on the TV for extended moments of silence, to confirm there has been a pause.
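The patent application doesn’t publish an implementation, but the two heuristics it describes (checking that successive frames are identical and that the audio feed has gone silent) can be sketched roughly like this; all names and thresholds here are hypothetical:

```python
def is_paused(recent_frames, recent_audio_levels,
              min_still_frames=90, silence_threshold=0.01):
    """Return True only when the picture has been static AND the audio
    near-silent for the last `min_still_frames` samples."""
    if len(recent_frames) < min_still_frames:
        return False
    window = recent_frames[-min_still_frames:]
    frames_static = all(f == window[0] for f in window)
    audio_silent = all(level < silence_threshold
                       for level in recent_audio_levels[-min_still_frames:])
    return frames_static and audio_silent

# A static image with sound still playing (a player pondering a boss fight
# over the game's soundtrack) is NOT treated as a pause:
frames = ["frame_A"] * 120
print(is_paused(frames, [0.5] * 120))   # False -- audio still playing
print(is_paused(frames, [0.0] * 120))   # True  -- static and silent
```

Requiring both signals is what would (in theory) keep the “long moment of pondering” from triggering an ad, though any game with a quiet, static pause screen would still look identical to a real pause.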

[…]

Source: Roku’s New Idea to Show You Ads When You Pause Your Video Game Is Horrifying

Google will delete data collected from private browsing

In hopes of settling a lawsuit challenging its data collection practices, Google has agreed to destroy web browsing data it collected from users browsing in Chrome’s private modes – which weren’t as private as you might have thought.

The lawsuit [PDF], filed in June 2020 on behalf of plaintiffs Chasom Brown, Maria Nguyen, and William Byatt, sought to hold Google accountable for making misleading statements about privacy.

[…]

“Despite its representations that users are in control of what information Google will track and collect, Google’s various tracking tools, including Google Analytics and Google Ad Manager, are actually designed to automatically track users when they visit webpages – no matter what settings a user chooses,” the complaint claims. “This is true even when a user browses in ‘private browsing mode.'”

Chrome’s Incognito mode only provides privacy in the client by not keeping a locally stored record of the user’s browsing history. It does not shield website visits from Google.

[…]

During the discovery period from September 2020 through March 2022, Google produced more than 5.8 million pages of documents. Even so, it was sanctioned nearly $1 million in 2022 by Magistrate Judge Susan van Keulen – for concealing details about how it can detect when Chrome users employ Incognito mode.

What the plaintiffs’ legal team found might have been difficult to explain at trial.

“Google employees described Chrome Incognito Mode as ‘misleading,’ ‘effectively a lie,’ a ‘confusing mess,’ a ‘problem of professional ethics and basic honesty,’ and as being ‘bad for users, bad for human rights, bad for democracy,'” according to the declaration [PDF] of Mark C Mao, a partner with the law firm of Boies Schiller Flexner LLP, which represents the plaintiffs.

[…]

On December 26 last year the plaintiffs and Google agreed to settle the case. The plaintiffs’ attorneys have suggested the relief provided by the settlement is worth $5 billion – but nothing will be paid, yet.

The settlement covers two classes of people, both of which exclude those who were logged into their Google Account while browsing privately:

  • Class 1: All Chrome browser users with a Google account who accessed a non-Google website containing Google tracking or advertising code using such browser and who were (a) in “Incognito mode” on that browser and (b) were not logged into their Google account on that browser, but whose communications, including identifying information and online browsing history, Google nevertheless intercepted, received, or collected from June 1, 2016 through the present.
  • Class 2: All Safari, Edge, and Internet Explorer users with a Google account who accessed a non-Google website containing Google tracking or advertising code using such browser and who were (a) in a “private browsing mode” on that browser and (b) were not logged into their Google account on that browser, but whose communications, including identifying information and online browsing history, Google nevertheless intercepted, received, or collected from June 1, 2016 through the present.

The settlement [PDF] requires that Google: inform users that it collects private browsing data, both in its Privacy Policy and in an Incognito Splash Screen; “must delete and/or remediate billions of data records that reflect class members’ private browsing activities”; block third-party cookies in Incognito mode for the next five years (separately, Google is phasing out third-party cookies this year); and must delete the browser signals that indicate when private browsing mode is active, to prevent future tracking.

[…]

The class of affected people has been estimated to number about 136 million.

 

Source: Google will delete data collected from private browsing • The Register

The Digital Identity Wallet approved by parliament and council

On 28 February, the European Parliament gave its final approval to the Digital Identity Regulation, by 335 votes to 190 with 31 abstentions. It was adopted by the EU Council of Ministers on 26 March. The next step will be its publication in the Official Journal and its entry into force 20 days later.

The regulation introduces the EU Digital Identity Wallet, which will allow citizens to identify and authenticate themselves online to a range of public and private services, as well as store and share digital documents. Wallet users will also be able to create free digital signatures.

The EU Digital Identity Wallet will be used on a voluntary basis, and no one can be discriminated against for not using the wallet. The wallet will be open-source, to further encourage transparency, innovation, and enhance security.


Open-source code and new version of the ARF released for public feedback.

The open-source code of the EU Digital Identity Wallet and the latest version of the Architecture and Reference Framework (ARF) are now available on our GitHub.

Version 1.3 of the ARF is now available to the public to gather feedback before its adoption by the expert group. The ARF outlines how wallets distributed by Member States will function and contains a high-level overview of the standards and practices needed to build the wallet.

The open-source code of the wallet (also referred to as the reference implementation) is built on the specifications outlined in the ARF. It is based on a modular architecture composed of a set of business-agnostic, reusable components which will evolve in incremental steps and can be reused across multiple projects.

[…]

Large Scale Pilot projects are currently test driving the many use cases of the EU Digital Identity Wallet in the real world.


Source: The Digital Identity Wallet is now on its way – EU Digital Identity Wallet –

This is an immensely complex project which is very, very important to get right. I am very curious whether they did.

Soofa Digital Kiosks Snatch Your Phone’s Data When You Walk By, sell it on

Digital kiosks from Soofa seem harmless, giving you bits of information alongside some ads. However, these kiosks popping up throughout the United States grab your phone’s information and location data whenever you walk near them and sell it to local governments and advertisers, NBC Boston first reported Monday.

“At Soofa, we developed the first pedestrian impressions sensor that measures accurate foot traffic in real-time,” says a page on the company’s website. “Soofa advertisers can check their analytics dashboard anytime to see how their campaigns are tracking towards impressions goals.”

While data tracking is commonplace online, it’s becoming more pervasive in the real world. Whenever you walk past a Soofa kiosk, it collects your phone’s unique identifier (MAC address), manufacturer, and signal strength. This allows it to track anyone who walks within a certain, unspecified range. It then creates a dashboard to share with advertisers and local governments to display analytics about how many people are walking and engaging with its billboards.
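For context on how a passive sensor gets “manufacturer” out of a captured MAC address: the first three octets of a MAC are the IEEE-assigned vendor prefix (the OUI). A rough sketch of that derivation follows; the vendor table here is purely illustrative, not a real IEEE assignment:

```python
def split_mac(mac: str):
    """Split a MAC address into its vendor prefix (OUI) and the
    device-specific suffix."""
    octets = mac.lower().split(":")
    oui = ":".join(octets[:3])          # IEEE-assigned vendor prefix
    device_part = ":".join(octets[3:])  # device-specific part
    return oui, device_part

# Tiny illustrative lookup table; real OUI databases contain tens of
# thousands of entries, and this mapping is an example, not a real one.
OUI_VENDORS = {"3c:22:fb": "ExampleVendor"}

oui, suffix = split_mac("3C:22:FB:12:34:56")
print(oui, OUI_VENDORS.get(oui, "unknown"))  # 3c:22:fb ExampleVendor
```

Worth noting: modern iOS and Android devices randomize the MAC address used in Wi-Fi probe requests precisely to blunt this kind of passive tracking, though randomization is not always in effect once a device associates with a network.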

This can offer cities new ways to understand how people use public spaces and how many people are reading notices posted on these digital kiosks. However, it also gives local governments detailed information on how people move through society and raises questions about how this data is being used.

[…]

In an email to Gizmodo, a Soofa spokesperson said the company does not share data with any third parties and only offers the dashboard to the organization that bought the kiosk. The company also claims to anonymize your MAC address by the time it reaches advertisers and local governments.

However, Soofa also tells advertisers how to effectively use your location data on its website. It notes that advertisers can track when you’ve been near a physical billboard or kiosk in the real world based on location data. Then, using cookies, the advertisers can send you more digital ads later on. While Soofa didn’t invent this technique, it certainly seems to be promoting it.

[…]

Source: These Digital Kiosks Snatch Your Phone’s Data When You Walk By

Mass claim CUIC against virus scanner (but really tracking spyware) Avast

Privacy First has teamed up with the Austrian organisation NOYB (founded by privacy activist Max Schrems) to found the new mass-claim organisation CUIC. CUIC stands for Consumers United in Court, also pronounceable as ‘CU in Court’ (see you in court).

[…]

Millions spied on by virus scanner

CUIC today filed subpoenas against software company Avast, whose virus scanners illegally collected the browsing behaviour of millions of people on computers, tablets and phones, including in the Netherlands. This data was then resold to other companies through an Avast subsidiary for millions of euros. It included data about users’ health, locations visited, political affiliation, religious beliefs, sexual orientation and economic situation, linked to each specific user through unique user IDs. In a press release today, CUIC president Wilmar Hendriks put it as follows: “People thought they were safe with a virus scanner, but its very creator tracked everything they did on their computers. Avast sold this information to third parties for big money. They even advertised the goldmine of data they had captured. Companies like Avast should not be allowed to get away with this. That is why we are bringing this lawsuit. Those who won’t hear should feel.”

Fines

Back in March 2023, the Czech privacy regulator (UOOU) concluded that Avast had violated the GDPR and fined the company approximately €13.7 million. The US federal consumer authority, the Federal Trade Commission (FTC), also recently ordered Avast to pay $16.5 million in compensation to users, stop selling or making collected data available to third parties, delete the data already collected, and implement a comprehensive privacy programme.

The lawsuit CUIC filed against Avast today should lead to compensation for users in the Netherlands.

[…]

Source: Mass claim CUIC against virus scanner Avast launched – Privacy First

Age Verification Laws Drag Us Back to the Dark Ages of the Internet

The fundamental flaw with the age verification bills and laws passing rapidly across the country is the delusional, unfounded belief that putting hurdles between people and pornography is going to actually prevent them from viewing porn. What will happen, and is already happening, is that people–including minors–will go to unmoderated, actively harmful alternatives that don’t require handing over a government-issued ID to see people have sex. Meanwhile, performers and companies that are trying to do the right thing will suffer.

[…]

Source: Age Verification Laws Drag Us Back to the Dark Ages of the Internet

The legislators passing these bills are doing so under the guise of protecting children, but what’s actually happening is a widespread rewiring of the scaffolding of the internet. They ignore long-established legal precedent that has said for years that age verification is unconstitutional, eventually and inevitably reducing everything we can see online, short of impossible privacy hurdles and compromises, to that which is not “harmful to minors.” The people who live in these states, including the minors the law is allegedly trying to protect, are worse off because of it. So is the rest of the internet.
Yet new legislation is advancing in Kentucky and Nebraska, while the state of Kansas just passed a law which even requires age verification for viewing “acts of homosexuality,” according to a report. Websites can be fined up to $10,000 for each instance a minor accesses their content, and parents are allowed to sue for damages of at least $50,000. This means that the state can “require age verification to access LGBTQ content,” according to attorney Alejandra Caraballo, who said on Threads that “Kansas residents may soon need their state IDs” to access material that simply “depicts LGBTQ people.”
One newspaper opinion piece argues there’s an easier solution: don’t buy your children a smartphone. The piece continues: Or we could purchase any of the various software packages that block social media and obscene content from their devices. Or we could allow them to use social media, but limit their screen time. Or we could educate them about the issues that social media causes and simply trust them to make good choices. All of these options would have been denied to us if we lived in a state that passed a strict age verification law. Not only do age verification laws reduce parental freedom, but they also create myriad privacy risks. Requiring platforms to collect government IDs and face scans opens the door to potential exploitation by hackers and enemy governments. The very information intended to protect children could end up in the wrong hands, compromising the privacy and security of millions of users…

Ultimately, age verification laws are a misguided attempt to address the complex issue of underage social media use. Instead of placing undue burdens on users and limiting parental liberty, lawmakers should look for alternative strategies that respect privacy rights while promoting online safety.
This week a trade association for the adult entertainment industry announced plans to petition America’s Supreme Court to intervene.

Source: Slashdot

This is one of the many problems caused by an America that is suddenly so very afraid of sex, death and politics.

Project Ghostbusters: Facebook Accused of Using Your Phone to Wiretap Snapchat, Youtube, Amazon through Onavo VPN

Court filings unsealed last week allege Meta created an internal effort to spy on Snapchat in a secret initiative called “Project Ghostbusters.” Meta did so through Onavo, a Virtual Private Network (VPN) service the company offered between 2016 and 2019 that, ultimately, wasn’t private at all.

“Whenever someone asks a question about Snapchat, the answer is usually that because their traffic is encrypted we have no analytics about them,” said Mark Zuckerberg in an email to three Facebook executives in 2016, unsealed in Meta’s antitrust case on Saturday. “It seems important to figure out a new way to get reliable analytics about them… You should figure out how to do this.”

Thus, Project Ghostbusters was born: Meta’s in-house wiretapping effort to pull analytics data from Snapchat starting in 2016, later extended to YouTube and Amazon. According to the filings, this involved creating “kits” that could be installed on iOS and Android devices to intercept traffic for specific apps. It was described as a “man-in-the-middle” approach to getting data on Facebook’s rivals, with Onavo’s users as the unwitting “men in the middle.”
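The general shape of such a man-in-the-middle interception can be sketched with a toy model: because the on-device kit made the client trust the proxy, the client unknowingly established its encrypted session with the proxy rather than the real server, so the proxy could decrypt, log, and re-encrypt everything in transit. All names below (`Endpoint`, `MitmProxy`) and the XOR “cipher” standing in for TLS are illustrative assumptions, not Meta’s actual code:

```python
from dataclasses import dataclass, field


def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Toy symmetric cipher standing in for TLS record encryption."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))


@dataclass
class Endpoint:
    """A client or server that encrypts with a negotiated session key."""
    key: bytes

    def encrypt(self, msg: bytes) -> bytes:
        return xor_cipher(msg, self.key)

    def decrypt(self, blob: bytes) -> bytes:
        return xor_cipher(blob, self.key)


@dataclass
class MitmProxy:
    """Sits between client and server, holding a key for each leg.

    In the real scenario, the installed "kit" is what caused the client
    to negotiate its session with the proxy instead of the real server.
    """
    client_key: bytes
    server_key: bytes
    observed: list = field(default_factory=list)

    def relay(self, blob_from_client: bytes) -> bytes:
        plaintext = xor_cipher(blob_from_client, self.client_key)  # decrypt client leg
        self.observed.append(plaintext)                            # capture "analytics"
        return xor_cipher(plaintext, self.server_key)              # re-encrypt server leg


client = Endpoint(key=b"client-leg-session-key")
server = Endpoint(key=b"server-leg-session-key")
proxy = MitmProxy(client_key=client.key, server_key=server.key)

blob = client.encrypt(b"GET /snapchat/metrics")
server_view = server.decrypt(proxy.relay(blob))

# Both endpoints see only ciphertext on the wire, yet the proxy in the
# middle has the full plaintext in proxy.observed.
print(proxy.observed)
```

The point of the sketch is that “encrypted traffic” only protects against parties the client does not trust; once the device is made to trust the interceptor, the encryption terminates at the middlebox instead of the destination.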

[…]

A team of senior executives and roughly 41 lawyers worked on Project Ghostbusters, according to court filings. The group was heavily concerned with whether to continue the program in the face of press scrutiny. Facebook ultimately shut down Onavo in 2019 after Apple booted the VPN from its app store.

Prosecutors also allege that Facebook violated the United States Wiretap Act, which prohibits the intentional procurement of another person’s electronic communications.

[…]

Prosecutors allege Project Ghostbusters harmed competition in the ad industry, adding weight to their central argument that Meta is a monopoly in social media.

Source: Project Ghostbusters: Facebook Accused of Using Your Phone to Wiretap Snapchat

Who would have thought that a Facebook VPN was worthless? Oh, right: I have been reporting on this since 2018.