3 motion points allow you to be identified within seconds in VR

[…]

In a paper provided to The Register in advance of its publication on arXiv, academics Vivek Nair, Wenbo Guo, Justus Mattern, Rui Wang, James O’Brien, Louis Rosenberg, and Dawn Song set out to test the extent to which individuals in VR environments can be identified by body movement data.

The boffins gathered telemetry data from more than 55,000 people who played Beat Saber, a VR rhythm game in which players wave hand controllers to music. They then digested 3.96TB of data from the game leaderboard BeatLeader, consisting of 2,669,886 game replays from 55,541 users across 713,013 separate play sessions.

These Beat Saber Open Replay (BSOR) files contained metadata (devices and game settings), telemetry (measurements of the position and orientation of players’ hands, head, and so on), context info (type, location, and timing of in-game stimuli), and performance stats (responses to in-game stimuli).

From this, the researchers focused on the data derived from the head and hand movements of Beat Saber players. Just five minutes of those three data points proved enough to train a classification model that, given 100 seconds of motion data from the game, could uniquely identify the player 94 percent of the time. And with just 10 seconds of motion data, the classification model still managed accuracy of 73 percent.
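The article doesn’t describe the paper’s exact pipeline, but the general shape of such an attack is straightforward: slice the head-and-hand telemetry into short windows, summarize each window as a feature vector, and train a classifier with one class per user. Below is a hypothetical sketch in Python on synthetic data; the window size, feature set, and random-forest model are illustrative assumptions, not the authors’ actual method.

```python
# Hypothetical sketch of motion-based user identification, on synthetic data.
# The real study works on BSOR replays; here each "window" stands in for a few
# seconds of telemetry: 3 tracked points (head + hands) x 6 degrees of freedom.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_users, windows_per_user, frames, channels = 20, 30, 90, 18

# Stand-in data: each user gets a persistent motion "signature" plus noise.
signatures = rng.normal(size=(n_users, channels))
raw = signatures[:, None, None, :] + rng.normal(
    0, 1, size=(n_users, windows_per_user, frames, channels))
y = np.repeat(np.arange(n_users), windows_per_user)

def featurize(window):
    """Summarize one (frames x channels) window as per-channel statistics."""
    return np.concatenate([window.mean(0), window.std(0),
                           window.min(0), window.max(0)])

X = np.array([featurize(w) for user_windows in raw for w in user_windows])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(f"per-window identification accuracy: {clf.score(X_te, y_te):.1%}")
```

On real telemetry, aggregating many windows from the same session (for example by majority vote over window-level predictions) is plausibly what pushes identification rates up as observation time grows, consistent with the 10-second vs 100-second gap reported above.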

“The study demonstrates that over 55k ‘anonymous’ VR users can be de-anonymized back to the exact individual just by watching their head and hand movements for a few seconds,” said Vivek Nair, a UC Berkeley doctoral student and one of the authors of the paper, in an email to The Register.

“We have known for a long time that motion reveals information about people, but what this study newly shows is that movement patterns are so unique to an individual that they could serve as an identifying biometric, on par with facial or fingerprint recognition. This really changes how we think about the notion of ‘privacy’ in the metaverse, as just by moving around in VR, you might as well be broadcasting your face or fingerprints at all times!”

[…]

“There have been papers as early as the 1970s which showed that individuals can identify the motion of their friends,” said Nair. “A 2000 paper from Berkeley even showed that with motion capture data, you can recreate a model of a person’s entire skeleton.”

“What hasn’t been shown, until now, is that the motion of just three tracked points in VR (head and hands) is enough to identify users on a huge (and maybe even global) scale. It’s likely true that you can identify and profile users with even greater accuracy outside of VR when more tracked objects are available, such as with full-body tracking that some 3D cameras are able to do.”

[…]

Nair said he remains optimistic about the potential of systems like MetaGuard – a VR incognito mode project he and colleagues have been working on – to address privacy threats by altering VR in a privacy-preserving way rather than trying to prevent data collection.

The paper suggests similar data defense tactics: “We hope to see future works which intelligently corrupt VR replays to obscure identifiable properties without impeding their original purpose (e.g., scoring or cheating detection).”

One reason to prefer data alteration over data denial is that there may be VR applications (e.g., motion-based medical diagnostics) that justify further investment in the technology, as opposed to propping up pretend worlds just for the sake of privacy pillaging.

[…]

Source: How virtual reality telemetry is the next threat to privacy • The Register

Google wants Go reporting telemetry data by default

Russ Cox, a Google software engineer steering the development of the open source Go programming language, has presented a possible plan to implement telemetry in the Go toolchain.

However, many in the Go community object because the plan calls for telemetry by default.

These alarmed developers would prefer an opt-in rather than an opt-out regime, a position the Go team rejects because it would ensure low adoption and reduce the amount of telemetry data received to the point that it would be of little value.

Cox’s proposal summarized lengthier documentation in three blog posts.

Telemetry, as Cox describes it, involves the Go toolchain sending data to a server to provide information about which functions are being used and how the software is performing. He argues it is beneficial for open source projects to have that information to guide development.

“I believe that open-source software projects need to explore new telemetry designs that help developers get the information they need to work efficiently and effectively, without collecting invasive traces of detailed user activity,” he wrote.
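Cox’s blog posts describe a design built around counters rather than event traces: the toolchain bumps named counters in local files, and only coarse aggregate counts are ever uploaded, sampled from a subset of installations. As a rough, language-agnostic illustration of that idea (written here in Python with hypothetical counter names and file layout, not the proposed Go API):

```python
# Illustrative counter-based telemetry (hypothetical names/layout, not Go's API).
# Privacy property: increments touch only a local file; the weekly report holds
# totals per counter name, with no arguments, paths, timestamps, or traces.
import json
from collections import Counter
from pathlib import Path

COUNTER_FILE = Path("telemetry-counters.json")

def inc(name: str) -> None:
    """Bump a named counter on disk. No network I/O happens here."""
    counts = (Counter(json.loads(COUNTER_FILE.read_text()))
              if COUNTER_FILE.exists() else Counter())
    counts[name] += 1
    COUNTER_FILE.write_text(json.dumps(counts))

def weekly_report() -> dict:
    """The only data that would ever be uploaded: name -> total count."""
    return json.loads(COUNTER_FILE.read_text()) if COUNTER_FILE.exists() else {}

inc("build/cache-hit")   # hypothetical counter names
inc("build/cache-miss")
print(weekly_report())   # {"build/cache-hit": 1, "build/cache-miss": 1}
```

The opt-in vs opt-out fight is about the upload step, not the counting: the design’s privacy argument rests on the report containing nothing but those totals.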

[…]

Some people believe they have a right to privacy, to be left alone, and to demand that their rights are respected through opt-in consent.

As developer Louis Thibault put it, “The Go dev team seems not to have internalized the principle of affirmative consent in matters of data collection.”

Others, particularly in the ad industry, but in other endeavors as well, see opt-in as an existential threat. They believe that they have a right to gather data and that it’s better to seek forgiveness via opt-out than to ask for permission unlikely to be given via opt-in.

Source: Google’s Go may add telemetry reporting that’s on by default • The Register

Windows 11 Sends Tremendous Amount of User Data to Third Parties – pretty much spyware for loads of people!

Many programs collect user data and send it back to their developers to improve software or provide more targeted services. But according to the PC Security Channel (via Neowin), Microsoft’s Windows 11 sends data not only to the Redmond, Washington-based software giant, but also to multiple third parties.

To analyze DNS traffic generated by a freshly installed copy of Windows 11 on a brand-new notebook, the PC Security Channel used the Wireshark network protocol analyzer, which reveals precisely what is happening on a network. The results were astounding enough for the YouTube channel to call Microsoft’s Windows 11 “spyware.”

As it turned out, an all-new Windows 11 PC that was never used to browse the Internet contacted not only Windows Update, MSN and Bing servers, but also Steam, McAfee, geo.prod.do, and Comscore ScorecardResearch.com. Apparently, the latest operating system from Microsoft collected and sent telemetry data to various market research companies, advertising services, and the like.
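This kind of check is easy to reproduce. Wireshark is what the PC Security Channel used; as a scripted alternative, here is a small sketch using scapy (a third-party Python packet library) that prints each new hostname a machine resolves. It needs administrator/root privileges to capture packets, and the example hostname in the comment is just illustrative:

```python
# Log every distinct DNS query a machine makes (run with admin/root rights).
from scapy.all import sniff, DNS, DNSQR  # pip install scapy

seen = set()

def log_query(pkt):
    # qr == 0 means this packet is a DNS query rather than a response.
    if pkt.haslayer(DNSQR) and pkt[DNS].qr == 0:
        name = pkt[DNSQR].qname.decode(errors="replace").rstrip(".")
        if name not in seen:
            seen.add(name)
            print(name)  # e.g. something like scorecardresearch.com

# Keep only UDP port 53 traffic; runs until interrupted with Ctrl+C.
sniff(filter="udp port 53", prn=log_query, store=False)
```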

To prove the point, the PC Security Channel used the same tool to find out what Windows XP contacted after a fresh install; it turned out that the only servers the 20-plus-year-old operating system contacted were Windows Update and Microsoft Update.

“As with any modern operating system, users can expect to see data flowing to help them remain secure, up to date, and keep the system working as anticipated,” a Microsoft spokesperson told Tom’s Hardware. “We are committed to transparency and regularly publish information about the data we collect to empower customers to be more informed about their privacy.”

Some of the claims may be, technically, overblown. Telemetry data is mentioned in Windows’ terms of service, which many people skip over to use the operating system. And you can choose not to enable at least some of this by turning off settings the first time you boot into the OS.

“By accepting this agreement and using the software you agree that Microsoft may collect, use, and disclose the information as described in the Microsoft Privacy Statement (aka.ms/privacy), and as may be described in the user interface associated with the software features,” the terms of service read. The terms also point out that some data-sharing settings can be turned off.

Obviously, a lot has changed in 20 years and we now use more online services than back in the early 2000s. As a result, various telemetry data has to be sent online to keep certain features running. But at the very least, Microsoft should do a better job of expressly asking for consent and stating what will be sent and where, because you can’t opt out of all of the data-sharing “features.” The PC Security Channel warns that even when telemetry tracking is disabled by third-party utilities, Windows 11 still sends certain data.

Source: Windows 11 Sends Tremendous Amount of User Data to Third Parties, YouTuber Claims (Update) | Tom’s Hardware

Just when you thought Microsoft was the good guys again and it was all Google, Apple, Amazon, and Meta/Facebook being evil, they are back at it to prove they still have it!

Microsoft won’t access private data in Office version scan installed as OS update, they say

Microsoft wants everyone to know that it isn’t looking to invade their privacy while looking through their Windows PCs to find out-of-date versions of Office software.

In its KB5021751 update last month, Microsoft included a plan to scan Windows systems to smoke out those Office versions that are no longer supported or nearing the end of support. Those include Office 2007 (support ended in 2017), Office 2010 (2020), and the 2013 build (this coming April).

The company stressed that the scan would run only once and would not install anything on the user’s Windows system, adding that the update file is scanned to ensure it’s not infected by malware and is stored on highly secure servers to prevent unauthorized changes to it.

The update caused some discussion among users, at least enough to convince Microsoft to make another pitch that it is respecting user privacy and won’t access private data despite scanning their systems.

The update collects diagnostic and performance data so that it can determine the use of various versions of Office and how to best support and service them, the software maker wrote in an expanded note this week. The update will silently run once to collect the data and no files are left on the user’s systems once the scan is completed.

“This data is gathered from registry entries and APIs,” it wrote. “The update does not gather licensing details, customer content, or data about non-Microsoft products. Microsoft values, protects, and defends privacy.”

[…]

Source: Microsoft won’t access private data in Office version scan • The Register

Of course, just sending data about what version of Office is installed is in fact sending private data about stuff installed on your PC. This is Not OK.

FTC Fines GoodRx $1.5M for Sending Medication Data to Facebook, Google, others

The Federal Trade Commission took historic action against the medication discount service GoodRx Wednesday, issuing a $1.5 million fine against the company for sharing data about users’ prescriptions with Facebook, Google, and others. It’s a move that could usher in a new era of health privacy in the United States.

“Digital health companies and mobile apps should not cash in on consumers’ extremely sensitive and personally identifiable health information,” said Samuel Levine, director of the FTC’s Bureau of Consumer Protection, in a statement.

[…]

In addition to a fine, GoodRx has agreed to a first-of-its-kind provision banning the company from sharing health data with third parties for advertising purposes. That may sound unsurprising, but many consumers don’t realize that health privacy laws generally don’t apply to companies that aren’t affiliated with doctors or insurance companies.

[…]

GoodRx is a health technology company that gives out free coupons for discounts on common medications. The company also connects users with healthcare providers for telehealth visits. GoodRx also shared data about the prescriptions you’re buying and looking up with third-party advertising companies, which incurred the ire of the FTC.

GoodRx’s privacy problems were first uncovered by this reporter in an investigation with Consumer Reports, followed by a similar report in Gizmodo. At the time, if you looked up Viagra, Prozac, PrEP, or any other medication, GoodRx would tell Facebook, Google, and a variety of companies in the ad business, such as Criteo, Branch, and Twilio. GoodRx wasn’t selling the data. Instead, it shared the information so those companies could help GoodRx target its own customers with ads for more drugs.

[…]

Source: FTC Fines GoodRx $1.5M for Sending Medication Data to Facebook

Meta’s WhatsApp fined 5.5 mln euro by lead EU privacy regulator

Meta’s (META.O) WhatsApp subsidiary was fined 5.5 million euros ($5.95 million) on Thursday by Ireland’s Data Privacy Commissioner (DPC), its lead EU privacy regulator, for an additional breach of the bloc’s privacy laws.

The DPC also told WhatsApp to reassess how it uses personal data for service improvements following a similar order it issued this month to Meta’s other main platforms, Facebook and Instagram, which stated Meta must reassess the legal basis upon which it targets advertising through the use of personal data.

[…]

Source: Meta’s WhatsApp fined 5.5 mln euro by lead EU privacy regulator | Reuters

US law enforcement has warrantless access to many money transfers

Your international money transfers might not be as discreet as you think. Senator Ron Wyden and The Wall Street Journal have learned that US law enforcement can access details of money transfers without a warrant through an obscure surveillance program the Arizona attorney general’s office created in 2014. A database stored at a nonprofit, the Transaction Record Analysis Center (TRAC), provides full names and amounts for larger transfers (above $500) sent between the US, Mexico and 22 other regions through services like Western Union, MoneyGram and Viamericas. The program covers data for numerous Caribbean and Latin American countries in addition to Canada, China, France, Malaysia, Spain, Thailand, Ukraine and the US Virgin Islands. Some domestic transfers also enter the data set.

[…]

The concern, of course, is that officials can obtain sensitive transaction details without court oversight or customers’ knowledge. An unscrupulous officer could secretly track large transfers. Wyden adds that the people in the database are more likely to be immigrants, minorities and low-income residents who don’t have bank accounts and already have fewer privacy protections. The American Civil Liberties Union also asserts that the subpoenas used to obtain this data violate federal law. Arizona issued at least 140 of these subpoenas between 2014 and 2021.

[…]

Source: US law enforcement has warrantless access to many money transfers | Engadget

Meta sues surveillance company for allegedly scraping more than 600,000 accounts – pots and kettles

Meta has filed a lawsuit against Voyager Labs, which it has accused of creating tens of thousands of fake accounts to scrape data from more than 600,000 Facebook users’ profiles. It says the surveillance company pulled information such as posts, likes, friend lists, photos, and comments, along with other details from groups and pages. Meta claims that Voyager masked its activity using its Surveillance Software, and that the company has also scraped data from Instagram, Twitter, YouTube, LinkedIn and Telegram to sell and license for profit.

In the complaint, which was obtained by Gizmodo, Meta has asked a judge to permanently ban Voyager from Facebook and Instagram. “As a direct result of Defendant’s unlawful actions, Meta has suffered and continues to suffer irreparable harm for which there is no adequate remedy at law, and which will continue unless Defendant’s actions are enjoined,” the filing reads. Meta said Voyager’s actions have caused it “to incur damages, including investigative costs, in an amount to be proven at trial.”

Meta claims that Voyager scraped data from accounts belonging to “employees of non-profit organizations, universities, news media organizations, healthcare facilities, the armed forces of the United States, and local, state, and federal government agencies, as well as full-time parents, retirees, and union members.” The company noted in a blog post that it disabled accounts linked to Voyager and that it filed the suit to enforce its terms and policies.

[…]

In 2021, The Guardian reported that the Los Angeles Police Department had tested Voyager’s social media surveillance tools in 2019. The company is said to have told the department that police could use the software to track the accounts of a suspect’s friends on social media, and that the system could predict crimes before they took place by making assumptions about a person’s activity.

According to The Guardian, Voyager has suggested factors like Instagram usernames denoting Arab pride or tweeting about Islam could indicate someone is leaning toward extremism. Other companies, such as Palantir, have worked on predictive policing tech. Critics such as the Electronic Frontier Foundation claim that tech can’t predict crime and that algorithms merely perpetuate existing biases.

Data scraping is an issue that Meta has to take seriously. In 2021, it sued an individual for allegedly scraping data on more than 178 million users. Last November, the Irish Data Protection Commission fined the company €265 million ($277 million) for failing to stop bad actors from obtaining millions of people’s phone numbers and other data, which were published elsewhere online. The regulator said Meta failed to comply with GDPR data protection rules.

Source: Meta sues surveillance company for allegedly scraping more than 600,000 accounts | Engadget

Google will pay $9.5 million to settle Washington DC AG’s location-tracking lawsuit

Google has agreed to pay $9.5 million to settle a lawsuit brought by Washington DC Attorney General Karl Racine, who accused the company earlier this year of “deceiving users and invading their privacy.” Google has also agreed to change some of its practices, primarily concerning how it informs users about collecting, storing and using their location data.

“Google leads consumers to believe that consumers are in control of whether Google collects and retains information about their location and how that information is used,” the complaint, which Racine filed in January, read. “In reality, consumers who use Google products cannot prevent Google from collecting, storing and profiting from their location.”

Racine’s office also accused Google of employing “dark patterns,” which are design choices intended to deceive users into carrying out actions that don’t benefit them. Specifically, the AG’s office claimed that Google repeatedly prompted users to switch on location tracking in certain apps and informed them that certain features wouldn’t work properly if location tracking wasn’t on. Racine and his team found that location data wasn’t even needed for the app in question. They asserted that Google made it “impossible for users to opt out of having their location tracked.”

 

The $9.5 million payment is a paltry one for Google. Last quarter, it took parent company Alphabet under 20 minutes to make that much in revenue. The changes that the company will make to its practices as part of the settlement may have a bigger impact.

Folks who currently have certain location settings on will receive notifications telling them how they can disable each setting, delete the associated data and limit how long Google can keep that information. Users who set up a new Google account will be informed which location-related account settings are on by default and offered the chance to opt out.

Google will need to maintain a webpage that details its location data practices and policies. This will include ways for users to access their location settings and details about how each setting impacts Google’s collection, retention or use of location data.

Moreover, Google will be prevented from sharing a person’s precise location data with a third-party advertiser without the user’s explicit consent. The company will need to delete location data “that came from a device or from an IP address in web and app activity within 30 days” of obtaining the information.

[…]

Source: Google will pay $9.5 million to settle Washington DC AG’s location-tracking lawsuit | Engadget

Spy Tech Palantir’s Covid-era UK health contract extended without public consultation or competition

NHS England has extended its contract with US spy-tech biz Palantir for the system built at the height of the pandemic to give it time to resolve the twice-delayed procurement of a data platform to support health service reorganization and tackle the massive care backlog.

The contract has already been subject to the threat of a judicial review, after which NHS England – a non-departmental government body – agreed to three concessions, including the promise of public consultation before extending the contract.

Campaigners and legal groups are set to mount legal challenges around separate, but related, NHS dealings with Palantir.

In a notice published yesterday, NHS England said the contract would be extended until September 2023 in a deal worth £11.5 million ($13.8 million).

NHS England has been conducting a £360 million ($435 million) procurement of a separate, but linked, Federated Data Platform (FDP), a deal said to be a “must-win” for Palantir, a US data management company which cut its teeth working for the CIA and controversial US immigration agency ICE.

The contract notice for FDP, which kicks off the official competition, was originally expected in June 2022 but was delayed until September 2022, when NHS England told The Register it would be published. The notice has yet to appear.

[…]

Source: Palantir’s Covid-era UK health contract extended • The Register

Apple Faces French $8.5M Fine For Illegal Data Harvesting

France’s data protection authority, CNIL, fined Apple €8 million (about $8.5 million) Wednesday for illegally harvesting iPhone owners’ data for targeted ads without proper consent.

[…]

The French fine, though, is the latest addition to a growing body of evidence that Apple may not be the privacy guardian angel it makes itself out to be.

[…]

Apple failed to “obtain the consent of French iPhone users (iOS 14.6 version) before depositing and/or writing identifiers used for advertising purposes on their terminals,” the CNIL said in a statement. The CNIL’s fine calls out the search ads in Apple’s App Store, specifically. A French court fined the company over $1 million in December over its commercial practices related to the App Store.

[…]

Eight million euros is peanuts for a company that makes billions a year on advertising alone and is so inconceivably wealthy that it had enough money to lose $1 trillion in market value last year—making Apple the second company in history to do so. The fine could have been higher but for the fact that Apple’s European headquarters are in Ireland, not France, giving the CNIL a smaller target to go after.

Still, it’s a signal that Apple may face a less friendly regulatory future in Europe. Commercial authorities are investigating Apple for anti-competitive business practices, and are even forcing the company to abandon its proprietary charging cable in favor of USB-C ports.

Source: Apple Faces Rare $8.5M Fine For Illegal Data Harvesting

John Deere signs right to repair agreement

As farming has become more technology-driven, Deere has increasingly injected software into its products with all of its tractors and harvesters now including an autopilot feature as standard.

There is also the John Deere Operations Center, which “instantly captures vital operational data to boost transparency and increase productivity for your business.”

Within a matter of years, the company envisages having 1.5 million machines and half a billion acres of land connected to the cloud service, which will “collect and store crop data, including millions of images of weeds that can be targeted by herbicide.”

Deere also estimates that software fees will make up 10 percent of the company’s revenues by the end of the decade, with Bernstein analysts pegging the average gross margin for farming software at 85 percent, compared to 25 percent for equipment sales.

Just like other commercial software vendors, however, Deere exercises close control and restricts what can be done with its products. This led farm labor advocacy groups to file a complaint to the US Federal Trade Commission last year, claiming that Deere unlawfully refused to provide the software and technical data necessary to repair its machinery.

“Deere is the dominant force in the $68 billion US agricultural equipment market, controlling over 50 per cent of the market for large tractors and combines,” said Fairmark Partners, the groups’ attorneys, in a preface to the complaint [PDF].

“For many farmers and ranchers, they effectively have no choice but to purchase their equipment from Deere. Not satisfied with dominating just the market for equipment, Deere has sought to leverage its power in that market to monopolize the market for repairs of that equipment, to the detriment of farmers, ranchers, and independent repair providers.”

[…]

The MoU, which can be read here [PDF], was signed yesterday at the 2023 AFBF Convention in San Juan, Puerto Rico, and seems to be a commitment by Deere to improve farmers’ access and choice when it comes to repairs.

[…]

Duvall said on a podcast about the matter that the MoU is the result of several years’ work. “As you use equipment, we all know at some point in time, there’s going to be problems with it. And we did have problems with having the opportunity to repair our equipment where we wanted to, or even repair it on the farm,” he added.

“It ensures that our farmers can repair their equipment and have access to the diagnostic tools and product guides so that they can find the problems and find solutions for them. And this is the beginning of a process that we think is going to be real healthy for our farmers and for the company because what it does is it sets up an opportunity for our farmers to really work with John Deere on a personal basis.”

[…]

Source: John Deere signs right to repair agreement • The Register

But… still gives John Deere access to their data for free?

This may also have something to do with the security of John Deere machines being so incredibly piss poor, mainly due to really bad update hygiene.

Epic Forced To Pay $520 Million Fine over Fortnite Privacy and Dark Patterns

Fortnite-maker Epic Games has agreed to pay a massive $520 million fine in settlements with the Federal Trade Commission for allegedly illegally gathering data from children and deploying dark patterns techniques to manipulate users into making unwanted in-game purchases. The fines mark a major regulatory win for the Biden administration’s progressive-minded FTC, which, up until now, had largely failed to deliver on its promise of more robust enforcement against U.S. tech companies.

The first $275 million fine will settle allegations Epic collected personal information from children under the age of 13 without their parents’ consent when they played the hugely popular battle royale game. The FTC claims that unjustified data collection violates the Children’s Online Privacy Protection Act. Internal Epic surveys and the licensing of Fortnite-branded toys, the FTC alleges, show Epic clearly knew at least some of its player base was underage. Worse still, the agency claims Epic forced parents to wade through cumbersome barriers when they requested to have their children’s data deleted.

[…]

The game-maker additionally agreed to pay $245 million to refund customers who the FTC says fell victim to manipulative, unfair billing practices that fall under the category of “dark patterns.” Fortnite allegedly deployed a “counterintuitive, inconsistent, and confusing button configuration” that led players to incur unwanted charges with a single press of a button. In some cases, the FTC claims, that single button press meant users were charged while sitting in a loading screen or while trying to wake the game from sleep mode. Users, the complaint alleges, collectively lost hundreds of millions of dollars to those shady practices. Epic allegedly “ignored more than one million user complaints,” suggesting a high number of users were being wrongly charged.

[…]

And though the FTC’s latest fine is a far cry from the $5 billion penalty the agency issued against Facebook in 2019 and represents just a portion of the billions Fortnite reportedly rakes in each year, supporters said it nonetheless represents more than a mere slap on the wrist.

[…]

Source: Epic Forced To Pay Record-Breaking $520 Million Fine

China’s Setting the Standard for Deepfake Regulation

[…]

On January 10, according to The South China Morning Post, China’s Cyberspace Administration will implement new rules that are intended to protect people from having their voice or image digitally impersonated without their consent. The regulators refer to platforms and services using the technology to edit a person’s voice or image as “deep synthesis providers.”

Those deep synthesis technologies could include the use of deep learning algorithms and augmented reality to generate text, audio, images or video. We’ve already seen numerous instances over the years of these technologies used to impersonate high profile individuals, ranging from celebrities and tech executives to political figures.

Under the new guidelines, companies and technologists who use the technology must first contact individuals and obtain their consent before editing their voice or image. The rules, officially called The Administrative Provisions on Deep Synthesis for Internet Information Services, come in response to governmental concerns that advances in AI tech could be used by bad actors to run scams or defame people by impersonating their identity. In presenting the guidelines, the regulators also acknowledge areas where these technologies could prove useful. Rather than impose a wholesale ban, the regulator says it would actually promote the tech’s legal use and “provide powerful legal protection to ensure and facilitate” its development.

But, like many of China’s proposed tech policies, political considerations are inseparable. According to the South China Morning Post, news stories reposted using the technology must come from a government approved list of news outlets. Similarly, the rules require all so-called deep synthesis providers adhere to local laws and maintain “correct political direction and correct public opinion orientation.” Correct here, of course, is determined unilaterally by the state.

Though certain U.S. states like New Jersey and Illinois have introduced local privacy legislation that addresses deepfakes, the lack of any meaningful federal privacy law limits regulators’ ability to address the tech on a national level. In the private sector, major U.S. platforms like Facebook and Twitter have created new systems meant to detect and flag deepfakes, though they are constantly trying to stay one step ahead of bad actors continually looking for ways to evade those filters.

If China’s new rules are successful, they could lay down a policy framework other nations could build upon and adapt. It wouldn’t be the first time China’s led the pack on strict tech reform. Last year, China introduced sweeping new data privacy laws that radically limited the ways private companies could collect an individual’s personal identity. Those rules were built off of Europe’s General Data Protection Regulation.

[…]

That all sounds great, but China’s privacy laws have one glaring loophole tucked within them. Though the law protects people from private companies feeding off their data, it does almost nothing to prevent those same harms being carried out by the government. Similarly, with deepfakes, it’s unclear how the newly proposed regulations would, for instance, prohibit a state-run agency from doctoring or manipulating certain text or audio to influence the narrative around controversial or sensitive political events.

Source: China’s Setting the Standard for Deepfake Regulation

China is also the one setting the bar for anti-monopolistic practices, the EU and US have been caught with their fingers in the jam jar and their pants down.

Telegram is auctioning phone numbers to let users sign up to the service without any SIM

After putting unique usernames up for auction on the TON blockchain, Telegram is now putting anonymous numbers up for bidding. These numbers can be used to sign up for Telegram without needing any SIM card.

Just like the username auction, you can buy these virtual numbers on Fragment, which is a site specially created for Telegram-related auctions. To buy a number, you will have to link your TON wallet (Tonkeeper) to the website.

You can buy a random number for as low as 9 toncoins, which is equivalent to roughly $16.50 at the time of writing. Some of the premium virtual numbers — such as +888-8-888 — are selling for 31,500 toncoins (~$58,200).

Notably, you can only use this number to sign up for Telegram. You can’t use it to receive SMS or calls or use it to register for another service.

For Telegram, this is another way of asking its most loyal supporters to help the app make some money. The company launched its premium subscription plan earlier this year. On Tuesday, the chat app’s founder Pavel Durov said that Telegram has more than 1 million paid users just a few months after the launch of its premium features. While Telegram offers features like cross-device sync and large groups, it’s important to remember that chats are not protected by end-to-end encryption by default.

As for folks who want anonymization, Telegram already lets you hide your phone number. Alternatively, there are tons of virtual phone number services out there — including Google Voice, Hushed, and India-based Doosra — that allow you to receive calls and SMS as well.

Source: Telegram is auctioning phone numbers to let users sign up to the service without any SIM

Grad Students Analyze, Hack, and Remove Under-Desk Surveillance Devices Designed to Track Them – at a privacy institute!

[…]

graduate students at Northeastern University were able to organize and beat back an attempt at introducing invasive surveillance devices that were quietly placed under desks at their school.

Early in October, Senior Vice Provost David Luzzi installed motion sensors under all the desks at the school’s Interdisciplinary Science & Engineering Complex (ISEC), a facility used by graduate students and home to the “Cybersecurity and Privacy Institute” which studies surveillance. These sensors were installed at night—without student knowledge or consent—and when pressed for an explanation, students were told this was part of a study on “desk usage,” according to a blog post by Max von Hippel, a Privacy Institute PhD candidate who wrote about the situation for the Tech Workers Coalition’s newsletter.

[…]

In response, students began to raise concerns about the sensors, and an email was sent out by Luzzi attempting to address issues raised by students.

[…]

“The results will be used to develop best practices for assigning desks and seating within ISEC (and EXP in due course).”

To that end, Luzzi wrote, the university had deployed “a Spaceti occupancy monitoring system” that would use heat sensors at groin level to “aggregate data by subzones to generate when a desk is occupied or not.” Luzzi added that the data would be anonymized, aggregated to look at “themes” and not individual time at assigned desks, not be used in evaluations, and not shared with any supervisors of the students. Following that email, an impromptu listening session was held in the ISEC.

At this first listening session, Luzzi asked that grad student attendees “trust the university since you trust them to give you a degree.” Luzzi also maintained that “we are not doing any science here” as another defense of the decision not to seek IRB approval.

“He just showed up. We’re all working, we have paper deadlines and all sorts of work to do. So he didn’t tell us he was coming, showed up demanding an audience, and a bunch of students spoke with him,”

[…]

After that, the students at the Privacy Institute, which specialize in studying surveillance and reversing its harm, started removing the sensors, hacking into them, and working on an open source guide so other students could do the same. Luzzi had claimed the devices were secure and the data encrypted, but Privacy Institute students learned they were relatively insecure and unencrypted.

[…]

After hacking the devices, students wrote an open letter to Luzzi and university president Joseph E. Aoun asking for the sensors to be removed because they were intimidating, part of a poorly conceived study, and deployed without IRB approval even though human subjects were at the center of the so-called study.

“Resident in ISEC is the Cybersecurity and Privacy Institute, one of the world’s leading groups studying privacy and tracking, with a particular focus on IoT devices,” the letter reads. “To deploy an under-desk tracking system to the very researchers who regularly expose the perils of these technologies is, at best, an extremely poor look for a university that routinely touts these researchers’ accomplishments.

[…]

Another listening session followed, this time for professors only, and where Luzzi claimed the devices were not subject to IRB approval because “they don’t sense humans in particular – they sense any heat source.” More sensors were removed afterwards and put into a “public art piece” in the building lobby spelling out NO!

[…]

Afterwards, von Hippel took to Twitter and shared what became a semi-viral thread documenting the entire timeline of events, from the secret installation of the sensors to the listening session occurring that day. Hours later, the sensors were removed.

[…]

This was a particularly instructive episode because it shows that surveillance need not be permanent—that it can be rooted out by the people affected by it, together.

[…]

“The most powerful tool at the disposal of graduate students is the ability to strike. Fundamentally, the university runs on graduate students.

[…]

“The computer science department was able to organize quickly because almost everybody is a union member, has signed a card, and are all networked together via the union. As soon as this happened, we communicated over union channels.

[…]

This sort of rapid response is key, especially as more and more systems adopt sensors for increasingly spurious or concerning reasons. Sensors have been rolled out at other universities like Carnegie Mellon University, as well as public school systems. They’ve seen use in more militarized and carceral settings such as the US-Mexico border or within America’s prison system.

These rollouts are part of what Cory Doctorow calls the “shitty technology adoption curve,” whereby horrible, unethical and immoral technologies are normalized and rationalized by being deployed on vulnerable populations for constantly shifting reasons. You start with people whose concerns can be ignored—migrants, prisoners, homeless populations—then scale it upwards—children in school, contractors, un-unionized workers. By the time it gets to people whose concerns and objections would be the loudest and most integral to its rejection, the technology has already been widely deployed.

[…]

Source: ‘NO’: Grad Students Analyze, Hack, and Remove Under-Desk Surveillance Devices Designed to Track Them

As US, UK Embrace ‘Age Verify Everyone!’ French Data Protection Agency Says Age Verification Is Unreliable And Violates Privacy Rights

[…]

We’ve already spent many, many words explaining how age verification technology is inherently dangerous and actually puts children at greater risk. Not to mention it’s a privacy nightmare that normalizes the idea of mass surveillance, especially for children.

But, why take our word for it?

The French data protection agency, CNIL, has declared that no age verification technology in existence can be deemed safe and not dangerous to privacy rights.

Now, there are many things that I disagree with CNIL about, especially its view that the censorial EU “right to be forgotten” should be applied globally. But one thing we likely agree on is that CNIL does not fuck around when it comes to data protection stuff. CNIL is generally seen as the most aggressive and most thorough in its data protection/data privacy work. Being on the wrong side of CNIL is a dangerous place for any company to be.

So I’d take it seriously when CNIL effectively notes that all age verification is a privacy nightmare, especially for children:

The CNIL has analysed several existing solutions for online age verification, checking whether they have the following properties: sufficiently reliable verification, complete coverage of the population and respect for the protection of individuals’ data and privacy and their security.

The CNIL finds that there is currently no solution that satisfactorily meets these three requirements.

Basically, CNIL found that all existing age verification techniques are unreliable, easily bypassed, and are horrible regarding privacy.

Despite this, CNIL seems oddly optimistic that just by nerding harder, perhaps future solutions will magically work. However, it does go through the weaknesses and problems of the various offerings being pushed today as solutions. For example, you may recall that when I called out the dangers of age verification in California’s Age Appropriate Design Code, a trade group representing age verification companies reached out to me to let me know there was nothing to worry about, because they’d just scan everyone’s faces to visit websites. CNIL points out some, um, issues with this:

The use of such systems, because of their intrusive aspect (access to the camera on the user’s device during an initial enrolment with a third party, or a one-off verification by the same third party, which may be the source of blackmail via the webcam when accessing a pornographic site is requested), as well as because of the margin of error inherent in any statistical evaluation, should imperatively be conditional upon compliance with operating, reliability and performance standards. Such requirements should be independently verified.

This type of method must also be implemented by a trusted third party respecting precise specifications, particularly concerning access to pornographic sites. Thus, an age estimate performed locally on the user’s terminal should be preferred in order to minimise the risk of data leakage. In the absence of such a framework, this method should not be deployed.

Every other verification technique seems to raise similar questions about effectiveness and how protective (or, well, how not protective) it is of privacy rights.

So… why isn’t this raising alarm bells among the various legislatures and children’s advocates (many of whom also claim to be privacy advocates) who are pushing for these laws?

Source: As US, UK Embrace ‘Age Verify Everyone!’ French Data Protection Agency Says Age Verification Is Unreliable And Violates Privacy Rights | Techdirt

Telegram shares users’ data in copyright violation lawsuit to Indian court

Telegram has disclosed the names of administrators, their phone numbers, and the IP addresses of channels accused of copyright infringement, in compliance with a court order in India, a remarkable illustration of the data the instant messaging platform stores on its users and can be made by authorities to disclose.

The app operator was forced by a Delhi High Court order to share the data after a teacher sued the firm for not doing enough to prevent unauthorised distribution of her course material on the platform. Neetu Singh, the plaintiff teacher, said a number of Telegram channels were re-selling her study materials at discounted prices without permission.

An Indian court earlier had ordered Telegram to adhere to the Indian law and disclose details about those operating such channels.

Telegram unsuccessfully argued that disclosing user information would violate the privacy policy and the laws of Singapore, where it has located its physical servers for storing users’ data. In response, the Indian court said the copyright owners couldn’t be left “completely remediless against the actual infringers” because Telegram has chosen to locate its servers outside the country.

In an order last week, Justice Prathiba Singh said Telegram had complied with the earlier order and shared the data.

“Let copy of the said data be supplied to Ld. Counsel for plaintiffs with the clear direction that neither the plaintiffs nor their counsel shall disclose the said data to any third party, except for the purposes of the present proceedings. To this end, disclosure to the governmental authorities/police is permissible,” said the court in its order (PDF), first reported by LiveLaw.

[…]

Source: Telegram shares users’ data in copyright violation lawsuit | TechCrunch

Eufy Cameras Have Been Uploading Unencrypted Face Footage to Cloud

Eufy, the company behind a series of affordable security cameras I’ve previously suggested over the expensive stuff, is currently in a bit of hot water for its security practices. The company, owned by Anker, purports its products to be among the few security devices that allow for locally stored media and don’t need a cloud account to work efficiently. But over the turkey-eating holiday, a noted security researcher across the pond discovered a security hole in Eufy’s mobile app that threatens that whole premise.

Paul Moore relayed the issue in a tweeted screengrab. Moore had purchased the Eufy Doorbell Dual Camera for its promise of a local storage option, only to discover that the doorbell’s cameras had been storing thumbnails of faces on the cloud, along with identifiable user information, despite Moore not even having a Eufy Cloud Storage account.

After Moore tweeted the findings, another user found that the data uploaded to Eufy wasn’t even encrypted. Any uploaded clips could be easily played back on any desktop media player, which Moore later demonstrated. What’s more: thumbnails and clips were linked to their partner cameras, offering additional identifiable information to any digital snoopers sniffing around.
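The “playable in any media player” claim boils down to a trivial test: fetch a stored clip or thumbnail URL and look at its first bytes. Encrypted blobs show no recognizable signature; these files reportedly began with standard media headers. A minimal sketch, with a placeholder URL:

```python
# Check whether fetched media is plaintext by inspecting its magic bytes.
import urllib.request

def identify(url: str) -> str:
    head = urllib.request.urlopen(url).read(12)
    if head.startswith(b"\xff\xd8\xff"):
        return "JPEG image (unencrypted)"
    if head[4:8] == b"ftyp":  # ISO media box signature at offset 4
        return "MP4 video (unencrypted)"
    return "no known signature (possibly encrypted, or another format)"

# Placeholder URL; a real test would use a URL captured from the app's traffic.
print(identify("https://example.com/eufy-thumbnail.jpg"))
```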

Android Central was able to recreate the issue on its own with a EufyCam 3. It then reached out to Eufy, which explained to the site why this issue was cropping up. If you choose to have a motion notification pushed out with an attached thumbnail, Eufy temporarily uploads that file to its AWS servers to send it out.

[…]

Unfortunately, this isn’t the first time Eufy has had an issue regarding security on its cameras. Last year, the company faced similar reports of “unwarranted access” to random camera feeds, though the company quickly fixed the issue once it was discovered. Eufy is no stranger to patching things up.

Source: Eufy Cameras Have Been Uploading Unencrypted Footage to Cloud

Why first upload these images to AWS instead of directly mailing them?!

Google Settles 40 States’ Location Data Suit for only $392 Million

Google agreed to a $391.5 million settlement on Monday to end a lawsuit accusing the tech giant of tricking users with location data privacy settings that didn’t actually turn off data collection. The payout, the result of a suit brought by 40 state attorneys general, marks one of the biggest privacy settlements in history. Google also promised to make additional changes to clarify its location tracking practices next year.

“For years Google has prioritized profit over their users’ privacy,” said Ellen Rosenblum, Oregon’s attorney general, who co-led the case, in a press release. “They have been crafty and deceptive. Consumers thought they had turned off their location tracking features on Google, but the company continued to secretly record their movements and used that information for advertisers.”

[…]

The attorneys’ investigation into Google and subsequent lawsuit came after a 2018 report that found Google’s Location History setting didn’t stop the company’s location tracking, even though the setting promised that “with Location History off, the places you go are no longer stored.” Google quickly updated the description of its settings, clarifying that you actually have to turn off a completely different setting called Web & App Activity if you want the company to stop following you around.

[…]

Despite waves of legal and media attention, Google’s location settings are still confusing, according to experts in interface design. The fine print makes it clear that you need to change multiple settings if you don’t want Google collecting data about everywhere you go, but you have to read carefully. It remains to be seen how clearly the changes the company promised in the settlement will communicate its data practices.

[…]

 

Source: Google Settles 40 States’ Location Data Suit for $392 Million

Apple Apps Track You Even With Privacy Protections on – and they hoover a LOT

For all of Apple’s talk about how private your iPhone is, the company vacuums up a lot of data about you. iPhones do have a privacy setting that is supposed to turn off that tracking. According to a new report by independent researchers, though, Apple collects extremely detailed information on you with its own apps even when you turn off tracking, an apparent direct contradiction of Apple’s own description of how the privacy protection works.

The iPhone Analytics setting makes an explicit promise. Turn it off, and Apple says that it will “disable the sharing of Device Analytics altogether.” However, Tommy Mysk and Talal Haj Bakry, two app developers and security researchers at the software company Mysk, took a look at the data collected by a number of Apple iPhone apps—the App Store, Apple Music, Apple TV, Books, and Stocks. They found the analytics control and other privacy settings had no obvious effect on Apple’s data collection—the tracking remained the same whether iPhone Analytics was switched on or off.

[…]

The App Store appeared to harvest information about every single thing you did in real time, including what you tapped on, which apps you searched for, what ads you saw, and how long you looked at a given app and how you found it. The app sent details about you and your device as well, including ID numbers, what kind of phone you’re using, your screen resolution, your keyboard languages, how you’re connected to the internet—notably, the kind of information commonly used for device fingerprinting.

“Opting-out or switching the personalization options off did not reduce the amount of detailed analytics that the app was sending,” Mysk said. “I switched all the possible options off, namely personalized ads, personalized recommendations, and sharing usage data and analytics.”
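Observing this kind of traffic typically means routing the device through an intercepting proxy. Below is a hedged sketch of a mitmproxy addon that logs requests to an assumed Apple analytics host; the hostname is an assumption drawn from the researchers’ published captures, not a documented endpoint list, and on a stock iOS device certificate pinning can block interception without extra steps:

```python
# addon.py - log requests to an assumed analytics endpoint via mitmproxy.
# Run with: mitmproxy -s addon.py  (and point the test device's proxy here).
from mitmproxy import http

ANALYTICS_HOSTS = {"xp.apple.com"}  # assumption based on published captures

def request(flow: http.HTTPFlow) -> None:
    if flow.request.pretty_host in ANALYTICS_HOSTS:
        print(flow.request.method, flow.request.pretty_url)
        if flow.request.content:
            # Print only the payload size; the researchers' captures showed
            # usage events and device details in bodies like these.
            print(f"  payload: {len(flow.request.content)} bytes")
```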

[…]

Most of the apps that sent analytics data shared consistent ID numbers, which would allow Apple to track your activity across its services, the researchers found.

[…]

In the App Store, for example, the fact that you’re looking at apps related to mental health, addiction, sexual orientation, and religion can reveal things that you might not want sent to corporate servers.

It’s impossible to know what Apple is doing with the data without the company’s own explanation, and as is so often the case, Apple has been silent so far.

[…]

You can see what the data looks like for yourself in the video Mysk posted to Twitter, documenting the information collected by the App Store:

The App Store on your iPhone is watching your every move

This isn’t an every-app-is-tracking-me-so-what’s-one-more situation. These findings are out of line with standard industry practices, Mysk says. He and his research partner ran similar tests in the past looking at analytics in Google Chrome and Microsoft Edge. In both of those apps, Mysk says the data isn’t sent when analytics settings are turned off.

[…]

Source: Apple Apps Track You Even With Privacy Protections on: Report

Senator Wyden Asks State Dept. To Explain Why It’s Handing Out ‘Unfettered’ Access To Americans’ Passport Data

[…]

In 2018, a blockbuster report detailed the actions of CBP agent Jeffrey Rambo. Rambo apparently took it upon himself to track down whistleblowers and leakers. To do this, he cozied up to a journalist and leveraged the wealth of data on travelers collected by federal agencies in hopes of sniffing out sources.

A few years later, another report delved deeper into the CBP and Rambo’s actions. This reporting — referencing a still-redacted DHS Inspector General’s report — showed the CBP routinely tracked journalists (as well as activists and immigration lawyers) via a national counter-terrorism database. This database was apparently routinely queried for reasons unrelated to national security objectives and the information obtained was used to open investigations targeting journalists.

That report remains redacted nearly a year later. But Senator Ron Wyden is demanding answers from the State Department about its far too cozy relationship with other federal agencies, including the CBP.

The State Department is giving law enforcement and intelligence agencies unrestricted access to the personal data of more than 145 million Americans, through information from passport applications that is shared without legal process or any apparent oversight, according to a letter sent from Sen. Ron Wyden to Secretary of State Antony Blinken and obtained by Yahoo News.

The information was uncovered by Wyden during his ongoing probe into reporting by Yahoo News about Operation Whistle Pig, a wide-ranging leak investigation launched by a Border Patrol agent and his supervisors at the U.S. Customs and Border Protection’s National Targeting Center.

On Wednesday, Wyden sent a letter to Blinken requesting detailed information on which federal agencies are provided access to State Department passport information on U.S. citizens.

The letter [PDF] from Wyden points out that the State Department is giving “unfettered” access to at least 25 federal agencies, including DHS components like the CBP. The OIG report into “Operation Whistle Pig” (the one that remains redacted) details Agent Rambo’s actions. Subsequent briefings by State Department officials provided more details that are cited in Wyden’s letter.

More than 25 agencies, but the State Department has so far refused to identify them.

Department officials declined to identify the specific agencies, but said that both law enforcement and intelligence agencies can access the [passport application] database. They further stated that, while the Department is not legally required to provide other agencies with such access, the Department has done so without requiring these other agencies to obtain compulsory legal process, such as a subpoena or court order.

Sharing is caring, the State Department believes. However, it cannot explain why it feels this passport application database should be an open book to whatever government agencies seek access to it. This is unacceptable, says Senator Wyden. Citing the “clear abuses” by CBP personnel detailed in the Inspector General’s report, Wyden is demanding details the State Department has so far refused to provide, like which agencies have access and the number of times these agencies have accessed the Department’s database.

Why? Because rights matter, no matter what the State Department and its beneficiaries might think.

The Department’s mission does not include providing dozens of other government agencies with self-service access to 145 million Americans’ personal data. The Department has voluntarily taken on this role, and in doing so, prioritized the interests of other agencies over those of law-abiding Americans.

That’s the anger on behalf of millions expressed by Senator Wyden. There are also demands. Wyden not only wants answers, he wants changes. He has instructed the State Department to put policies in place to ensure the abuses seen in “Operation Whistle Pig” do not reoccur. He also says the Department should notify Americans when their passport application info is accessed or handed over to government agencies. Finally, he instructs the Department to provide annual statistics on outside agency access to the database, so Americans can better understand who’s going after their data.

So, answers and changes, things federal agencies rarely enjoy engaging with. The answers are likely to be long in coming. The requested changes, even more so. But at least this drags the State Department’s dirty laundry out into the daylight, which makes it a bit more difficult for the Department to continue to ignore a problem it hasn’t addressed for more than three years.

Source: Senator Wyden Asks State Dept. To Explain Why It’s Handing Out ‘Unfettered’ Access To Americans’ Passport Data | Techdirt

Dutch foundation launches mass privacy claim against Twitter – DutchNews.nl

A Dutch foundation is planning to take legal action against social media platform Twitter for illegally collecting and trading in personal details gathered via free apps such as Duolingo and Wordfeud, as well as dating apps and weather forecaster Buienradar. Twitter owned the advertising platform MoPub between 2013 and January 2022, and that is where the problem lies, the SDBN foundation says. It estimates that 11 million people’s information may have been illegally gathered and sold. Between 2013 and 2021, MoPub had access to information gleaned via 30,000 free apps on smartphones and tablets, the foundation says. In essence, the foundation says, consumers ‘paid with their privacy’ without giving permission.

The foundation is demanding compensation on behalf of the apps’ users and if Twitter refuses to pay, the foundation will start a legal case against the company.

Source: Dutch foundation launches mass privacy claim against Twitter – DutchNews.nl

Shazam – an Apple company – was also busy with this. It’s pretty disturbing that this kind of news isn’t a surprise at all any more.

But who is SDBN to collect for Dutch people? I don’t recall them starting up a class action for people to subscribe to and I doubt they will be dividing the money out to the Dutch people either.

Greece To Ban Sale of Spyware After Government Is Accused of Surveillance of opposition party leader

Prime Minister Kyriakos Mitsotakis has announced that Greece would ban the sale of spyware, after his government was accused in a news report of targeting dozens of prominent politicians, journalists and businessmen for surveillance, and the judicial authorities began an investigation. From a report: The announcement is the latest chapter in a scandal that erupted over the summer, when Mr. Mitsotakis conceded that Greece’s state intelligence service had been monitoring an opposition party leader with a traditional wiretap last year. That revelation came after the politician discovered that he had also been targeted with a spyware program known as Predator.

The Greek government said the wiretap was legal but never specified the reasons for it, and Mr. Mitsotakis said it was done without his knowledge. The government has also asserted that it does not own or use the Predator spyware, and has insisted that the simultaneous targeting with a wiretap and Predator was a coincidence.

Source: Greece To Ban Sale of Spyware After Government Is Accused of Surveillance – Slashdot

Texas sues Google for allegedly capturing biometric data of millions without consent

Texas has filed a lawsuit against Alphabet’s (GOOGL.O) Google for allegedly collecting biometric data of millions of Texans without obtaining proper consent, the attorney general’s office said in a statement on Thursday.

The complaint says that companies operating in Texas have been barred for more than a decade from collecting people’s faces, voices or other biometric data without advanced, informed consent.

“In blatant defiance of that law, Google has, since at least 2015, collected biometric data from innumerable Texans and used their faces and their voices to serve Google’s commercial ends,” the complaint said. “Indeed, all across the state, everyday Texans have become unwitting cash cows being milked by Google for profits.”

The collection occurred through products like Google Photos, Google Assistant, and Nest Hub Max, the statement said.

[…]

Source: Texas sues Google for allegedly capturing biometric data of millions without consent | Reuters