Dashcam App Is a Driving Nazi Informer’s Wet Dream: Sends Video of You Speeding and Other Infractions Directly to Police

Speed cameras have been around for a long time, and so have dash cams. The uniquely devious idea of combining the two into a traffic hall monitor’s dream device has only recently become a reality, though. According to the British Royal Automobile Club (RAC), such a combination is coming soon. The app, which will reportedly be available in the U.K. as soon as May, will allow drivers to report each other directly to the police, with video evidence, for things like running red lights, failure to use a blinker, distracted driving, and yes, speeding.

The app’s founder, Oleksiy Afonin, recently held meetings with police to discuss how it would work. In a nutshell, video evidence of a crime could be uploaded as soon as the driver who captured it stopped their vehicle to do so safely. According to the RAC, the footage could then be “submitted to the police through an official video portal in less than a minute.” Police were reportedly open to the idea of using the videos as evidence in court.

The RAC questioned whether such an app could itself be distracting. It certainly opens up a whole new world of crime reporting. In some cities, individuals can already report poorly or illegally parked cars to traffic police. Drivers getting into the habit of reporting each other for speeding might be a slippery slope, though. The government would be happy to collect the ticket revenue, but the number of citations for alleged speeding could be off the charts with such a system. Anybody can download the app and report someone else, though the evidence would need to be reviewed.

The app, called dashcamUK, will only be available in the United Kingdom, as its name indicates. Thankfully, there don’t seem to be any plans to bring it Stateside. Considering the British public is far more open than Americans to the use of CCTV cameras for recording crime, it will likely stay that way, for that reason among others.

Source: Strangers Can Send Video of You Speeding Directly to Police With Dashcam App

TSA Confirms Biometric Scanning Soon Won’t Be Optional Even For Domestic Travelers

[…]

In 2017, the DHS began quietly rolling out its facial recognition program, starting with international airports and aimed mainly at collecting and scanning the faces of people boarding international flights. Even in its infancy, the DHS was hinting this was never going to remain solely an international affair.

It made its domestic desires official shortly thereafter, with the TSA dropping its domestic surveillance “roadmap,” which now included “expanding biometrics to additional domestic travelers.” Then the DHS and TSA ran silent for a bit, resurfacing in late 2022 with the news that the system was being rolled out at 16 domestic airports.

As of January, the DHS and TSA were still claiming this biometric ID verification system was strictly opt-in. A TSA rep interviewed by the Washington Post, however, hinted that opting out just meant subjecting yourself to the worst in TSA customer service. Given the options, more travelers would obviously prefer a less brusque/hands-y trip through security checkpoints, ensuring healthy participation in the TSA’s “optional” facial recognition program.

A little more than two months have passed, and the TSA is now informing domestic travelers there will soon be no way to opt out of its biometric program. (via Papers Please)

Speaking at an aviation security panel at South by Southwest, TSA Administrator David Pekoske made these comments:

“We’re upgrading our camera systems all the time, upgrading our lighting systems,” Pekoske said. “(We’re) upgrading our algorithms, so that we are using the very most advanced algorithms and technology we possibly can.”

He said passengers can also choose to opt out of certain screening processes if they are uncomfortable, for now. Eventually, biometrics won’t be optional, he said.

[…]

Pekoske buries the problematic aspects of biometric harvesting in exchange for domestic travel “privileges” by claiming this is all about making things better for passengers.

“It’s critically important that this system has as little friction as it possibly can, while we provide for safety and security,” Pekoske said.

Yes, you’ll get through screening a little faster. Unless the AI is wrong, in which case you’ll be dealing with a whole bunch of new problems most agents likely won’t have the expertise to handle.

[…]

More travelers. Fewer agents. And a whole bunch of screens to interact with. That’s the plan for the nation’s airports and everyone who passes through them.

Source: TSA Confirms Biometric Scanning Soon Won’t Be Optional Even For Domestic Travelers | Techdirt

And it means way more data for hackers to get their hands on, and for the government and anyone who buys the data to use for 1984-type purposes.

SCOPE Europe becomes the accredited monitoring body for a Dutch national data protection code of conduct

[…] SCOPE Europe is now accredited by the Dutch Data Protection Authority as the monitoring body of the Data Pro Code. On this occasion, SCOPE Europe celebrates its success in obtaining its second accreditation and looks forward to continuing its work on fostering trust in the digital economy.

When we were approached by NLdigital, the creators of the Data Pro Code, we knew that taking on the monitoring of a national code of conduct would be an exciting endeavor. As the first-ever accredited monitoring body for a transnational GDPR code of conduct, SCOPE Europe has built unique expertise in the field and is proud to apply it further in the context of another co-regulatory initiative.

The Code puts forward an accessible compliance framework for companies of all sizes, including micro, small and medium enterprises in the Netherlands. With the approval and now the accreditation of its monitoring body, the Data Pro Code will enable data processors to demonstrate GDPR compliance and boost transparency within the digital industry.

Source: PRESS RELEASE: SCOPE Europe becomes the accredited monitoring body for a Dutch national code of conduct: SCOPE Europe bvba/sprl

Anker Eufy security cam ‘stored unique ID’ of everyone filmed in the cloud for other cameras to identify – and for anyone to watch

A lawsuit filed against eufy security cam maker Anker Tech claims the biz assigns “unique identifiers” to the faces of any person who walks in front of its devices – and then stores that data in the cloud, “essentially logging the locations of unsuspecting individuals” when they stroll past.

[…]

All three suits allege Anker falsely represented that its security cameras stored all data locally and did not upload that data to the cloud.

Moore went public with his claims in November last year, alleging video and audio captured by Anker’s eufy security cams could be streamed and watched by any stranger using VLC media player, […]

In a YouTube video, the complaint details, Moore allegedly showed how the “supposedly ‘private,’ ‘stored locally’, ‘transmitted only to you’ doorbell is streaming to the cloud – without cloud storage enabled.”

He claimed the devices were uploading video thumbnails and facial recognition data to Anker’s cloud server despite his never opting into Anker’s cloud services, and said he’d found that a separate camera tied to a different account could identify his face with the same unique ID.

The security researcher alleged at the time that this showed Anker was not only storing facial-recog data in the cloud, but also “sharing that back-end information between accounts,” lawyers for the two other, near-identical lawsuits claim.

[…]

According to the complaint [PDF], eufy’s security cameras are marketed as “private” and as “local storage only” as a direct alternative to Anker’s competitors that require the use of cloud storage.

Desai’s complaint goes on to claim:

Not only does Anker not keep consumers’ information private, it was further revealed that Anker was uploading facial recognition data and biometrics to its Amazon Web Services cloud without encryption.

In fact, Anker has been storing its customers’ data alongside a specific username and other identifiable information on its AWS cloud servers even when its “eufy” app reflects the data has been deleted. …. Further, even when using a different camera, different username, and even a different HomeBase to “store” the footage locally, Anker is still tagging and linking a user’s facial ID to their picture across its camera platform. Meaning, once recorded on one eufy Security Camera, those same individuals are recognized via their biometrics on other eufy Security Cameras.

In an unrelated incident in 2021, a “software bug” in some of the brand’s 1080p Wi-Fi-connected Eufycam cameras sent feeds from some users’ homes to other Eufycam customers, some of whom were in other countries at the time.

[…]

Source: Eufy security cam ‘stored unique ID’ of everyone filmed • The Register

Telehealth startup Cerebral shared millions of patients’ data with advertisers since 2019

Cerebral has revealed it shared the private health information, including mental health assessments, of more than 3.1 million patients in the United States with advertisers and social media giants like Facebook, Google and TikTok.

The telehealth startup, which exploded in popularity during the COVID-19 pandemic after rolling lockdowns and a surge in online-only virtual health services, disclosed the security lapse [This is no security lapse! This is blatant greed served by peddling people’s personal information!] in a filing with the federal government, revealing that it shared the personal and health information of patients who used the app to search for therapy or other mental health care services.

Cerebral said that it collected and shared names, phone numbers, email addresses, dates of birth, IP addresses and other demographics, as well as data collected from Cerebral’s online mental health self-assessment, which may have also included the services that the patient selected, assessment responses and other associated health information.

The full disclosure follows:

If an individual created a Cerebral account, the information disclosed may have included name, phone number, email address, date of birth, IP address, Cerebral client ID number, and other demographic or health information. If, in addition to creating a Cerebral account, an individual also completed any portion of Cerebral’s online mental health self-assessment, the information disclosed may also have included the service the individual selected, assessment responses, and certain associated health information.

If, in addition to creating a Cerebral account and completing Cerebral’s online mental health self-assessment, an individual also purchased a subscription plan from Cerebral, the information disclosed may also have included subscription plan type, appointment dates and other booking information, treatment, and other clinical information, health insurance/pharmacy benefit information (for example, plan name and group/member numbers), and insurance co-pay amount.

Cerebral was sharing patients’ data with tech giants in real-time by way of trackers and other data-collecting code that the startup embedded within its apps. Tech companies and advertisers, like Google, Facebook and TikTok, allow developers to include snippets of their custom-built code, which allows the developers to share information about their app users’ activity with the tech giants, often under the guise of analytics but also for advertising.
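
To make concrete what such an embedded tracker does, here is a minimal Python sketch of the kind of event call these snippets perform behind the scenes. The endpoint, field names, and payload are hypothetical, not Cerebral’s or any ad platform’s actual API; real trackers are client-side SDKs, but the data flow is the same.

```python
# Conceptual sketch only: the endpoint, field names, and payload below are
# hypothetical, not Cerebral's or any ad platform's actual API.
import requests

def report_event(user_id: str, event: str, properties: dict) -> None:
    """Forward an in-app activity event to a third-party analytics/ad endpoint."""
    payload = {
        "app_user_id": user_id,    # ties the event to a specific individual
        "event": event,            # e.g. "completed_self_assessment"
        "properties": properties,  # leaks sensitive answers if not filtered
    }
    # Nothing in the transport distinguishes "analytics" from "advertising":
    # whatever goes into `properties` is now in the third party's hands.
    requests.post("https://tracker.example.com/v1/events", json=payload, timeout=5)

if __name__ == "__main__":
    report_event("client-12345", "completed_self_assessment",
                 {"service_selected": "therapy", "assessment_score": 17})
```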

But users often have no idea that they are opting in to this tracking simply by accepting the app’s terms of use and privacy policies, which many people don’t read.

Cerebral said in its notice to customers — buried at the bottom of its website — that the data collection and sharing had been going on since October 2019, when the startup was founded. The startup said it has removed the tracking code from its apps. While not mentioned, the tech giants are under no obligation to delete the data that Cerebral shared with them.

Because of how Cerebral handles confidential patient data, it’s covered under the U.S. health privacy law known as HIPAA. According to a list of health-related security lapses under investigation by the U.S. Department of Health and Human Services, which oversees and enforces HIPAA, Cerebral’s data lapse is the second-largest breach of health data in 2023.

News of Cerebral’s years-long data lapse comes just weeks after the U.S. Federal Trade Commission slapped GoodRx with a $1.5 million fine and ordered it to stop sharing patients’ health data with advertisers, and BetterHelp was ordered to pay customers $8.5 million for mishandling users’ data.

If you were wondering why startups today should terrify you, Cerebral is just the latest example.

Source: Telehealth startup Cerebral shared millions of patients’ data with advertisers | TechCrunch


Signal says it will shut down in UK over Online Safety Bill, which wants to install spyware on all your devices

[…]

The Online Safety Bill contemplates bypassing encryption using device-side scanning to protect children from harmful material, and coincidentally breaking the security of end-to-end encryption at the same time. It’s currently being considered in Parliament and has been the subject of controversy for months.

[ something something saving children – that’s always a bad sign when they trot that one out ]

The legislation contains what critics have called “a spy clause.” [PDF] It requires companies to remove child sexual exploitation and abuse (CSEA) material or terrorist content from online platforms “whether communicated publicly or privately.” As applied to encrypted messaging, that means either encryption must be removed to allow content scanning or scanning must occur prior to encryption.
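
To see why critics call this a “spy clause,” consider what “scanning prior to encryption” means mechanically. Below is a deliberately simplified Python sketch, assuming a plain hash blocklist; real proposals use perceptual hashes (PhotoDNA-style) that also match visually similar content, and the blocklist and reporting step here are hypothetical.

```python
import hashlib

# Hypothetical blocklist of known-bad content hashes. Real systems use
# perceptual hashes that also match near-duplicate images, not SHA-256.
BLOCKLIST = {hashlib.sha256(b"known bad file").hexdigest()}

def scan_then_encrypt(plaintext: bytes, encrypt) -> bytes:
    """Client-side scanning: inspect content *before* it is ever encrypted."""
    if hashlib.sha256(plaintext).hexdigest() in BLOCKLIST:
        # In a deployed system this would file a report with the provider or
        # authorities -- the end-to-end guarantee is void before it begins.
        raise ValueError("content matched the scanning blocklist")
    return encrypt(plaintext)

# Toy stand-in cipher so the sketch runs; a real messenger would use a vetted
# E2E protocol (e.g. the Signal protocol), never XOR.
ciphertext = scan_then_encrypt(b"hello", lambda p: bytes(b ^ 0x42 for b in p))
```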

Signal draws the line

Such schemes have been condemned by technical experts and Signal is similarly unenthusiastic.

“Signal is a nonprofit whose sole mission is to provide a truly private means of digital communication to anyone, anywhere in the world,” said Meredith Whittaker, president of the Signal Foundation, in a statement provided to The Register.

“Many millions of people globally rely on us to provide a safe and secure messaging service to conduct journalism, express dissent, voice intimate or vulnerable thoughts, and otherwise speak to those they want to be heard by without surveillance from tech corporations and governments.”

“We have never, and will never, break our commitment to the people who use and trust Signal. And this means that we would absolutely choose to cease operating in a given region if the alternative meant undermining our privacy commitments to those who rely on us.”

Asked whether she was concerned that Signal could be banned under the Online Safety rules, Whittaker told The Register, “We were responding to a hypothetical, and we’re not going to speculate on probabilities. The language in the bill as it stands is deeply troubling, particularly the mandate for proactive surveillance of all images and texts. If we were given a choice between kneecapping our privacy guarantees by implementing such mass surveillance, or ceasing operations in the UK, we would cease operations.”

[…]

“If Signal withdraws its services from the UK, it will particularly harm journalists, campaigners and activists who rely on end-to-end encryption to communicate safely.”

[…]


Source: Signal says it will shut down in UK over Online Safety Bill

Google’s Play Store Privacy Labels Are a ‘Total Failure:’ Study

[…]

“There are two main problems here,” Mozilla’s Caltrider said. “The first problem is Google only requires the information in labels to be self-reported. So, fingers crossed, because it’s the honor system, and it turns out that most labels seem to be misleading.”

Google promises to make apps fix problems it finds in the labels, and threatens to ban apps that don’t come into compliance. But the company has never provided any details about how it polices apps. Google said it’s vigilant about enforcement but didn’t give any details about its enforcement process, and didn’t respond to a question about any enforcement actions it’s taken in the past.

[…]

Of course, Google could just read the privacy policies where apps spell out these practices, like Mozilla did, but there’s a bigger issue at play. These apps may not even be breaking Google’s privacy label rules, because those rules are so relaxed that “they let companies lie,” Caltrider said.

“That’s the second problem. Google’s own rules for what data practices you have to disclose are a joke,” Caltrider said. “The guidelines for the labels make them useless.”

If you go looking at Google’s rules for the data safety labels, which are buried deep in a cascading series of help menus, you’ll learn that there is a long list of things that you don’t have to tell your users about. In other words, you can say you don’t collect data or share it with third parties, while you do in fact collect data and share it with third parties.

For example, apps don’t have to disclose data sharing if they have “consent” to share the data from users, or if they’re sharing the data with “service providers,” or if the data is “anonymized” (which is nonsense), or if the data is being shared for “specific legal purposes.” There are similar exceptions for what counts as data collection. Those loopholes are so big you could fill up a truck with data and drive it right on through.

[…]

Source: Google’s Play Store Privacy Labels Are a ‘Total Failure:’ Study

Which goes to show, again, that walled-garden app stores really are no better than just downloading stuff from the internet, unless you’re the owner of the walled garden and collect 30% of revenue for doing basically not much.

MetaGuard: Going Incognito in the Metaverse

[…]

with numerous recent studies showing the ease with which VR users can be profiled, deanonymized, and data harvested, metaverse platforms carry all the privacy risks of the current internet and more while at present having none of the defensive privacy tools we are accustomed to using on the web. To remedy this, we present the first known method of implementing an “incognito mode” for VR. Our technique leverages local ε-differential privacy to quantifiably obscure sensitive user data attributes, with a focus on intelligently adding noise when and where it is needed most to maximize privacy while minimizing usability impact. Moreover, our system is capable of flexibly adapting to the unique needs of each metaverse application to further optimize this trade-off. We implement our solution as a universal Unity (C#) plugin that we then evaluate using several popular VR applications. Upon faithfully replicating the most well known VR privacy attack studies, we show a significant degradation of attacker capabilities when using our proposed solution.
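
For the curious, the core of the “intelligently adding noise” idea is the classic Laplace mechanism from differential privacy. Here is a minimal Python sketch of ε-local-DP noising of a single bounded attribute; the attribute, bounds, and ε value are illustrative, not MetaGuard’s actual parameters.

```python
import numpy as np

def dp_release(value: float, lo: float, hi: float, epsilon: float) -> float:
    """Release a value known to lie in [lo, hi] with epsilon-local-DP."""
    sensitivity = hi - lo  # worst-case difference between any two users' values
    noisy = value + np.random.laplace(0.0, sensitivity / epsilon)
    return float(np.clip(noisy, lo, hi))  # clipping is post-processing, DP-safe

# e.g. obscure a user's height (meters) on-device before telemetry leaves the
# headset; smaller epsilon means more privacy and more noise.
print(dp_release(1.75, lo=1.4, hi=2.1, epsilon=1.0))
```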

[…]

Source: MetaGuard: Going Incognito in the Metaverse | Berkeley RDI

3 motion points allow you to be identified within seconds in VR

[…]

In a paper provided to The Register in advance of its publication on ArXiv, academics Vivek Nair, Wenbo Guo, Justus Mattern, Rui Wang, James O’Brien, Louis Rosenberg, and Dawn Song set out to test the extent to which individuals in VR environments can be identified by body movement data.

The boffins gathered telemetry data from more than 55,000 people who played Beat Saber, a VR rhythm game in which players wave hand controllers to music. Then they digested 3.96TB of data, from game leaderboard BeatLeader, consisting of 2,669,886 game replays from 55,541 users during 713,013 separate play sessions.

These Beat Saber Open Replay (BSOR) files contained metadata (devices and game settings), telemetry (measurements of the position and orientation of players’ hands, head, and so on), context info (type, location, and timing of in-game stimuli), and performance stats (responses to in-game stimuli).

From this, the researchers focused on the data derived from the head and hand movements of Beat Saber players. Just five minutes of those three data points proved enough to train a classification model that, given 100 seconds of motion data from the game, could uniquely identify the player 94 percent of the time. And with just 10 seconds of motion data, the classification model managed an accuracy of 73 percent.
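
As a rough illustration of the attack’s shape (not the paper’s exact features or model), the pipeline is ordinary supervised learning: summarize each short replay into a motion-feature vector, train a per-user classifier, then predict identity from an unseen snippet. A toy Python sketch with synthetic data:

```python
# Illustrative sketch only: features, model, and data are made up; the
# paper's actual feature engineering and classifier may differ.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n_users, snippets_per_user, n_features = 50, 20, 12  # e.g. summary stats of
                                                     # head/hand pose telemetry
# Fake telemetry: each user has a characteristic motion "signature".
signatures = rng.normal(size=(n_users, n_features))
X = np.repeat(signatures, snippets_per_user, axis=0) + rng.normal(
    scale=0.3, size=(n_users * snippets_per_user, n_features))
y = np.repeat(np.arange(n_users), snippets_per_user)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Identify a user from one new snippet of motion features.
probe = signatures[7] + rng.normal(scale=0.3, size=n_features)
print("predicted user:", clf.predict([probe])[0])  # -> 7 (usually)
```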

“The study demonstrates that over 55k ‘anonymous’ VR users can be de-anonymized back to the exact individual just by watching their head and hand movements for a few seconds,” said Vivek Nair, a UC Berkeley doctoral student and one of the authors of the paper, in an email to The Register.

“We have known for a long time that motion reveals information about people, but what this study newly shows is that movement patterns are so unique to an individual that they could serve as an identifying biometric, on par with facial or fingerprint recognition. This really changes how we think about the notion of ‘privacy’ in the metaverse, as just by moving around in VR, you might as well be broadcasting your face or fingerprints at all times!”

[…]

“There have been papers as early as the 1970s which showed that individuals can identify the motion of their friends,” said Nair. “A 2000 paper from Berkeley even showed that with motion capture data, you can recreate a model of a person’s entire skeleton.”

“What hasn’t been shown, until now, is that the motion of just three tracked points in VR (head and hands) is enough to identify users on a huge (and maybe even global) scale. It’s likely true that you can identify and profile users with even greater accuracy outside of VR when more tracked objects are available, such as with full-body tracking that some 3D cameras are able to do.”

[…]

Nair said he remains optimistic about the potential of systems like MetaGuard – a VR incognito mode project he and colleagues have been working on – to address privacy threats by altering VR in a privacy-preserving way rather than trying to prevent data collection.

The paper suggests similar data defense tactics: “We hope to see future works which intelligently corrupt VR replays to obscure identifiable properties without impeding their original purpose (e.g., scoring or cheating detection).”

One reason to prefer data alteration over data denial is that there may be VR applications (e.g., motion-based medical diagnostics) that justify further investment in the technology, as opposed to propping up pretend worlds just for the sake of privacy pillaging.

[…]

Source: How virtual reality telemetry is the next threat to privacy • The Register

Google wants Go reporting telemetry data by default

Russ Cox, a Google software engineer steering the development of the open source Go programming language, has presented a possible plan to implement telemetry in the Go toolchain.

However, many in the Go community object, because the plan calls for telemetry to be on by default.

These alarmed developers would prefer an opt-in rather than an opt-out regime, a position the Go team rejects because it would ensure low adoption and would reduce the amount of telemetry data received to the point it would be of little value.

Cox’s proposal summarizes lengthier documentation laid out in three blog posts.

Telemetry, as Cox describes it, involves Go toolchain programs sending data to a server to provide information about which functions are being used and how the software is performing. He argues it is beneficial for open source projects to have that information to guide development.

“I believe that open-source software projects need to explore new telemetry designs that help developers get the information they need to work efficiently and effectively, without collecting invasive traces of detailed user activity,” he wrote.
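
The mechanics are simple, and the dispute is really about a single default. Here is a hedged Python sketch of the general shape (local counters, periodic upload of aggregates), not the Go team’s actual implementation; the counter name and endpoint are made up.

```python
import collections
import json
import urllib.request

TELEMETRY_ENABLED = True  # the contested opt-out default; critics want False
counters = collections.Counter()

def count(event: str) -> None:
    """Increment a local usage counter; nothing leaves the machine here."""
    if TELEMETRY_ENABLED:
        counters[event] += 1

def upload(url: str) -> None:
    """Periodically ship aggregated counts (no user content) to a server."""
    if not TELEMETRY_ENABLED or not counters:
        return
    req = urllib.request.Request(url, data=json.dumps(counters).encode(),
                                 headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req)

count("cmd/compile:generics-used")
# upload("https://telemetry.example.com/upload")  # hypothetical endpoint
```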

[…]

Some people believe they have a right to privacy, to be left alone, and to demand that their rights are respected through opt-in consent.

As developer Louis Thibault put it, “The Go dev team seems not to have internalized the principle of affirmative consent in matters of data collection.”

Others, particularly in the ad industry, but in other endeavors as well, see opt-in as an existential threat. They believe that they have a right to gather data and that it’s better to seek forgiveness via opt-out than to ask for permission unlikely to be given via opt-in.

Source: Google’s Go may add telemetry reporting that’s on by default • The Register

Windows 11 Sends Tremendous Amount of User Data to Third Parties – pretty much spyware for loads of people!

Many programs collect user data and send it back to their developers to improve software or provide more targeted services. But according to the PC Security Channel (via Neowin), Microsoft’s Windows 11 sends data not only to the Redmond, Washington-based software giant, but also to multiple third parties.

To analyze the DNS traffic generated by a freshly installed copy of Windows 11 on a brand-new notebook, the PC Security Channel used the Wireshark network protocol analyzer, which reveals precisely what is happening on a network. The results were astounding enough for the YouTube channel to call Microsoft’s Windows 11 “spyware.”

As it turned out, an all-new Windows 11 PC that was never used to browse the Internet contacted not only Windows Update, MSN and Bing servers, but also Steam, McAfee, geo.prod.do, and Comscore ScorecardResearch.com. Apparently, the latest operating system from Microsoft collected and sent telemetry data to various market research companies, advertising services, and the like.
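
Anyone can replicate the gist of this experiment without the Wireshark GUI. A small Python sketch using scapy follows; the pcap filename is a placeholder for your own capture of a machine’s first boot.

```python
# Count every DNS query name in a packet capture of a "fresh" machine.
# "fresh_install_boot.pcap" is a placeholder for your own capture file.
from collections import Counter
from scapy.all import rdpcap
from scapy.layers.dns import DNSQR

queries = Counter()
for pkt in rdpcap("fresh_install_boot.pcap"):
    if pkt.haslayer(DNSQR):  # DNS question record
        queries[pkt[DNSQR].qname.decode().rstrip(".")] += 1

for domain, hits in queries.most_common():
    print(f"{hits:4d}  {domain}")  # e.g. scorecardresearch.com
```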

To prove the point, the PC Security Channel used the same tool to find out what Windows XP contacted after a fresh install; it turned out that the only things the 20+ year-old operating system contacted were Windows Update and Microsoft Update servers.

“As with any modern operating system, users can expect to see data flowing to help them remain secure, up to date, and keep the system working as anticipated,” a Microsoft spokesperson told Tom’s Hardware. “We are committed to transparency and regularly publish information about the data we collect to empower customers to be more informed about their privacy.”

Some of the claims may be, technically, overblown. Telemetry data is mentioned in Windows’ terms of service, which many people skip over to use the operating system. And you can choose not to enable at least some of this by turning off settings the first time you boot into the OS.

“By accepting this agreement and using the software you agree that Microsoft may collect, use, and disclose the information as described in the Microsoft Privacy Statement (aka.ms/privacy), and as may be described in the user interface associated with the software features,” the terms of service read. They also point out that some data-sharing settings can be turned off.

Obviously, a lot has changed in 20 years and we now use more online services than back in the early 2000s. As a result, various telemetry data has to be sent online to keep certain features running. But at the very least, Microsoft should do a better job of expressly asking for consent and stating what will be sent and where, because you can’t opt out of all of the data-sharing “features.” The PC Security Channel warns that even when telemetry tracking is disabled by third-party utilities, Windows 11 still sends certain data.

Source: Windows 11 Sends Tremendous Amount of User Data to Third Parties, YouTuber Claims (Update) | Tom’s Hardware

Just when you thought Microsoft was the good guy again and it was all Google, Apple, Amazon, and Meta/Facebook being evil, they are back at it to prove they still have it!

Microsoft won’t access private data in Office version scan installed as OS update, they say

Microsoft wants everyone to know that it isn’t looking to invade their privacy while looking through their Windows PCs to find out-of-date versions of Office software.

In its KB5021751 update last month, Microsoft included a plan to scan Windows systems to smoke out those Office versions that are no longer supported or nearing the end of support. Those include Office 2007 (which saw support end in 2017), Office 2010 (in 2020), and the 2013 build (this coming April).

The company stressed that the scan would run only one time and would not install anything on the user’s Windows system, adding that the file for the update is scanned to ensure it’s not infected by malware and is stored on highly secure servers to prevent unauthorized changes to it.

The update caused some discussion among users, at least enough to convince Microsoft to make another pitch that it is respecting user privacy and won’t access private data despite scanning their systems.

The update collects diagnostic and performance data so that it can determine the use of various versions of Office and how to best support and service them, the software maker wrote in an expanded note this week. The update will silently run once to collect the data and no files are left on the user’s systems once the scan is completed.

“This data is gathered from registry entries and APIs,” it wrote. “The update does not gather licensing details, customer content, or data about non-Microsoft products. Microsoft values, protects, and defends privacy.”

[…]

Source: Microsoft won’t access private data in Office version scan • The Register

Of course, just sending data about what version of Office is installed is in fact sending private data about stuff installed on your PC. This is Not OK.

FTC Fines GoodRx $1.5M for Sending Medication Data to Facebook, Google, others

The Federal Trade Commission took historic action against the medication discount service GoodRx Wednesday, issuing a $1.5 million fine against the company for sharing data about users’ prescriptions with Facebook, Google, and others. It’s a move that could usher in a new era of health privacy in the United States.

“Digital health companies and mobile apps should not cash in on consumers’ extremely sensitive and personally identifiable health information,” said Samuel Levine, director of the FTC’s Bureau of Consumer Protection, in a statement.

[…]

In addition to a fine, GoodRx has agreed to a first-of-its-kind provision banning the company from sharing health data with third parties for advertising purposes. That may sound unsurprising, but many consumers don’t realize that health privacy laws generally don’t apply to companies that aren’t affiliated with doctors or insurance companies.

[…]

GoodRx is a health technology company that gives out free coupons for discounts on common medications. The company also connects users with healthcare providers for telehealth visits. But GoodRx also shared data with third-party advertising companies about the prescriptions you’re buying and looking up, which incurred the ire of the FTC.

GoodRx’s privacy problems were first uncovered by this reporter in an investigation with Consumer Reports, followed by a similar report in Gizmodo. At the time, if you looked up Viagra, Prozac, PrEP, or any other medication, GoodRx would tell Facebook, Google, and a variety of companies in the ad business, such as Criteo, Branch, and Twilio. GoodRx wasn’t selling the data. Instead, it shared the information so those companies could help GoodRx target its own customers with ads for more drugs.

[…]

Source: FTC Fines GoodRx $1.5M for Sending Medication Data to Facebook

Meta’s WhatsApp fined 5.5 mln euro by lead EU privacy regulator

Meta’s (META.O) WhatsApp subsidiary was fined 5.5 million euros ($5.95 million) on Thursday by Ireland’s Data Protection Commission (DPC), its lead EU privacy regulator, for an additional breach of the bloc’s privacy laws.

The DPC also told WhatsApp to reassess how it uses personal data for service improvements following a similar order it issued this month to Meta’s other main platforms, Facebook and Instagram, which stated Meta must reassess the legal basis upon which it targets advertising through the use of personal data.

[…]

Source: Meta’s WhatsApp fined 5.5 mln euro by lead EU privacy regulator | Reuters

US law enforcement has warrantless access to many money transfers

Your international money transfers might not be as discreet as you think. Senator Ron Wyden and The Wall Street Journal have learned that US law enforcement can access details of money transfers without a warrant through an obscure surveillance program the Arizona attorney general’s office created in 2014. A database stored at a nonprofit, the Transaction Record Analysis Center (TRAC), provides full names and amounts for larger transfers (above $500) sent between the US, Mexico and 22 other regions through services like Western Union, MoneyGram and Viamericas. The program covers data for numerous Caribbean and Latin American countries in addition to Canada, China, France, Malaysia, Spain, Thailand, Ukraine and the US Virgin Islands. Some domestic transfers also enter the data set.

[…]

The concern, of course, is that officials can obtain sensitive transaction details without court oversight or customers’ knowledge. An unscrupulous officer could secretly track large transfers. Wyden adds that the people in the database are more likely to be immigrants, minorities and low-income residents who don’t have bank accounts and already have fewer privacy protections. The American Civil Liberties Union also asserts that the subpoenas used to obtain this data violate federal law. Arizona issued at least 140 of these subpoenas between 2014 and 2021.

[…]

Source: US law enforcement has warrantless access to many money transfers | Engadget

Meta sues surveillance company for allegedly scraping more than 600,000 accounts – pots and kettles

Meta has filed a lawsuit against Voyager Labs, which it has accused of creating tens of thousands of fake accounts to scrape data from more than 600,000 Facebook users’ profiles. It says the surveillance company pulled information such as posts, likes, friend lists, photos, and comments, along with other details from groups and pages. Meta claims that Voyager masked its activity using its Surveillance Software, and that the company has also scraped data from Instagram, Twitter, YouTube, LinkedIn and Telegram to sell and license for profit.

In the complaint, which was obtained by Gizmodo, Meta has asked a judge to permanently ban Voyager from Facebook and Instagram. “As a direct result of Defendant’s unlawful actions, Meta has suffered and continues to suffer irreparable harm for which there is no adequate remedy at law, and which will continue unless Defendant’s actions are enjoined,” the filing reads. Meta said Voyager’s actions have caused it “to incur damages, including investigative costs, in an amount to be proven at trial.”

Meta claims that Voyager scraped data from accounts belonging to “employees of non-profit organizations, universities, news media organizations, healthcare facilities, the armed forces of the United States, and local, state, and federal government agencies, as well as full-time parents, retirees, and union members.” The company noted in a blog post that it disabled accounts linked to Voyager and that it filed the suit to enforce its terms and policies.

[…]

In 2021, The Guardian reported that the Los Angeles Police Department had tested Voyager’s social media surveillance tools in 2019. The company is said to have told the department that police could use the software to track the accounts of a suspect’s friends on social media, and that the system could predict crimes before they took place by making assumptions about a person’s activity.

According to The Guardian, Voyager has suggested factors like Instagram usernames denoting Arab pride or tweeting about Islam could indicate someone is leaning toward extremism. Other companies, such as Palantir, have worked on predictive policing tech. Critics such as the Electronic Frontier Foundation claim that tech can’t predict crime and that algorithms merely perpetuate existing biases.

Data scraping is an issue that Meta has to take seriously. In 2021, it sued an individual for allegedly scraping data on more than 178 million users. Last November, the Irish Data Protection Commission fined the company €265 million ($277 million) for failing to stop bad actors from obtaining millions of people’s phone numbers and other data, which were published elsewhere online. The regulator said Meta failed to comply with GDPR data protection rules.

Source: Meta sues surveillance company for allegedly scraping more than 600,000 accounts | Engadget

Google will pay $9.5 million to settle Washington DC AG’s location-tracking lawsuit

Google has agreed to pay $9.5 million to settle a lawsuit brought by Washington DC Attorney General Karl Racine, who accused the company earlier this year of “deceiving users and invading their privacy.” Google has also agreed to change some of its practices, primarily concerning how it informs users about collecting, storing and using their location data.

“Google leads consumers to believe that consumers are in control of whether Google collects and retains information about their location and how that information is used,” the complaint, which Racine filed in January, read. “In reality, consumers who use Google products cannot prevent Google from collecting, storing and profiting from their location.”

Racine’s office also accused Google of employing “dark patterns,” which are design choices intended to deceive users into carrying out actions that don’t benefit them. Specifically, the AG’s office claimed that Google repeatedly prompted users to switch on location tracking in certain apps and informed them that certain features wouldn’t work properly if location tracking wasn’t on. Racine and his team found that location data wasn’t even needed for the app in question. They asserted that Google made it “impossible for users to opt out of having their location tracked.”


The $9.5 million payment is a paltry one for Google. Last quarter, it took parent company Alphabet under 20 minutes to make that much in revenue. The changes that the company will make to its practices as part of the settlement may have a bigger impact.

Folks who currently have certain location settings on will receive notifications telling them how they can disable each setting, delete the associated data and limit how long Google can keep that information. Users who set up a new Google account will be informed which location-related account settings are on by default and offered the chance to opt out.

Google will need to maintain a webpage that details its location data practices and policies. This will include ways for users to access their location settings and details about how each setting impacts Google’s collection, retention or use of location data.

Moreover, Google will be prevented from sharing a person’s precise location data with a third-party advertiser without the user’s explicit consent. The company will need to delete location data “that came from a device or from an IP address in web and app activity within 30 days” of obtaining the information.

[…]

Source: Google will pay $9.5 million to settle Washington DC AG’s location-tracking lawsuit | Engadget

Spy Tech Palantir’s Covid-era UK health contract extended without public consultation or competition

NHS England has extended its contract with US spy-tech biz Palantir for the system built at the height of the pandemic to give it time to resolve the twice-delayed procurement of a data platform to support health service reorganization and tackle the massive care backlog.

The contract has already been subject to the threat of a judicial review, after which NHS England – a non-departmental government body – agreed to three concessions, including the promise of public consultation before extending the contract.

Campaigners and legal groups are set to mount legal challenges around separate, but related, NHS dealings with Palantir.

In a notice published yesterday, NHS England said the contract would be extended until September 2023 in a deal worth £11.5 million ($13.8 million).

NHS England has been conducting a £360 million ($435 million) procurement of a separate, but linked, Federated Data Platform (FDP), a deal said to be a “must-win” for Palantir, a US data management company which cut its teeth working for the CIA and controversial US immigration agency ICE.

The contract notice for FDP, which kicks off the official competition, was originally expected in June 2022 but was delayed until September 2022, when NHS England told The Register it would be published. The notice has yet to appear.

[…]

Source: Palantir’s Covid-era UK health contract extended • The Register

Apple Faces French $8.5M Fine For Illegal Data Harvesting

France’s data protection authority, CNIL, fined Apple €8 million (about $8.5 million) Wednesday for illegally harvesting iPhone owners’ data for targeted ads without proper consent.

[…]

The French fine, though, is the latest addition to a growing body of evidence that Apple may not be the privacy guardian angel it makes itself out to be.

[…]

Apple failed to “obtain the consent of French iPhone users (iOS 14.6 version) before depositing and/or writing identifiers used for advertising purposes on their terminals,” the CNIL said in a statement. The CNIL’s fine calls out the search ads in Apple’s App Store, specifically. A French court fined the company over $1 million in December over its commercial practices related to the App Store.

[…]

Eight million euros is peanuts for a company that makes billions a year on advertising alone and is so inconceivably wealthy that it had enough money to lose $1 trillion in market value last year—making Apple the second company in history to do so. The fine could have been higher but for the fact that Apple’s European headquarters are in Ireland, not France, giving the CNIL a smaller target to go after.

Still, it’s a signal that Apple may face a less friendly regulatory future in Europe. Commercial authorities are investigating Apple for anti-competitive business practices, and are even forcing the company to abandon its proprietary charging cable in favor of USB-C ports.

Source: Apple Faces Rare $8.5M Fine For Illegal Data Harvesting

John Deere signs right to repair agreement

As farming has become more technology-driven, Deere has increasingly injected software into its products, with all of its tractors and harvesters now including an autopilot feature as standard.

There is also the John Deere Operations Center, which “instantly captures vital operational data to boost transparency and increase productivity for your business.”

Within a matter of years, the company envisages having 1.5 million machines and half a billion acres of land connected to the cloud service, which will “collect and store crop data, including millions of images of weeds that can be targeted by herbicide.”

Deere also estimates that software fees will make up 10 percent of the company’s revenues by the end of the decade, with Bernstein analysts pegging the average gross margin for farming software at 85 percent, compared to 25 percent for equipment sales.

Just like other commercial software vendors, however, Deere exercises close control and restricts what can be done with its products. This led farm labor advocacy groups to file a complaint to the US Federal Trade Commission last year, claiming that Deere unlawfully refused to provide the software and technical data necessary to repair its machinery.

“Deere is the dominant force in the $68 billion US agricultural equipment market, controlling over 50 per cent of the market for large tractors and combines,” said Fairmark Partners, the groups’ attorneys, in a preface to the complaint [PDF].

“For many farmers and ranchers, they effectively have no choice but to purchase their equipment from Deere. Not satisfied with dominating just the market for equipment, Deere has sought to leverage its power in that market to monopolize the market for repairs of that equipment, to the detriment of farmers, ranchers, and independent repair providers.”

[…]

The MoU, which can be read here [PDF], was signed yesterday at the 2023 AFBF Convention in San Juan, Puerto Rico, and seems to be a commitment by Deere to improve farmers’ access and choice when it comes to repairs.

[…]

Duvall said on a podcast about the matter that the MoU is the result of several years’ work. “As you use equipment, we all know at some point in time, there’s going to be problems with it. And we did have problems with having the opportunity to repair our equipment where we wanted to, or even repair it on the farm,” he added.

“It ensures that our farmers can repair their equipment and have access to the diagnostic tools and product guides so that they can find the problems and find solutions for them. And this is the beginning of a process that we think is going to be real healthy for our farmers and for the company because what it does is it sets up an opportunity for our farmers to really work with John Deere on a personal basis.”

[…]

Source: John Deere signs right to repair agreement • The Register

But… still gives John Deere access to their data for free?

This may also have something to do with the security of John Deere machines being so incredibly piss poor, mainly due to really bad update hygiene.

Epic Forced To Pay $520 Million Fine over Fortnite Privacy and Dark Patterns

Fortnite-maker Epic Games has agreed to pay a massive $520 million fine in settlements with the Federal Trade Commission for allegedly illegally gathering data from children and deploying dark patterns techniques to manipulate users into making unwanted in-game purchases. The fines mark a major regulatory win for the Biden administration’s progressive-minded FTC, which, up until now, had largely failed to deliver on its promise of more robust enforcement against U.S. tech companies.

The first $275 million fine will settle allegations that Epic collected personal information from children under the age of 13 without their parents’ consent when they played the hugely popular battle royale game. The FTC claims that unjustified data collection violates the Children’s Online Privacy Protection Act. Internal Epic surveys and the licensing of Fortnite-branded toys, the FTC alleges, show Epic clearly knew at least some of its player base was underage. Worse still, the agency claims Epic forced parents to wade through cumbersome barriers when they requested to have their children’s data deleted.

[…]

The game-maker additionally agreed to pay $245 million to refund customers who the FTC says fell victim to manipulative, unfair billing practices that fall under the category of “dark patterns.” Fortnite allegedly deployed a “counterintuitive, inconsistent, and confusing button configuration” that led players to incur unwanted charges with a single press of a button. In some cases, the FTC claims, that single button press meant users were charged while sitting in a loading screen or while trying to wake the game from sleep mode. Users, the complaint alleges, collectively lost hundreds of millions of dollars to those shady practices. Epic allegedly “ignored more than one million user complaints,” suggesting a high number of users were being wrongly charged.

[…]

And though the FTC’s latest fine is a far cry from the $5 billion penalty the agency issued against Facebook in 2019, and represents just a portion of the billions Fortnite reportedly rakes in each year, supporters said it nonetheless represents more than a mere slap on the wrist.

[…]

Source: Epic Forced To Pay Record-Breaking $520 Million Fine

China’s Setting the Standard for Deepfake Regulation

[…]

On January 10, according to the South China Morning Post, China’s Cyberspace Administration will implement new rules that are intended to protect people from having their voice or image digitally impersonated without their consent. The regulators refer to platforms and services using the technology to edit a person’s voice or image as “deep synthesis providers.”

Those deep synthesis technologies could include the use of deep learning algorithms and augmented reality to generate text, audio, images or video. We’ve already seen numerous instances over the years of these technologies used to impersonate high profile individuals, ranging from celebrities and tech executives to political figures.

Under the new guidelines, companies and technologists who use the technology must first contact, and receive consent from, individuals before they edit their voice or image. The rules, officially called the Administrative Provisions on Deep Synthesis for Internet Information Services, come in response to governmental concerns that advances in AI tech could be used by bad actors to run scams or defame people by impersonating their identity. In presenting the guidelines, the regulators also acknowledge areas where these technologies could prove useful. Rather than impose a wholesale ban, the regulator says it would actually promote the tech’s legal use and “provide powerful legal protection to ensure and facilitate” its development.

But, like many of China’s proposed tech policies, political considerations are inseparable. According to the South China Morning Post, news stories reposted using the technology must come from a government approved list of news outlets. Similarly, the rules require all so-called deep synthesis providers adhere to local laws and maintain “correct political direction and correct public opinion orientation.” Correct here, of course, is determined unilaterally by the state.

Though certain U.S. states like New Jersey and Illinois have introduced local privacy legislation that addresses deepfakes, the lack of any meaningful federal privacy law limits regulators’ ability to address the tech on a national level. In the private sector, major U.S. platforms like Facebook and Twitter have created new systems meant to detect and flag deepfakes, though they are constantly trying to stay one step ahead of bad actors continually looking for ways to evade those filters.

If China’s new rules are successful, they could lay down a policy framework other nations could build upon and adapt. It wouldn’t be the first time China’s led the pack on strict tech reform. Last year, China introduced sweeping new data privacy laws that radically limited the ways private companies could collect an individual’s personal data. Those rules were built off of Europe’s General Data Protection Regulation.

[…]

That all sounds great, but China’s privacy laws have one glaring loophole tucked within them. Though the law protects people from private companies feeding off their data, it does almost nothing to prevent those same harms being carried out by the government. Similarly, with deepfakes, it’s unclear how the newly proposed regulations would, for instance, prohibit a state-run agency from doctoring or manipulating certain text or audio to influence the narrative around controversial or sensitive political events.

Source: China’s Setting the Standard for Deepfake Regulation

China is also the one setting the bar for anti-monopolistic practices; the EU and US have been caught with their fingers in the jam jar and their pants down.

Telegram is auctioning phone numbers to let users sign up to the service without any SIM

After putting unique usernames up for auction on the TON blockchain, Telegram is now putting anonymous numbers up for bidding. These numbers can be used to sign up for Telegram without needing any SIM card.

Just like the username auction, you can buy these virtual numbers on Fragment, which is a site specially created for Telegram-related auctions. To buy a number, you will have to link your TON wallet (Tonkeeper) to the website.

You can buy a random number for as low as 9 toncoins, which is equivalent to roughly $16.50 at the time of writing. Some of the premium virtual numbers — such as +888-8-888 — are selling for 31,500 toncoins (~$58,200).

Notably, you can only use this number to sign up for Telegram. You can’t use it to receive SMS or calls or use it to register for another service.

For Telegram, this is another way of asking its most loyal supporters to support the app by helping it make some money. The company launched its premium subscription plan earlier this year. On Tuesday, the chat app’s founder Pavel Durov said that Telegram has more than 1 million paid users just a few months after the launch of its premium features. While Telegram offers features like cross-device sync and large groups, it’s important to remember that chats are not protected by end-to-end encryption.

As for folks who want anonymization, Telegram already lets you hide your phone number. Alternatively, there are tons of virtual phone number services out there — including Google Voice, Hushed, and India-based Doosra — that allow you to receive calls and SMS as well.

Source: Telegram is auctioning phone numbers to let users sign up to the service without any SIM

Grad Students Analyze, Hack, and Remove Under-Desk Surveillance Devices Designed to Track Them – at a privacy institute!

[…]

graduate students at Northeastern University were able to organize and beat back an attempt at introducing invasive surveillance devices that were quietly placed under desks at their school.

Early in October, Senior Vice Provost David Luzzi installed motion sensors under all the desks at the school’s Interdisciplinary Science & Engineering Complex (ISEC), a facility used by graduate students and home to the “Cybersecurity and Privacy Institute” which studies surveillance. These sensors were installed at night—without student knowledge or consent—and when pressed for an explanation, students were told this was part of a study on “desk usage,” according to a blog post by Max von Hippel, a Privacy Institute PhD candidate who wrote about the situation for the Tech Workers Coalition’s newsletter.

[…]

In response, students began to raise concerns about the sensors, and an email was sent out by Luzzi attempting to address issues raised by students.

[…]

“The results will be used to develop best practices for assigning desks and seating within ISEC (and EXP in due course).”

To that end, Luzzi wrote, the university had deployed “a Spaceti occupancy monitoring system” that would use heat sensors at groin level to “aggregate data by subzones to generate when a desk is occupied or not.” Luzzi added that the data would be anonymized, aggregated to look at “themes” and not individual time at assigned desks, not be used in evaluations, and not shared with any supervisors of the students. Following that email, an impromptu listening session was held in the ISEC.

At this first listening session, Luzzi asked that grad student attendees “trust the university since you trust them to give you a degree.” He also maintained that “we are not doing any science here” as another defense of the decision not to seek IRB approval.

“He just showed up. We’re all working, we have paper deadlines and all sorts of work to do. So he didn’t tell us he was coming, showed up demanding an audience, and a bunch of students spoke with him,”

[…]

After that, the students at the Privacy Institute, which specializes in studying surveillance and reversing its harms, started removing the sensors, hacking into them, and working on an open source guide so other students could do the same. Luzzi had claimed the devices were secure and the data encrypted, but Privacy Institute students learned they were relatively insecure and unencrypted.

[…]

After hacking the devices, students wrote an open letter to Luzzi and university president Joseph E. Aoun asking for the sensors to be removed because they were intimidating, part of a poorly conceived study, and deployed without IRB approval even though human subjects were at the center of the so-called study.

“Resident in ISEC is the Cybersecurity and Privacy Institute, one of the world’s leading groups studying privacy and tracking, with a particular focus on IoT devices,” the letter reads. “To deploy an under-desk tracking system to the very researchers who regularly expose the perils of these technologies is, at best, an extremely poor look for a university that routinely touts these researchers’ accomplishments.

[…]

Another listening session followed, this time for professors only, where Luzzi claimed the devices were not subject to IRB approval because “they don’t sense humans in particular – they sense any heat source.” More sensors were removed afterwards and put into a “public art piece” in the building lobby spelling out NO!

[…]

Afterwards, von Hippel took to Twitter and shared what became a semi-viral thread documenting the entire timeline of events, from the secret installation of the sensors to the listening session occurring that day. Hours later, the sensors were removed.

[…]

This was a particularly instructive episode because it shows that surveillance need not be permanent—that it can be rooted out by the people affected by it, together.

[…]

“The most powerful tool at the disposal of graduate students is the ability to strike. Fundamentally, the university runs on graduate students.

[…]

“The computer science department was able to organize quickly because almost everybody is a union member, has signed a card, and are all networked together via the union. As soon as this happened, we communicated over union channels.

[…]

This sort of rapid response is key, especially as more and more systems adopt sensors for increasingly spurious or concerning reasons. Sensors have been rolled out at other universities like Carnegie Mellon University, as well as public school systems. They’ve seen use in more militarized and carceral settings such as the US-Mexico border or within America’s prison system.

These rollouts are part of what Cory Doctorow calls the “shitty technology adoption curve,” whereby horrible, unethical and immoral technologies are normalized and rationalized by being deployed on vulnerable populations for constantly shifting reasons. You start with people whose concerns can be ignored—migrants, prisoners, homeless populations—then scale it upwards—children in school, contractors, un-unionized workers. By the time it gets to people whose concerns and objections would be the loudest and most integral to its rejection, the technology has already been widely deployed.

[…]

Source: ‘NO’: Grad Students Analyze, Hack, and Remove Under-Desk Surveillance Devices Designed to Track Them

As US, UK Embrace ‘Age Verify Everyone!’ French Data Protection Agency Says Age Verification Is Unreliable And Violates Privacy Rights

[…]

We’ve already spent many, many words explaining how age verification technology is inherently dangerous and actually puts children at greater risk. Not to mention it’s a privacy nightmare that normalizes the idea of mass surveillance, especially for children.

But, why take our word for it?

The French data protection agency, CNIL, has declared that no age verification technology in existence can be deemed as safe and not dangerous to privacy rights.

Now, there are many things that I disagree with CNIL about, especially its views that the censorial “right to be forgotten in the EU” should be applied globally. But one thing we likely agree on is that CNIL does not fuck around when it comes to data protection stuff. CNIL is generally seen as the most aggressive and most thorough in its data protection/data privacy work. Being on the wrong side of CNIL is a dangerous place for any company to be.

So I’d take it seriously when CNIL effectively notes that all age verification is a privacy nightmare, especially for children:

The CNIL has analysed several existing solutions for online age verification, checking whether they have the following properties: sufficiently reliable verification, complete coverage of the population and respect for the protection of individuals’ data and privacy and their security.

The CNIL finds that there is currently no solution that satisfactorily meets these three requirements.

Basically, CNIL found that all existing age verification techniques are unreliable, easily bypassed, and are horrible regarding privacy.

Despite this, CNIL seems oddly optimistic that just by nerding harder, perhaps future solutions will magically work. However, it does go through the weaknesses and problems of the various offerings being pushed today as solutions. For example, you may recall that when I called out the dangers of the age verification in California’s Age Appropriate Design Code, a trade group representing age verification companies reached out to me to let me know there was nothing to worry about, because they’d just scan everyone’s faces to visit websites. CNIL points out some, um, issues with this:

The use of such systems, because of their intrusive aspect (access to the camera on the user’s device during an initial enrolment with a third party, or a one-off verification by the same third party, which may be the source of blackmail via the webcam when accessing a pornographic site is requested), as well as because of the margin of error inherent in any statistical evaluation, should imperatively be conditional upon compliance with operating, reliability and performance standards. Such requirements should be independently verified.

This type of method must also be implemented by a trusted third party respecting precise specifications, particularly concerning access to pornographic sites. Thus, an age estimate performed locally on the user’s terminal should be preferred in order to minimise the risk of data leakage. In the absence of such a framework, this method should not be deployed.

Every other verification technique seems to raise similar questions about effectiveness and about how protective (or, well, how unprotective) of privacy rights it is.

So… why isn’t this raising alarm bells among the various legislatures and children’s advocates (many of whom also claim to be privacy advocates) who are pushing for these laws?

Source: As US, UK Embrace ‘Age Verify Everyone!’ French Data Protection Agency Says Age Verification Is Unreliable And Violates Privacy Rights | Techdirt