The Linkielist

Linking ideas with the world

HBO hit with class action lawsuit for allegedly sharing subscriber data with Facebook

HBO is facing a class action lawsuit over allegations that it gave subscribers’ viewing history to Facebook without proper permission, Variety has reported. The suit accuses HBO of providing Facebook with customer lists, allowing the social network to match viewing habits with their profiles.

It further alleges that HBO knows Facebook can combine the data because HBO is a major Facebook advertiser — and Facebook can then use that information to retarget ads to its subscribers. Since HBO never received proper customer consent to do this, it allegedly violated the 1988 Video Privacy Protection Act (VPPA), according to the lawsuit.

HBO, like other sites, discloses to users that it and its partners use cookies to deliver personalized ads. However, the VPPA requires separate consent from users before their video viewing history can be shared. “A standard privacy policy will not suffice,” according to the suit.

Other streaming providers have been hit with similar claims, and TikTok recently agreed to pay a $92 million settlement for (in part) violating the VPPA. In another case, however, a judge ruled in 2015 that Hulu didn’t knowingly share data with Facebook that could establish an individual’s viewing history. The law firm involved in the HBO suit previously won a $50 million settlement with Hearst after alleging that it violated Michigan privacy laws by selling subscriber data.

Source: HBO hit with class action lawsuit for allegedly sharing subscriber data with Facebook | Engadget

Italy slaps creepy web-scraping facial recognition firm Clearview AI with €20 million fine

Italy’s data privacy watchdog said it will fine the controversial facial recognition firm Clearview AI for breaching EU law. An investigation by Garante, Italy’s data protection authority, found that the company’s database of 10 billion images of faces includes those of Italians and residents in Italy. The New York City-based firm is being fined €20 million, and will also have to delete any facial biometrics it holds of Italian nationals.

This isn’t the first time that the beleaguered facial recognition tech company has faced legal consequences. The UK data protection authority last November fined the company £17 million after finding that its practices—which include collecting photos of people without their consent, from security camera footage to mugshots—violated the nation’s data protection laws. The company has also been banned in Sweden, France and Australia.

The accumulated fines will be a considerable blow for the now five-year-old company, completely wiping away the $30 million it raised in its last funding round. But Clearview AI appears to be just getting started. The company is on track to patent its biometric database, which scans faces across public internet data and has been used by law enforcement agencies around the world, including police departments in the United States and a number of federal agencies. A number of Democratic lawmakers have urged federal agencies to drop their contracts with Clearview AI, claiming that the tool is a severe threat to the privacy of everyday citizens; in a letter to the Department of Homeland Security, Sens. Ed Markey and Jeff Merkley and Reps. Pramila Jayapal and Ayanna Pressley urged the agency to discontinue its use of the tool.

“Clearview AI reportedly scrapes billions of photos from social media sites without permission from or notice to the pictured individuals. In conjunction with the company’s facial recognition capabilities, this trove of personal information is capable of fundamentally dismantling Americans’ expectation that they can move, assemble, or simply appear in public without being identified,” wrote the authors of the letter.

Despite losing troves of facial recognition data from entire countries, Clearview AI has a plan to rapidly expand this year. The company told investors that it is on track to have 100 billion photos of faces in its database within a year, reported The Washington Post. In its pitch deck, the company said it hopes to secure an additional $50 million from investors to build even more facial recognition tools and ramp up its lobbying efforts.

Source: Italy slaps facial recognition firm Clearview AI with €20 million fine | Engadget

UK Online Safety Bill to require more data to use social media – e.g. send them your passport

The country’s forthcoming Online Safety Bill will require citizens to hand over even more personal data to largely foreign-headquartered social media platforms, government minister Nadine Dorries has declared.

“The vast majority of social networks used in the UK do not require people to share any personal details about themselves – they are able to identify themselves by a nickname, alias or other term not linked to a legal identity,” said Dorries, Secretary of State for Digital, Culture, Media and Sport (DCMS).

Another legal duty to be imposed on social media platforms will be a requirement to give users a “block” button, something that has been part of most of today’s platforms since their launch.

“When it comes to verifying identities,” said DCMS in a statement, “some platforms may choose to provide users with an option to verify their profile picture to ensure it is a true likeness. Or they could use two-factor authentication where a platform sends a prompt to a user’s mobile number for them to verify.”

“Alternatively,” continued the statement, “verification could include people using a government-issued ID such as a passport to create or update an account.”

Two-factor authentication is a login technology to prevent account hijacking by malicious people, not a method of verifying a user’s government-approved identity.
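
To make the distinction concrete, here is a minimal sketch of a time-based one-time password (RFC 6238), the mechanism behind most of those mobile prompts. The secret value is a placeholder. Nothing in it touches legal identity; it only proves possession of a previously shared secret:

```ts
// A minimal TOTP sketch (RFC 6238) using Node's built-in crypto module.
// Note what the code proves: possession of a shared secret (i.e. of the
// enrolled device), not who is holding it, let alone their legal identity.
import { createHmac } from "node:crypto";

function totp(secret: Buffer, stepSeconds = 30, digits = 6): string {
  const counter = Math.floor(Date.now() / 1000 / stepSeconds);
  const msg = Buffer.alloc(8);
  msg.writeBigUInt64BE(BigInt(counter)); // 8-byte big-endian time counter
  const hmac = createHmac("sha1", secret).update(msg).digest();
  const offset = hmac[hmac.length - 1] & 0x0f; // RFC 4226 dynamic truncation
  const code = (hmac.readUInt32BE(offset) & 0x7fffffff) % 10 ** digits;
  return code.toString().padStart(digits, "0");
}

// Anyone, or anything, holding the same secret produces the same code.
console.log(totp(Buffer.from("12345678901234567890")));
```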

“People will now have more control over who can contact them and be able to stop the tidal wave of hate served up to them by rogue algorithms,” said Dorries.

Social networks offering services to Britons don’t currently require lots of personal data to register as a user. Most people see this as a benefit; the government seems to see it as a negative.

Today’s statement has led to widespread concerns that DCMS will place UK residents at greater risk of online identity theft or of falling victim to a data breach.

The Online Safety Bill was renamed from the Online Harms Bill shortly before its formal introduction to Parliament. Widely regarded by the technically literate as a disaster in the making, the bill, critics have said, risks creating an “algorithm-driven censorship future” through new regulations that would make it legally risky for platforms not to proactively censor users’ posts.

It is also closely linked to strong rhetoric discouraging end-to-end encryption rollouts for the sake of “minors”, and its requirements would mean that tech platforms attempting to comply would have to weaken security measures.

Parliamentary efforts at properly scrutinising the draft bill then led to the “scrutineers” instead publishing a manifesto asking for even stronger legal weapons to be included.

[…]

Source: Online Safety Bill to require more data to use social media

EU Data Watchdog Calls for Total Ban of Pegasus Spyware

Israeli authorities say it should be probed and U.S. authorities are calling for it to be sanctioned, but EU officials have a different idea for how to handle Pegasus spyware: just ban that shit entirely.

That’s the main takeaway from a new memo released on Tuesday by the EDPS, the Union’s dedicated data watchdog, noting that a full-on ban across the entire region is the only appropriate response to the “unprecedented risks” the tech poses—not only to people’s devices but “to democracy and the rule of law.”

“As the specific technical characteristics of spyware tools like Pegasus make control over their use very difficult, we have to rethink the entire existing system of safeguards established to protect our fundamental rights and freedoms,” the report reads. “Pegasus constitutes a paradigm shift in terms of access to private communications and devices. This fact makes its use incompatible with our democratic values.”

A “paradigm shift” is a good way to describe the tool, which has been used to target a mounting number of civic actors, activists, and political figures from around the globe, including some notable figures from inside the EU. This past summer, local outlets reported that French president Emmanuel Macron surfaced on a list of people that foreign actors had planned to target with the software, and later reports revealed traces of the tech on phones belonging to Macron’s current staffers. Officials from other EU member states like Hungary and Spain have also reported the tech on their devices, and Poland became the latest member to join the list last month when a team of researchers found the spyware being used to surveil three outspoken critics of the Polish government.

[…]

Source: EU Data Watchdog Calls for Total Ban of Pegasus Spyware

Is Microsoft Stealing People’s Bookmarks, passwords, ID / passport numbers without consent?

I received email from two people who told me that Microsoft Edge enabled synching without warning or consent, which means that Microsoft sucked up all of their bookmarks. Of course they can turn synching off, but it’s too late.

Has this happened to anyone else, or was this user error of some sort? If this is real, can some reporter write about it?

(Not that “user error” is a good justification. Any system where making a simple mistake means that you’ve forever lost your privacy isn’t a good one. We see this same situation with sharing contact lists with apps on smartphones. Apps will repeatedly ask, and only need you to accidentally click “okay” once.)

EDITED TO ADD: It’s actually worse than I thought. Edge urges users to store passwords, ID numbers, and even passport numbers, all of which get uploaded to Microsoft by default when synch is enabled.

Source: Is Microsoft Stealing People’s Bookmarks? – Schneier on Security

Suicide Hotline Collected, Monetized The Data Of Desperate People, Because Of Course It Did

Crisis Text Line, one of the nation’s largest nonprofit support options for the suicidal, is in some hot water. A Politico report last week highlighted how the company has been caught collecting and monetizing the data of callers… to create and market customer service software. More specifically, Crisis Text Line says it “anonymizes” some user and interaction data (ranging from the frequency certain words are used, to the type of distress users are experiencing) and sells it to a for-profit partner named Loris.ai. Crisis Text Line has a minority stake in Loris.ai, and gets a cut of their revenues in exchange.

As we’ve seen in countless privacy scandals before this one, the idea that this data is “anonymized” is once again held up as some kind of get out of jail free card:

“Crisis Text Line says any data it shares with that company, Loris.ai, has been wholly “anonymized,” stripped of any details that could be used to identify people who contacted the helpline in distress. Both entities say their goal is to improve the world — in Loris’ case, by making “customer support more human, empathetic, and scalable.”

But as we’ve noted more times than I can count, “anonymized” is effectively a meaningless term in the privacy realm. Study after study after study has shown that it’s relatively trivial to identify a user’s “anonymized” footprint when that data is combined with a variety of other datasets. For a long time the press couldn’t be bothered to point this out, something that’s thankfully starting to change.
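
To see why, consider a toy sketch with invented records: a handful of quasi-identifiers is often enough to join a "stripped" dataset against any public one. Latanya Sweeney famously estimated that ZIP code, birth date, and sex alone uniquely identify the large majority of Americans.

```ts
// Toy illustration: re-identifying an "anonymized" dataset by joining it
// with a public one on a few quasi-identifiers. All records are invented;
// note that the "anonymized" data never needs to contain a name at all.
type AnonRow = { zip: string; birthDate: string; sex: string; topic: string };
type PublicRow = { zip: string; birthDate: string; sex: string; name: string };

function reidentify(anon: AnonRow[], pub: PublicRow[]) {
  // Index the public dataset by the quasi-identifier triple.
  const index = new Map<string, PublicRow[]>();
  for (const p of pub) {
    const key = `${p.zip}|${p.birthDate}|${p.sex}`;
    const bucket = index.get(key) ?? [];
    bucket.push(p);
    index.set(key, bucket);
  }
  // Any row whose triple matches exactly one public record is re-identified.
  return anon.flatMap((a) => {
    const matches = index.get(`${a.zip}|${a.birthDate}|${a.sex}`) ?? [];
    return matches.length === 1 ? [{ name: matches[0].name, topic: a.topic }] : [];
  });
}
```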

[…]

Source: Suicide Hotline Collected, Monetized The Data Of Desperate People, Because Of Course It Did | Techdirt

Google adds new opt-out tracking for Workspace Customers

[…]

according to a new FAQ posted on Google’s Workspace administrator forum. At the end of that month, the company will be adding a new feature—“Workspace search history”—that can continue to track these customers, even if they, or their admins, turn activity tracking off.

The worst part? Unlike Google’s activity trackers that are politely defaulted to “off” for all users, this new Workspace-specific feature will be defaulted to “on,” across Workspace apps like Gmail, Google Drive, Google Meet, and more.

[…]

Luckily, they can turn this option off if they want to, the same way they could turn off activity settings until now. According to Google, the option to do so will be right on the “My Activity” page once the feature goes live, right alongside the current options to flip off Google’s ability to keep tabs on their web activity, location history, and YouTube history. On this page, Google says the option to turn off Workspace history will be located on the far left-hand side, under the “Other Google Activity” tab.

[…]

Source: Google Makes Opting Out Harder for Workspace Customers

LG Announces New Ad Targeting Features for TVs – wait, wtf, I bought my TV, not a service!

[…]

there are plenty of cases where you throw down hundreds of dollars for a piece of hardware and then you end up being the product anyway. Case in point: TVs.

On Wednesday, the television giant LG announced a new offering to advertisers that promises to be able to reach the company’s millions of connected devices in households across the country, pummeling TV viewers with—you guessed it—targeted ads. While ads playing on your connected TV might not be anything new, some of the metrics the company plans to hand over to advertisers include targeting viewers by specific demographics, for example, or being able to tie a TV ad view to someone’s in-store purchase down the line.

If you swap out a TV screen for a computer screen, the kind of microtargeting that LG’s offering doesn’t sound any different than what a company like Facebook or Google would offer. That’s kind of the point.

[…]

Aside from being an eyesore that literally no TV user wants, these ads come bundled with their own privacy issues, too. While the kinds of invasive tracking and targeting that regularly happen with the ads on your Facebook feed or Google search results are built off of more than a decade’s worth of infrastructure, those in the connected television (or so-called “CTV”) space are clearly catching up, and catching up fast. Aside from what LG’s offering, there are other players in adtech right now that offer ways to connect your in-app activity, or the billboards you walk by, to what you watch on TV. For whatever reason, this sort of tech largely sidesteps the kinds of privacy rules that regulators are trying to wrap their heads around right now—regulations like CPRA and GDPR are largely designed to govern how your data is handled on the web, not on TV.

[…]

The good news is that you have some sort of refuge from this ad-ridden hell, though it does take a few extra steps. If you own a smart TV, you can simply not connect it to the internet and use another device—an ad-free set-top box like an Apple TV, for instance—to access apps. Sure, a smart TV is dead simple to use, but the privacy trade-offs might wind up being too great.

Source: LG Announces New Ad Targeting Features for TVs

How normal am I? – Let an AI judge you

This is an art project by Tijmen Schep that shows how face detection algorithms are increasingly used to judge you. It was made as part of the European Union’s Sherpa research program.

No personal data is sent to our server in any way. Nothing. Zilch. Nada. All the face detection algorithms will run on your own computer, in the browser.

In this ‘test’ your face is compared with that of all the other people who came before you. At the end of the show you can, if you want to, share some anonymized data. That will then be used to re-calculate the new average. That anonymous data is not shared any further.
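
Client-side detection of this kind genuinely needs no server round-trip. A minimal sketch of the same principle using the browser's experimental Shape Detection API (behind a flag in Chromium-based browsers; the project itself reportedly ships its own models to the browser):

```ts
// Hypothetical sketch: count faces in an image entirely on-device.
// FaceDetector is experimental and not in the standard TypeScript DOM
// typings, so we declare the interface ourselves.
declare class FaceDetector {
  detect(image: HTMLImageElement): Promise<Array<{ boundingBox: DOMRectReadOnly }>>;
}

async function countFacesLocally(img: HTMLImageElement): Promise<number> {
  const detector = new FaceDetector();
  const faces = await detector.detect(img); // runs in the browser; no pixels are uploaded
  return faces.length;
}
```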

Source: How normal am I?

How to Download Everything Amazon Knows About You (It’s a Lot)

[…]To be clear, data collection is far from an Amazon-specific problem; it’s pretty much par for the course when it comes to tech companies. Even Apple, a company vocal about user privacy, has faced criticism in the past for recording Siri interactions and sharing them with third-party contractors.

The issue with Amazon, however, is the extent to which they collect and archive your data. Just about everything you do on, with, and around an Amazon product or service is logged and recorded. Sure, you might not be surprised to learn that when you visit Amazon’s website, the company logs your browsing history and shopping data. But it goes far beyond that. Since Amazon owns Whole Foods, it also saves your shopping history there. When you watch video content through its platforms, it records all of that information, too.

Things get even creepier with other Amazon products. If you read books on a Kindle, Amazon records your reading activity, including the speed of your page turns (I wonder if Bezos prefers a slow or fast page flip); if you peered into your Amazon data, you might find something similar to what a Reuters reporter found: On Aug. 8, 2020, someone on that account read The Mitchell Sisters: A Complete Romance Series from 4:52 p.m. through 7:36 p.m., completing 428 pages. (Nice sprint.)

If you have one of Amazon’s smart speakers, you’re on the record with everything you’ve ever uttered to the device: When you ask Alexa a question or give it a command, Amazon saves the audio files for the entire interaction. If you know how to access your data, you can listen to every one of those audio files, and relive moments you may or may not have realized were recorded.

Another Reuters reporter found Amazon saved over 90,000 recordings over a three-and-a-half-year period, which included the reporter’s children asking Alexa questions, recordings of those same children apologizing to their parents, and, in some cases, extended conversations that were outside the scope of a reasonable Alexa query.

Unfortunately, while you can access this data, Amazon doesn’t make it possible to delete much of it. You can tweak your privacy settings to stop your devices from recording quite as much information. However, once data is logged, the main way to delete it is to delete the entire account it is associated with. But even if you can’t delete the data while sticking with your account, you do have a right to see what data Amazon has on you, and it’s simple to request.

How to download all of your Amazon data

To start, go to Amazon’s Help page. You’ll find the link under Security and Privacy > More in Security & Privacy > Privacy > How Do I Request My Data? Once there, click the “Request My Data” link.

From the dropdown menu, choose the data you want from Amazon. If you want everything, choose “Request All Your Data.” Hit “Submit Request,” then click the validation link in your email. That’s it. Amazon makes it easy to see what they have on you, probably because they know you can’t do anything about it.

[Reuters]

Source: How to Download Everything Amazon Knows About You (It’s a Lot)

German IT security watchdog: No evidence of censorship function in Xiaomi phones

Germany’s federal cybersecurity watchdog, the BSI, did not find any evidence of censorship functions in mobile phones manufactured by China’s Xiaomi Corp (1810.HK), a spokesperson said on Thursday.

Lithuania’s state cybersecurity body had said in September that Xiaomi phones had a built-in ability to detect and censor terms such as “Free Tibet”, “Long live Taiwan independence” or “democracy movement”. Following these accusations, the BSI began an examination that lasted several months.

“As a result, the BSI was unable to identify any anomalies that would require further investigation or other measures,” the BSI spokesperson said.

Source: German IT security watchdog: No evidence of censorship function in Xiaomi phones | Reuters

This App Will Tell Android Users If an AirTag Is Tracking Them

Apple’s AirTags and Find My service can be helpful for finding things you lose—but they also introduce a big privacy problem. While those of us on iOS have had some tools for fighting those issues, Apple left those of us on Android without much to work with. A new Android AirTag finder app finally addresses some of those concerns.

How AirTags work

[…]

The Find My network employs the passive use of hundreds of millions of Apple devices to help expand your search. That way, you can locate your lost items even if they’re too far away for traditional wireless tracking. Your lost AirTag may be out of your own phone’s Bluetooth range, but it may not be far from another Apple device.

[…]

The Tracker Detect app comes out of a need for better security in the Find My network. Having such a wide network to track a tiny, easy-to-miss device could make it easy for a bad actor to use AirTags to track a person without their knowledge.

People pointed out this vulnerability pretty soon after Apple announced the AirTags. With more than 113 million iPhones in the U.S., not to mention other Apple devices, the Find My network could be one of the widest tracking systems available. A device as small and easy-to-use as an AirTag on that network could make stalking easier than ever.

That said, Apple has a built-in feature designed to prevent tracking. If your iPhone senses that a strange AirTag, separated from its owner, is following you, it will send you an alert. If that AirTag is not found, it will start to make a sound anywhere from 8 to 24 hours after being separated from its owner.

However, Android users haven’t had these protections. That’s where Tracker Detect comes in; with this new Android AirTag app, you can scan the area to see if anyone may be tracking your location with an AirTag or other Find My-enabled accessory.
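
Under the hood, an AirTag separated from its owner is just a Bluetooth LE beacon broadcasting Apple-formatted advertisements. Here is a hypothetical sketch of the kind of filter a scanner has to apply. The 0x004C company ID is Apple's registered Bluetooth identifier; the 0x12 "offline finding" payload type comes from public reverse engineering (e.g. the OpenHaystack project), not from Apple documentation, so treat it as an assumption:

```ts
// Decide whether a parsed BLE advertisement looks like an Apple
// "offline finding" (Find My) frame. Constants per the caveats above.
const APPLE_COMPANY_ID = 0x004c;      // Apple's Bluetooth SIG company ID
const OFFLINE_FINDING_TYPE = 0x12;    // reverse-engineered frame type byte

function looksLikeFindMyAdvert(companyId: number, payload: Uint8Array): boolean {
  return (
    companyId === APPLE_COMPANY_ID &&
    payload.length > 0 &&
    payload[0] === OFFLINE_FINDING_TYPE // first payload byte tags the frame type
  );
}
```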

How to use Tracker Detect

If you’re concerned about people tracking you, download the Tracker Detect app from the Google Play Store. You don’t need an Apple account or any Apple devices to use it.

The app won’t scan automatically, so you’ll have to look for devices manually. To do that, open the app and tap Scan. Apple says it may take up to 15 minutes to find an AirTag that’s separated from its owner. You can tap Stop Scanning to end the search if you feel safe, and if the app detects something, it will mark it as Unknown AirTag.

Once the app has detected an AirTag, you can have it play a sound through the tag for up to ten minutes to help you find it. When you find the AirTag, you can scan it with an NFC reader to learn more about it.

[…]

Source: This App Will Tell Android Users If an AirTag Is Tracking Them

Banks, ISPs Increasingly Embrace ‘Voice Print’ Authentication Despite Growing Security Risk

While it’s certainly possible to sometimes do biometrics well, a long line of companies frequently… don’t. Voice print authentication is particularly shaky, especially given the rise of inexpensive voice deepfake technology. But, much like the continued use of text-message two-factor authentication (which is increasingly shown to not be secure), it apparently doesn’t matter to a long list of companies.

Banks and telecom giants alike have started embracing voice authentication tech at significant scale despite the added threat to user privacy and security. And they’re increasingly collecting user “voice print” data without any way to opt out:

“despite multiple high-profile cases of scammers successfully stealing money by impersonating people via deepfake audio, big banks and ISPs are rolling out voice-based authentication at scale. The worst offender that I could find is Chase. There is no “opt in”. There doesn’t even appear to be a formal way to “opt out”! There is literally no way for me to call my bank without my voice being “fingerprinted” without my consent.”

[…]

Source: Banks, ISPs Increasingly Embrace ‘Voice Print’ Authentication Despite Growing Security Risk | Techdirt

Apple Removes All References to Controversial CSAM Scanning Feature – where they would scan all the pictures you took

Apple has quietly nixed all mentions of CSAM from its Child Safety webpage, suggesting its controversial plan to detect child sexual abuse images on iPhones and iPads may hang in the balance following significant criticism of its methods.

Apple in August announced a planned suite of new child safety features, including scanning users’ iCloud Photos libraries for Child Sexual Abuse Material (CSAM), Communication Safety to warn children and their parents when receiving or sending sexually explicit photos, and expanded CSAM guidance in Siri and Search.

Following their announcement, the features were criticized by a wide range of individuals and organizations, including security researchers, the privacy whistleblower Edward Snowden, the Electronic Frontier Foundation (EFF), Facebook’s former security chief, politicians, policy groups, university researchers, and even some Apple employees.

The majority of criticism was leveled at Apple’s planned on-device CSAM detection, which was lambasted by researchers for relying on dangerous technology that bordered on surveillance, and derided for being ineffective at identifying images of child sexual abuse.

[…]

Source: Apple Removes All References to Controversial CSAM Scanning Feature From Its Child Safety Webpage [Updated] – MacRumors

Report: VPNs Are Often a Mixed Bag for Privacy

[…] Consumer Reports, which recently published a 48-page white paper on VPNs that looks into the privacy and security policies of 16 prominent VPN providers. Researchers initially looked into some 51 different companies but ultimately homed in on the most prominent, high-quality providers. The results are decidedly mixed, with the report highlighting a lot of the long-offered criticisms of the industry—namely, its lack of transparency, its PR bullshit, and its not-always-stellar security practices. On the flip side, a small coterie of VPNs actually seems pretty good.

[…]

Consumers may often believe that by using a VPN they are able to become completely invisible online, as companies promise stuff like “unrivaled internet anonymity,” and the ability to “keep your browsing private and protect yourself from hackers and online tracking,” and so on and so forth.

In reality, there are still a whole variety of ways that companies and advertisers can track you across the internet—even if your IP address is hidden behind a virtual veil.
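
A minimal sketch of the problem: a page can fold a few ordinary browser properties into a stable hash that identifies you regardless of which IP address your VPN presents. Every API used here is standard and requires no permission:

```ts
// Naive browser fingerprint: same browser => same hash, whatever the IP.
async function fingerprint(): Promise<string> {
  const signal = [
    navigator.userAgent,
    navigator.language,
    `${screen.width}x${screen.height}x${screen.colorDepth}`,
    Intl.DateTimeFormat().resolvedOptions().timeZone,
    String(navigator.hardwareConcurrency),
  ].join("|");
  const digest = await crypto.subtle.digest("SHA-256", new TextEncoder().encode(signal));
  // Hex-encode the digest into a stable identifier.
  return [...new Uint8Array(digest)].map((b) => b.toString(16).padStart(2, "0")).join("");
}
```

Real trackers combine far more signals (canvas rendering, installed fonts, audio stack), but even this handful is often enough to follow a "hidden" user across sites.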

[…]

via a tool developed by a group of University of Michigan researchers, dubbed the “VPNalyzer” test suite, which was able to look at various security issues with VPN connections. The research team found that “malicious and deceptive behaviors by VPN providers such as traffic interception and manipulation are not widespread but are not nonexistent. In total, the VPNalyzer team filed more than 29 responsible disclosures, 19 of which were for VPNs also studied in this report, and is awaiting responses regarding its findings.”

The CR’s own analysis found “little evidence” of VPNs “manipulating users’ networking traffic when testing for evidence of TLS interception,” though they did occasionally run into examples of data leakage.

And, as should hopefully go without saying, any VPN with the word “free” near it should be avoided at all costs, lest you accidentally download some sort of Trojan onto your device and casually commit digital hara-kiri.

[…]

According to CR’s review, four VPN providers rose to the top of the list in terms of their privacy and security practices. They were:

[…]

Apparently in that order.

These companies stood out mostly by not over-promising what they could deliver, while also scoring high on scales of transparency and security.

[…]

Source: Report: VPNs Are Often a Mixed Bag for Privacy

Prisons snoop on inmates’ phone calls with speech-to-text AI

Prisons around the US are installing AI speech-to-text models to automatically transcribe conversations with inmates during their phone calls.

A series of contracts and emails from eight different states revealed how Verus, an AI application developed by LEO Technologies and based on a speech-to-text system offered by Amazon, was used to eavesdrop on prisoners’ phone calls.

In a sales pitch, LEO’s CEO James Sexton told officials working for a jail in Cook County, Illinois, that one of its customers in Calhoun County, Alabama, uses the software to protect prisons from getting sued, according to an investigation by the Thomson Reuters Foundation.

“(The) sheriff believes (the calls) will help him fend off pending liability via civil action from inmates and activists,” Sexton said. Verus transcribes phone calls and flags certain keywords related to issues like COVID-19 outbreaks or other complaints about jail conditions.

Prisoners, however, said the tool was used to catch crime. In one case, it allegedly found one inmate illegally collecting unemployment benefits. But privacy advocates aren’t impressed. “T​​he ability to surveil and listen at scale in this rapid way – it is incredibly scary and chilling,” said Julie Mao, deputy director at Just Futures Law, an immigration legal group.
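
The flagging step itself is trivial once speech has been turned into text. A toy sketch, with an invented watchlist and record shape (Verus's actual pipeline is proprietary):

```ts
// Toy keyword flagging over call transcripts. Terms and types are
// invented for illustration; they are not Verus's actual watchlist.
type Transcript = { callId: string; text: string };

const WATCHLIST = ["covid", "outbreak", "lawsuit", "unemployment"]; // hypothetical

function flagCalls(transcripts: Transcript[]) {
  return transcripts.flatMap(({ callId, text }) => {
    const lower = text.toLowerCase();
    const hits = WATCHLIST.filter((term) => lower.includes(term));
    return hits.length ? [{ callId, hits }] : []; // only flagged calls survive
  });
}
```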

[…]

Source: Prisons snoop on inmates’ phone calls with speech-to-text AI • The Register

Executive at Swiss Tech Company Said to Operate Secret Surveillance Operation

The co-founder of a company that has been trusted by technology giants including Google and Twitter to deliver sensitive passwords to millions of their customers also operated a service that ultimately helped governments secretly surveil and track mobile phones, Bloomberg reported Monday, citing former employees and clients. From the report: Since it started in 2013, Mitto AG has established itself as a provider of automated text messages for such things as sales promotions, appointment reminders and security codes needed to log in to online accounts, telling customers that text messages are more likely to be read and engaged with than emails as part of their marketing efforts. Mitto, a closely held company with headquarters in Zug, Switzerland, has grown its business by establishing relationships with telecom operators in more than 100 countries. It has brokered deals that gave it the ability to deliver text messages to billions of phones in most corners of the world, including countries that are otherwise difficult for Western companies to penetrate, such as Iran and Afghanistan. Mitto has attracted major technology giants as customers, including Google, Twitter, WhatsApp, Microsoft’s LinkedIn and messaging app Telegram, in addition to China’s TikTok, Tencent and Alibaba, according to Mitto documents and former employees.

But a Bloomberg News investigation, carried out in collaboration with the London-based Bureau of Investigative Journalism, indicates that the company’s co-founder and chief operating officer, Ilja Gorelik, was also providing another service: selling access to Mitto’s networks to secretly locate people via their mobile phones. That Mitto’s networks were also being used for surveillance work wasn’t shared with the company’s technology clients or the mobile operators Mitto works with to spread its text messages and other communications, according to four former Mitto employees. The existence of the alternate service was known only to a small number of people within the company, these people said. Gorelik sold the service to surveillance-technology companies which in turn contracted with government agencies, according to the employees.

Source: Executive at Swiss Tech Company Said to Operate Secret Surveillance Operation – Slashdot

Life360 Reportedly Sells Location Data of Families and Kids

Life360, a popular tracking app that bills itself as “the world’s leading family safety service,” is purportedly selling location data on the 31 million families and kids that use it to data brokers. The chilling revelation may make users of the Tile Bluetooth tracker, which is being bought by Life360, think twice before continuing to use the device.

Life360’s data selling practices were revealed in a damning report published by the Markup on Monday. The report claims that Life360 sells location data on its users to roughly a dozen data brokers, some of which have sold data to U.S. government contractors. The data brokers then proceed to sell the location data to “virtually anyone who wants to buy it.” Life360 is purportedly one of the largest sources of data for the industry, the outlet found.

While selling location data on families and kids is already alarming, what’s even more frightening is that Life360 is purportedly failing to take steps to protect the privacy of the data it sells. This could potentially allow the location data, which the company says is anonymized, to be linked back to the people it belongs to.

[…]

Source: Life360 Reportedly Sells Location Data of Families and Kids

Documents Shows Just How Much The FBI Can Obtain From Encrypted Communication Services

There is no “going dark.” Consecutive FBI heads may insist there is, but a document created by their own agency contradicts their dire claims that end-to-end encryption lets the criminals and terrorists win.

Andy Kroll has the document and the details for Rolling Stone:

[I]n a previously unreported FBI document obtained by Rolling Stone, the bureau claims that it’s particularly easy to harvest data from Facebook’s WhatsApp and Apple’s iMessage services, as long as the FBI has a warrant or subpoena. Judging by this document, “the most popular encrypted messaging apps iMessage and WhatsApp are also the most permissive,” according to Mallory Knodel, the chief technology officer at the Center for Democracy and Technology.

The document [PDF] shows what can be obtained from which messaging service, with the FBI noting WhatsApp has plenty of information investigators can obtain, including almost real-time collection of communications metadata.

WhatsApp will produce certain user metadata, though not actual message content, every 15 minutes in response to a pen register, the FBI says. The FBI guide explains that most messaging services do not or cannot do this and instead provide data with a lag and not in anything close to real time: “Return data provided by the companies listed below, with the exception of WhatsApp, are actually logs of latent data that are provided to law enforcement in a non-real-time manner and may impact investigations due to delivery delays.”

The FBI can obtain this info with a pen register order — the legal request used for years to obtain ongoing call data on targeted numbers, including numbers called and length of conversations. With a warrant, the FBI can get even more information. A surprising amount, actually. According to the document, WhatsApp turns over address book contacts for targeted users as well as other WhatsApp users who happen to have the targeted person in their address books.

Combine this form of contact chaining with a few pen register orders, and the FBI can basically eavesdrop on hundreds of conversations in near-real time. The caveat, of course, is that the FBI has no access to the content of the conversations. That remains locked up by WhatsApp’s encryption. Communications remain “warrant-proof,” to use a phrase bandied about by FBI directors. But is it really?
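
The fan-out is easy to picture with a toy model of contact chaining (structures invented for illustration): one target exposes both their own address book and every user who merely has the target saved.

```ts
// Toy contact-chaining model: user -> set of contacts.
type AddressBooks = Map<string, Set<string>>;

function contactChain(books: AddressBooks, target: string): Set<string> {
  // Start with the target's own address book.
  const exposed = new Set<string>(books.get(target) ?? []);
  // Add every user who merely has the target in *their* address book.
  for (const [user, contacts] of books) {
    if (contacts.has(target)) exposed.add(user);
  }
  return exposed; // one request, a whole social graph
}
```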

If investigators are able to access the contents of a phone (by seizing the phone or receiving permission from someone to view their end of conversations), encryption is no longer a problem. That’s one way to get past the going darkness. Then there’s stuff stored in the cloud, which can give law enforcement access to communications despite the presence of end-to-end encryption. Backups of messages might not be encrypted and — as the document points out — a warrant will put those in the hands of law enforcement.

If target is using an iPhone and iCloud backups enabled, iCloud returns may contain WhatsApp data, to include message content.

[…]

Source: Documents Shows Just How Much The FBI Can Obtain From Encrypted Communication Services | Techdirt

Qualcomm’s new always-on smartphone camera is always looking out for you

“Your phone’s front camera is always securely looking for your face, even if you don’t touch it or raise to wake it.” That’s how Qualcomm Technologies vice president of product management Judd Heape introduced the company’s new always-on camera capabilities in the Snapdragon 8 Gen 1 processor set to arrive in top-shelf Android phones early next year.

[…]

But for those of us with any sense of how modern technology is used to violate our privacy, a camera on our phone that’s always capturing images even when we’re not using it sounds like the stuff of nightmares and has a cost to our privacy that far outweighs any potential convenience benefits.

Qualcomm’s main pitch for this feature is for unlocking your phone any time you glance at it, even if it’s just sitting on a table or propped up on a stand. You don’t need to pick it up or tap the screen or say a voice command — it just unlocks when it sees your face. I can see this being useful if your hands are messy or otherwise occupied (in its presentation, Qualcomm used the example of using it while cooking a recipe to check the next steps). Maybe you’ve got your phone mounted in your car, and you can just glance over at it to see driving directions without having to take your hands off the steering wheel or leave the screen on the entire time.

[…]

Qualcomm is framing the always-on camera as similar to the always-on microphones that have been in our phones for years. Those are used to listen for voice commands like “Hey Siri” or “Hey Google” (or lol, “Hi Bixby”) and then wake up the phone and provide a response, all without you having to touch or pick up the phone. But the difference is that they are listening for specific wake words and are often limited with what they can do until you do actually pick up your phone and unlock it.

It feels a bit different when it’s a camera that’s always scanning for a likeness.

It’s true that smart home products already have features like this. Google’s Nest Hub Max uses its camera to recognize your face when you walk up to it and greet you with personal information like your calendar. Home security cameras and video doorbells are constantly on, looking for activity or even specific faces. But those devices are in your home, not always carried with you everywhere you go, and generally don’t have your most private information stored on them, like your phone does. They also frequently have features like physical shutters to block the camera or intelligent modes to disable recording when you’re home and only resume it when you aren’t. It’s hard to imagine any phone manufacturer putting a physical shutter on the front of their slim and sleek flagship smartphone.

Lastly, there have been many reports of security breaches and social engineering hacks to enable smart home cameras when they aren’t supposed to be on and then send that feed to remote servers, all without the knowledge of the homeowner. Modern smartphone operating systems now do a good job of telling you when an app is accessing your camera or microphone while you’re using the device, but it’s not clear how they’d be able to inform you of a rogue app tapping into the always-on camera.

To be honest, these things are also pretty damn scary! I understand that Americans have been habituated to ubiquitous surveillance, but here in the EU we still value our privacy and don’t like it much at all.

Ultimately, it comes down to a level of trust — do you trust that Qualcomm has set up the system in a way that prevents the always-on camera from being used for other purposes than intended? Do you trust that the OEM using Qualcomm’s chips won’t do things to interfere with the system, either for their own profit or to satisfy the demands of a government entity?

Even if you do have that trust, accepting an always-on camera on your most personal device demands a level of comfort that goes beyond where most of us are today.

Maybe we’ll just start having to put tape on our smartphone cameras like we already do with laptop webcams.

Source: Qualcomm’s new always-on smartphone camera is a potential privacy nightmare – The Verge

WhatsApp privacy policy tweaked in Europe after record fine

Following an investigation, the Irish data protection watchdog issued a €225m (£190m) fine – the second-largest GDPR fine in history – and ordered WhatsApp to change its policies.

WhatsApp is appealing against the fine, but is amending its policy documents in Europe and the UK to comply.

However, it insists that nothing about its actual service is changing.

Instead, the tweaks are designed to “add additional detail around our existing practices”, and will only appear in the European version of the privacy policy, which is already different from the version that applies in the rest of the world.

“There are no changes to our processes or contractual agreements with users, and users will not be required to agree to anything or to take any action in order to continue using WhatsApp,” the company said, announcing the change.

The new policy takes effect immediately.

User revolt

In January, WhatsApp users complained about an update to the company’s terms that many believed would result in data being shared with parent company Facebook, which is now called Meta.

Many thought refusing to agree to the new terms and conditions would result in their accounts being blocked.

In reality, very little had changed. However, WhatsApp was forced to delay its changes and spend months fighting the public perception to the contrary.

During the confusion, millions of users downloaded WhatsApp competitors such as Signal.

[…]

The new privacy policy contains substantially more information about what exactly is done with users’ information, and how WhatsApp works with Meta, the parent company for WhatsApp, Facebook and Instagram.

Source: WhatsApp privacy policy tweaked in Europe after record fine – BBC News

The Amazon lobbyists who kill U.S. consumer privacy protections

In recent years, Amazon.com Inc has killed or undermined privacy protections in more than three dozen bills across 25 states, as the e-commerce giant amassed a lucrative trove of personal data on millions of American consumers.

Amazon executives and staffers detail these lobbying victories in confidential documents reviewed by Reuters.

In Virginia, the company boosted political donations tenfold over four years before persuading lawmakers this year to pass an industry-friendly privacy bill that Amazon itself drafted. In California, the company stifled proposed restrictions on the industry’s collection and sharing of consumer voice recordings gathered by tech devices. And in its home state of Washington, Amazon won so many exemptions and amendments to a bill regulating biometric data, such as voice recordings or facial scans, that the resulting 2017 law had “little, if any” impact on its practices, according to an internal Amazon document.

[…]

Source: The Amazon lobbyists who kill U.S. consumer privacy protections

This is a detailed and creepy look at how Amazon undermines protections in the US and the amount and scope of data they collect.

South Korea Is Giving Millions of Photos of all foreign travelers since 2019 to Facial Recognition Researchers

The South Korean Ministry of Justice has provided more than 100 million photos of foreign nationals who travelled through the country’s airports to facial recognition companies without their consent, according to attorneys with the non-governmental organization Lawyers for a Democratic Society.

While the use of facial recognition technology has become common for governments across the world, advocates in South Korea are calling the practice a “human rights disaster” that is relatively unprecedented.

“It’s unheard-of for state organizations—whose duty it is to manage and control facial recognition technology—to hand over biometric information collected for public purposes to a private-sector company for the development of technology,” six civic groups said during a press conference last week.

The revelation, first reported in the South Korean newspaper The Hankyoreh, came to light after National Assembly member Park Joo-min requested and received documents from the Ministry of Justice related to an April 2019 project titled Artificial Intelligence and Tracking System Construction Project. The documents show private companies secretly used biometric data to research and develop an advanced immigration screening system that would utilize artificial intelligence to automatically identify airport users through CCTV surveillance cameras and detect dangerous situations in real time.

Shortly after the discovery, civil liberty groups announced plans to represent both foreign and domestic victims in a lawsuit.

[…]

Despite this pushback, the technology is increasingly used in commercial spaces and airports. This holiday season, Delta Air Lines will be piloting a facial recognition boarding program in Atlanta, following similar moves by JetBlue. US Customs and Border Protection is already relying on facial recognition technology in dozens of locations.

While the South Korean government’s collaboration with the private sector is unprecedented in its scale, it is not the only collaboration of its kind. In 2019, a Motherboard investigation revealed the Departments of Motor Vehicles in numerous states had been selling names, addresses and other personal data to insurance or tow companies and to private investigators.

Source: South Korea Is Giving Millions of Photos to Facial Recognition Researchers

How to Stop Chrome From Sharing Your Motion Data on Android

[…] Mysk, a duo of app developers and security researchers, recently exposed Chrome’s shadiness on Twitter. In the tweet, Mysk brings to light that, by default, Chrome is sharing your phone’s motion data with the websites you visit. This is not cool.

Why you don’t want third parties accessing your motion data

To start with, this is—as I have pointed out—creepy af. The data comes from your phone’s accelerometer, the sensor responsible for tracking the device’s orientation and position. That sensor makes it possible to switch from portrait to landscape mode, as well as track you and your phone’s motion. For example, it empowers fitness apps to know how many steps you took, so long as you had your phone on you.

Since most of us keep our phones in our pocket or on our person, there is a lot of motion data generated on the device throughout the day. Google Chrome, by design, allows any website you click on to request that motion data, and hands it over with gusto. Researchers have found that these sites use accelerometer data to monitor ad interactions, check ad impressions, and to track your device (well, duh). Those first two, however, are infuriatingly sketchy; websites don’t just want to know if you’ll click on an ad or not, they want to know how you physically interact with these popups. Hey, why stop there? Why not tap into my camera and see what color shirt I’m wearing?
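
How little effort does this take for a website? Roughly this much: the standard devicemotion event, which Chrome for Android has historically delivered without any permission prompt (the "Motion sensors" site setting discussed below is the off switch):

```ts
// Any page can listen for accelerometer readings via the standard
// devicemotion event; in Chrome for Android this fires without a
// permission prompt while the "Motion sensors" site setting is on.
window.addEventListener("devicemotion", (event: DeviceMotionEvent) => {
  const a = event.accelerationIncludingGravity;
  if (a) {
    // Fires many times per second for as long as the page is open.
    console.log(`accel x=${a.x} y=${a.y} z=${a.z}`);
  }
});
```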

How to stop Chrome from sharing motion data with sites

Delete the app from your phone. Kidding. I know the vast majority of people on Android aren’t going to want to switch from Chrome to another mobile browser. That said, privacy-minded users might want to jump ship to something more reputable—like Firefox—and, if so, good for you.

But there are plenty of benefits to sticking with Chrome, especially on Android (considering the platform is also designed and operated by Google). If you don’t want to take the most drastic step, you can simply toggle a setting to block Google from sharing this data. As Mysk points out in their tweet, you can disable motion-data-sharing from Chrome’s settings.

Here’s how: Open the app, tap the three dots in the top-right corner, then choose “Settings.” Next, scroll down, tap “Site settings,” then “Motion sensors.” Turn off the toggle here to make sure no more third-party sites can ask for your motion data from here on out.

Source: How to Stop Chrome From Sharing Your Motion Data on Android

Microsoft will now snitch on you at work like never before

[…]

this news again comes courtesy of Microsoft’s roadmap service, where Redmond prepares you for the joys to come.

This time, there are a couple of joys.

The first is headlined: “Microsoft 365 compliance center: Insider risk management — Increased visibility on browsers.”

It all sounded wonderful until you read those last four words, didn’t it? For this is the roadmap for administrators. And when you give a kindly administrator “increased visibility on browsers,” you can feel sure this means an elevated level of surveillance of what employees are typing into those browsers.

In this case, Microsoft is targeting “risky activity.” Which, presumably, has some sort of definition. It offers a link to its compliance center, where the very first sentence has whistleblower built in: “Web browsers are often used by users to access both sensitive and non-sensitive files within an organization.”

And what is the compliance center monitoring? Why, “files copied to personal cloud storage, files printed to local or network devices, files transferred or copied to a network share, files copied to USB devices.”

You always assumed this was the case? Perhaps. But now there will be mysteriously increased visibility.

“How might this visibility be increased?,” I hear you shudder. Well, there’s another little roadmap update that may, just may, offer a clue.

This one proclaims: “Microsoft 365 compliance center: Insider risk management — New ML detectors.”

Yes, your company will soon have extra-special robots to crawl along after you and observe your every “risky” action. It’s not enough to have increased visibility on browsers. You must also have Machine Learning constantly alert for someone revealing your lunch schedule.

Microsoft offers a link to its Insider Risk Management page. This enjoys some delicious phrasing: “Customers acknowledge insights related to the individual user’s behavior, character, or performance materially related to employment can be calculated by the administrator and made available to others in the organization.”

Yes, even your character is being examined here.

[…]

Source: Microsoft will now snitch on you at work like never before | ZDNet