The Linkielist

Linking ideas with the world


Brave’s De-AMP feature bypasses harmful Google AMP pages

Brave announced a new feature for its browser on Tuesday: De-AMP, which automatically jumps past any page rendered with Google’s Accelerated Mobile Pages framework and instead takes users straight to the original website. “Where possible, De-AMP will rewrite links and URLs to prevent users from visiting AMP pages altogether,” Brave said in a blog post. “And in cases where that is not possible, Brave will watch as pages are being fetched and redirect users away from AMP pages before the page is even rendered, preventing AMP / Google code from being loaded and executed.”

Brave framed De-AMP as a privacy feature and didn’t mince words about its stance toward Google’s version of the web. “In practice, AMP is harmful to users and to the Web at large,” Brave’s blog post said, before explaining that AMP gives Google even more knowledge of users’ browsing habits, confuses users, and can often be slower than normal web pages. And it warned that the next version of AMP — so far just called AMP 2.0 — will be even worse.

Brave’s stance is a particularly strong one, but the tide has turned hard against AMP over the last couple of years. Google originally created the framework in order to simplify and speed up mobile websites, and AMP is now managed by a group of open-source contributors. It was controversial from the very beginning and smelled to some like Google trying to exert even more control over the web. Over time, more companies and users grew concerned about that control and chafed at the idea that Google would prioritize AMP pages in search results. Plus, the rest of the internet eventually figured out how to make good mobile sites, which made AMP — and similar projects like Facebook Instant Articles — less important.

A number of popular apps and browser extensions make it easy for users to skip over AMP pages, and in recent years, publishers (including The Verge’s parent company Vox Media) have moved away from using it altogether. AMP has even become part of the antitrust fight against Google: a lawsuit alleged that AMP helped centralize Google’s power as an ad exchange and that Google made non-AMP ads load slower.

[…]

Source: Brave’s De-AMP feature bypasses ‘harmful’ Google AMP pages – The Verge

Cisco’s Webex phoned home audio telemetry even when muted

Boffins at two US universities have found that muting popular native video-conferencing apps fails to disable device microphones – these apps retain the ability to access audio data while muted, and at least one actually does so.

The research is described in a paper titled, “Are You Really Muted?: A Privacy Analysis of Mute Buttons in Video Conferencing Apps,” [PDF] by Yucheng Yang (University of Wisconsin-Madison), Jack West (Loyola University Chicago), George K. Thiruvathukal (Loyola University Chicago), Neil Klingensmith (Loyola University Chicago), and Kassem Fawaz (University of Wisconsin-Madison).

The paper is scheduled to be presented at the Privacy Enhancing Technologies Symposium in July.

[…]

Among the apps studied – Zoom (Enterprise), Slack, Microsoft Teams/Skype, Cisco Webex, Google Meet, BlueJeans, WhereBy, GoToMeeting, Jitsi Meet, and Discord – most presented only limited or theoretical privacy concerns.

The researchers found that all of these apps had the ability to capture audio when the mic is muted but most did not take advantage of this capability. One, however, was found to be taking measurements from audio signals even when the mic was supposedly off.

“We discovered that all of the apps in our study could actively query (i.e., retrieve raw audio) the microphone when the user is muted,” the paper says. “Interestingly, in both Windows and macOS, we found that Cisco Webex queries the microphone regardless of the status of the mute button.”

They found that Webex, every minute or so, sends network packets “containing audio-derived telemetry data to its servers, even when the microphone was muted.”

[…]

Worse still from a security standpoint, while other apps encrypted their outgoing data stream before sending it to the operating system’s socket interface, Webex did not.

“Only in Webex were we able to intercept plaintext immediately before it is passed to the Windows network socket API,” the paper says, noting that the app’s monitoring behavior is inconsistent with the Webex privacy policy.

The app’s privacy policy states Cisco Webex Meetings does not “monitor or interfere with you your [sic] meeting traffic or content.”

[…]

Source: Cisco’s Webex phoned home audio telemetry even when muted • The Register

Mega-Popular Muslim Prayer Apps Were Secretly Harvesting Phone Numbers

Google recently booted over a dozen apps from its Play Store—among them Muslim prayer apps with 10 million-plus downloads, a barcode scanner, and a clock—after researchers discovered secret data-harvesting code hidden within them. Creepier still, the clandestine code was engineered by a company linked to a Virginia defense contractor, which paid developers to incorporate its code into their apps to pilfer users’ data.

In the course of their research, the researchers came upon a piece of code implanted in multiple apps that was being used to siphon off personal identifiers and other data from devices. The code, a software development kit, or SDK, could “without a doubt be described as malware,” one researcher said.

For the most part, the apps in question appear to have served basic, repetitive functions—the sort that a person might download and then promptly forget about. However, once implanted onto the user’s phone, the SDK-laced programs harvested important data points about the device and its users like phone numbers and email addresses, researchers revealed.

The Wall Street Journal originally reported that the weird, invasive code was discovered by a pair of researchers, Serge Egelman and Joel Reardon, both of whom co-founded an organization called AppCensus, which audits mobile apps for user privacy and security. In a blog post on their findings, Reardon writes that AppCensus initially reached out to Google about their findings in October of 2021. However, the apps ultimately weren’t expunged from the Play store until March 25, after Google had investigated, the Journal reports.

[…]

Source: Mega-Popular Muslim Prayer Apps Were Secretly Harvesting Phone Numbers

EU, US strike preliminary deal to unlock transatlantic data flows – yup, the EU will let the US spy on its citizens freely again

Negotiators have been working on an agreement — which allows Europeans’ personal data to flow to the United States — since the EU’s top court struck down the Privacy Shield agreement in July 2020 because of fears that the data was not safe from access by American agencies once transferred across the Atlantic.

The EU chief’s comments Friday show both sides have reached a political breakthrough, coinciding with U.S. President Joe Biden’s visit to Brussels this week.

“I am pleased that we found an agreement in principle on a new framework for transatlantic data flows. This will enable predictable and trustworthy data flows between the EU and U.S., safeguarding privacy and civil liberties,” she said.

Biden said the framework would allow the EU “to once again authorize transatlantic data flows that help facilitate $7.1 trillion in economic relationships.”

Friday’s announcement will come as a relief to the hundreds of companies that had faced mounting legal uncertainty over how to shuttle everything from payroll information to social media post data to the U.S.

Officials on both sides of the Atlantic had been struggling to bridge an impasse over what it means to give Europeans effective legal redress against surveillance by U.S. authorities. Not all of those issues have been resolved, though von der Leyen’s comments Friday suggest technical solutions are within reach.

Despite the ripples of relief Friday’s announcement will send through the business community, any deal is likely to be challenged in the courts by privacy campaigners.

Source: EU, US strike preliminary deal to unlock transatlantic data flows – POLITICO

Messages, Dialer apps sent text, call info to Google

Google’s Messages and Dialer apps for Android devices have been collecting and sending data to Google without specific notice and consent, and without offering the opportunity to opt out, potentially in violation of Europe’s data protection law.

According to a research paper, “What Data Do The Google Dialer and Messages Apps On Android Send to Google?” [PDF], by Trinity College Dublin computer science professor Douglas Leith, Google Messages (for text messaging) and Google Dialer (for phone calls) have been sending data about user communications to the Google Play Services Clearcut logger service and to Google’s Firebase Analytics service.

“The data sent by Google Messages includes a hash of the message text, allowing linking of sender and receiver in a message exchange,” the paper says. “The data sent by Google Dialer includes the call time and duration, again allowing linking of the two handsets engaged in a phone call. Phone numbers are also sent to Google.”

The timing and duration of other user interactions with these apps have also been transmitted to Google. And Google offers no way to opt out of this data collection.

[…]

From the Messages app, Google takes the message content and a timestamp, generates a SHA256 hash (the output of an algorithm that maps human-readable content to a fixed-length alphanumeric digest), and then transmits a truncated 128-bit portion of that hash to Google’s Clearcut logger and Firebase Analytics.

Hashes are designed to be difficult to reverse, but in the case of short messages, Leith said he believes some of these could be undone to recover some of the message content.

“I’m told by colleagues that yes, in principle this is likely to be possible,” Leith said in an email to The Register today. “The hash includes a hourly timestamp, so it would involve generating hashes for all combinations of timestamps and target messages and comparing these against the observed hash for a match – feasible I think for short messages given modern compute power.”
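To get a feel for why short messages are recoverable, here is a minimal Python sketch of the attack Leith describes. The exact hash construction Google uses is not public, so the `truncated_hash` helper below is a hypothetical stand-in (SHA256 over an hourly timestamp plus the message text, truncated to 128 bits); the brute-force loop over timestamp/message combinations is the real point.

```python
import hashlib
from itertools import product

def truncated_hash(message: str, hour_timestamp: int) -> bytes:
    """Hypothetical reconstruction: SHA256 over an hourly timestamp and
    the message text, truncated to 128 bits. The exact format Google
    uses is not public; this is only an illustration."""
    digest = hashlib.sha256(f"{hour_timestamp}:{message}".encode()).digest()
    return digest[:16]  # keep the first 128 bits

def brute_force(observed: bytes, candidates: list[str], hours: range):
    """Hash every (timestamp, message) combination and return the
    pairs whose truncated hash matches the observed value."""
    return [(h, m) for h, m in product(hours, candidates)
            if truncated_hash(m, h) == observed]

# Toy demonstration: recover a short message given a small candidate
# dictionary and a bounded window of hourly timestamps.
target = truncated_hash("ok", 472_000)
matches = brute_force(target, ["ok", "yes", "no", "on my way"],
                      range(471_990, 472_010))
```

With a dictionary of plausible short messages and a bounded timestamp window, the search space stays small enough for modern hardware, which is exactly the feasibility argument Leith makes.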

The Dialer app likewise logs incoming and outgoing calls, along with the time and the call duration.

[…]

The paper describes nine recommendations made by Leith and six changes Google has already made or plans to make to address the concerns raised in the paper. The changes Google has agreed to include:

  • Revising the app onboarding flow so that users are notified they’re using a Google app and are presented with a link to Google’s consumer privacy policy.
  • Halting the collection of the sender phone number by the CARRIER_SERVICES log source, of the SIM ICCID, and of a hash of sent/received message text by Google Messages.
  • Halting the logging of call-related events in Firebase Analytics from both Google Dialer and Messages.
  • Shifting more telemetry data collection to use the least long-lived identifier available where possible, rather than linking it to a user’s persistent Android ID.
  • Making it clear when caller ID and spam protection are turned on and how they can be disabled, while also looking at ways to use less information or fuzzed information for safety functions.

[…]

Leith said there are two larger matters related to Google Play Services, which is installed on almost all Android phones outside of China.

“The first is that the logging data sent by Google Play Services is tagged with the Google Android ID which can often be linked to a person’s real identity – so the data is not anonymous,” he said. “The second is that we know very little about what data is being sent by Google Play Services, and for what purpose(s). This study is the first to cast some light on that, but it’s very much just the tip of the iceberg.”

Source: Messages, Dialer apps sent text, call info to Google • The Register

HBO hit with class action lawsuit for allegedly sharing subscriber data with Facebook

HBO is facing a class action lawsuit over allegations that it gave subscribers’ viewing history to Facebook without proper permission, Variety has reported. The suit accuses HBO of providing Facebook with customer lists, allowing the social network to match viewing habits with their profiles.

It further alleges that HBO knows Facebook can combine the data because HBO is a major Facebook advertiser — and Facebook can then use that information to retarget ads to its subscribers. Since HBO never received proper customer consent to do this, it allegedly violated the 1988 Video Privacy Protection Act (VPPA), according to the lawsuit.

HBO, like other sites, discloses to users that it (and partners) use cookies to deliver personalized ads. However, the VPPA requires separate consent from users to share their video viewing history. “A standard privacy policy will not suffice,” according to the suit.

Other streaming providers have been hit with similar claims, and TikTok recently agreed to pay a $92 million settlement for (in part) violating the VPPA. In another case, however, a judge ruled in 2015 that Hulu didn’t knowingly share data with Facebook that could establish an individual’s viewing history. The law firm involved in the HBO suit previously won a $50 million settlement with Hearst after alleging that it violated Michigan privacy laws by selling subscriber data.

Source: HBO hit with class action lawsuit for allegedly sharing subscriber data with Facebook | Engadget

Italy slaps creepy webscraping facial recognition firm Clearview AI with €20 million fine

Italy’s data privacy watchdog said it will fine the controversial facial recognition firm Clearview AI for breaching EU law. An investigation by Garante, Italy’s data protection authority, found that the company’s database of 10 billion images of faces includes those of Italians and residents in Italy. The New York City-based firm is being fined €20 million, and will also have to delete any facial biometrics it holds of Italian nationals.

This isn’t the first time that the beleaguered facial recognition tech company is facing legal consequences. The UK data protection authority last November fined the company £17 million after finding its practices—which include collecting selfies of people without their consent from security camera footage or mugshots—violate the nation’s data protection laws. The company has also been banned in Sweden, France and Australia.

The accumulated fines will be a considerable blow for the now five-year-old company, completely wiping away the $30 million it raised in its last funding round. But Clearview AI appears to be just getting started. The company is on track to patent its biometric database, which scans faces across public internet data and has been used by law enforcement agencies around the world, including police departments in the United States and a number of federal agencies. A number of Democrats have urged federal agencies to drop their contracts with Clearview AI, claiming that the tool is a severe threat to the privacy of everyday citizens. In a letter to the Department of Homeland Security, Sens. Ed Markey and Jeff Merkley and Reps. Pramila Jayapal and Ayanna Pressley urged regulators to discontinue their use of the tool.

“Clearview AI reportedly scrapes billions of photos from social media sites without permission from or notice to the pictured individuals. In conjunction with the company’s facial recognition capabilities, this trove of personal information is capable of fundamentally dismantling Americans’ expectation that they can move, assemble, or simply appear in public without being identified,” wrote the authors of the letter.

Despite losing troves of facial recognition data from entire countries, Clearview AI has a plan to rapidly expand this year. The company told investors that it is on track to have 100 billion photos of faces in its database within a year, reported The Washington Post. In its pitch deck, the company said it hopes to secure an additional $50 million from investors to build even more facial recognition tools and ramp up its lobbying efforts.

Source: Italy slaps facial recognition firm Clearview AI with €20 million fine | Engadget

UK Online Safety Bill to require more data to use social media – eg send them your passport

The country’s forthcoming Online Safety Bill will require citizens to hand over even more personal data to largely foreign-headquartered social media platforms, government minister Nadine Dorries has declared.

“The vast majority of social networks used in the UK do not require people to share any personal details about themselves – they are able to identify themselves by a nickname, alias or other term not linked to a legal identity,” said Dorries, Secretary of State for Digital, Culture, Media and Sport (DCMS).

Another legal duty to be imposed on social media platforms will be a requirement to give users a “block” button, something that has been part of most of today’s platforms since their launch.

“When it comes to verifying identities,” said DCMS in a statement, “some platforms may choose to provide users with an option to verify their profile picture to ensure it is a true likeness. Or they could use two-factor authentication where a platform sends a prompt to a user’s mobile number for them to verify.”

“Alternatively,” continued the statement, “verification could include people using a government-issued ID such as a passport to create or update an account.”

Two-factor authentication is a login technology to prevent account hijacking by malicious people, not a method of verifying a user’s government-approved identity.

“People will now have more control over who can contact them and be able to stop the tidal wave of hate served up to them by rogue algorithms,” said Dorries.

Social networks offering services to Britons don’t currently require lots of personal data to register as a user. Most people see this as a benefit; the government seems to see it as a negative.

Today’s statement has led to widespread concerns that DCMS will place UK residents at greater risk of online identity theft or of falling victim to a data breach.

The Online Safety Bill was renamed from the Online Harms Bill shortly before its formal introduction to Parliament. Widely regarded by the technically literate as a disaster in the making, the bill, critics say, risks creating an “algorithm-driven censorship future” through new regulations that would make it legally risky for platforms not to proactively censor users’ posts.

It is also closely linked to strong rhetoric discouraging end-to-end encryption rollouts for the sake of “minors”, and its requirements would mean that tech platforms attempting to comply would have to weaken security measures.

Parliamentary efforts at properly scrutinising the draft bill then led to the “scrutineers” instead publishing a manifesto asking for even stronger legal weapons to be included.

[…]

Source: Online Safety Bill to require more data to use social media

EU Data Watchdog Calls for Total Ban of Pegasus Spyware

Israeli authorities say it should be probed and U.S. authorities are calling for it to be sanctioned, but EU officials have a different idea for how to handle Pegasus spyware: just ban that shit entirely.

That’s the main takeaway from a new memo released on Tuesday by the EDPS, the Union’s dedicated data watchdog, noting that a full-on ban across the entire region is the only appropriate response to the “unprecedented risks” the tech poses—not only to people’s devices but “to democracy and the rule of law.”

“As the specific technical characteristics of spyware tools like Pegasus make control over their use very difficult, we have to rethink the entire existing system of safeguards established to protect our fundamental rights and freedoms,” the report reads. “Pegasus constitutes a paradigm shift in terms of access to private communications and devices. This fact makes its use incompatible with our democratic values.”

A “paradigm shift” is a good way to describe the tool, which has been used to target a mounting number of civic actors, activists, and political figures around the globe, including some notable figures inside the EU. This past summer, local outlets reported that French president Emmanuel Macron appeared on a list of potential targets selected by foreign actors, and later reports revealed traces of the spyware on phones belonging to Macron’s current staffers. Officials from other EU member states like Hungary and Spain have also reported the tech on their devices, and Poland became the latest member to join the list last month when a team of researchers found the spyware being used to surveil three outspoken critics of the Polish government.

[…]

Source: EU Data Watchdog Calls for Total Ban of Pegasus Spyware

Is Microsoft Stealing People’s Bookmarks, passwords, ID / passport numbers without consent?

I received email from two people who told me that Microsoft Edge enabled syncing without warning or consent, which means that Microsoft sucked up all of their bookmarks. Of course they can turn syncing off, but it’s too late.

Has this happened to anyone else, or was this user error of some sort? If this is real, can some reporter write about it?

(Not that “user error” is a good justification. Any system where making a simple mistake means that you’ve forever lost your privacy isn’t a good one. We see this same situation with sharing contact lists with apps on smartphones. Apps will repeatedly ask, and only need you to accidentally click “okay” once.)

EDITED TO ADD: It’s actually worse than I thought. Edge urges users to store passwords, ID numbers, and even passport numbers, all of which get uploaded to Microsoft by default when sync is enabled.

Source: Is Microsoft Stealing People’s Bookmarks? – Schneier on Security

Suicide Hotline Collected, Monetized The Data Of Desperate People, Because Of Course It Did

Crisis Text Line, one of the nation’s largest nonprofit support options for the suicidal, is in some hot water. A Politico report last week highlighted how the company has been caught collecting and monetizing the data of callers… to create and market customer service software. More specifically, Crisis Text Line says it “anonymizes” some user and interaction data (ranging from the frequency certain words are used, to the type of distress users are experiencing) and sells it to a for-profit partner named Loris.ai. Crisis Text Line has a minority stake in Loris.ai, and gets a cut of their revenues in exchange.

As we’ve seen in countless privacy scandals before this one, the idea that this data is “anonymized” is once again held up as some kind of get out of jail free card:

“Crisis Text Line says any data it shares with that company, Loris.ai, has been wholly “anonymized,” stripped of any details that could be used to identify people who contacted the helpline in distress. Both entities say their goal is to improve the world — in Loris’ case, by making “customer support more human, empathetic, and scalable.”

But as we’ve noted more times than I can count, “anonymized” is effectively a meaningless term in the privacy realm. Study after study after study has shown that it’s relatively trivial to identify a user’s “anonymized” footprint when that data is combined with a variety of other datasets. For a long time the press couldn’t be bothered to point this out, something that’s thankfully starting to change.
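A toy example makes that linkage problem concrete. The data below is entirely invented, but it shows the mechanism those studies describe: strip the names from a dataset, and a join on a couple of quasi-identifiers against any public dataset can put them right back.

```python
# Invented example data: an "anonymized" release with names stripped,
# and a separate public dataset that shares quasi-identifiers with it.
anonymized_release = [
    {"zip": "53703", "birth_year": 1987, "keyword": "panic attacks"},
    {"zip": "60614", "birth_year": 1992, "keyword": "self-harm"},
]
public_profiles = [
    {"name": "A. Jones", "zip": "53703", "birth_year": 1987},
    {"name": "B. Smith", "zip": "98101", "birth_year": 1970},
]

def reidentify(release, profiles):
    """Join the two datasets on (zip, birth_year). Any record with a
    unique match links an 'anonymized' row back to a named person."""
    hits = []
    for record in release:
        matches = [p for p in profiles
                   if (p["zip"], p["birth_year"])
                   == (record["zip"], record["birth_year"])]
        if len(matches) == 1:
            hits.append((matches[0]["name"], record["keyword"]))
    return hits

linked = reidentify(anonymized_release, public_profiles)
```

With only two quasi-identifiers and two tiny datasets, one of the “anonymous” records is already re-identified; real-world linkage attacks have far more columns and far more auxiliary data to work with.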

[…]

Source: Suicide Hotline Collected, Monetized The Data Of Desperate People, Because Of Course It Did | Techdirt

Google adds new opt out tracking for Workspace Customers

[…]

according to a new FAQ posted on Google’s Workspace administrator forum. At the end of that month, the company will be adding a new feature—“Workspace search history”—that can continue to track these customers, even if they, or their admins, turn activity tracking off.

The worst part? Unlike Google’s activity trackers that are politely defaulted to “off” for all users, this new Workspace-specific feature will be defaulted to “on,” across Workspace apps like Gmail, Google Drive, Google Meet, and more.

[…]

Luckily, they can turn this option off if they want to, the same way they could turn off activity settings until now. According to Google, the option to do so will be right on the “My Activity” page once the feature goes live, right alongside the current options to flip off Google’s ability to keep tabs on their web activity, location history, and YouTube history. On this page, Google says the option to turn off Workspace history will be located on the far lefthand side, under the “Other Google Activity” tab.

[…]

Source: Google Makes Opting Out Harder for Workspace Customers

LG Announces New Ad Targeting Features for TVs – wait, wtf, I bought my TV, not a service!

[… ]

there are plenty of cases where you throw down hundreds of dollars for a piece of hardware and then you end up being the product anyway. Case in point: TVs.

On Wednesday, the television giant LG announced a new offering to advertisers that promises to be able to reach the company’s millions of connected devices in households across the country, pummeling TV viewers with—you guessed it—targeted ads. While ads playing on your connected TV might not be anything new, some of the metrics the company plans to hand over to advertisers include targeting viewers by specific demographics, for example, or being able to tie a TV ad view to someone’s in-store purchase down the line.

If you swap out a TV screen for a computer screen, the kind of microtargeting that LG’s offering doesn’t sound any different than what a company like Facebook or Google would offer. That’s kind of the point.

[…]

Aside from being an eyesore that literally no TV user wants, these ads come bundled with their own privacy issues, too. While the kinds of invasive tracking and targeting that regularly happen with the ads on your Facebook feed or Google search results are built off of more than a decade’s worth of infrastructure, those in the connected television (or so-called “CTV”) space are clearly catching up, and catching up fast. Aside from what LG’s offering, there are other players in adtech right now that offer ways to connect your in-app activity, or even the billboards you walk past, to what you watch on TV. For whatever reason, this sort of tech largely sidesteps the kinds of privacy snafus that regulators are trying to wrap their heads around right now—regulations like CPRA and GDPR are largely designed to govern how your data is handled on the web, not on TV.

[…]

The good news is that you have some sort of refuge from this ad-ridden hell, though it does take a few extra steps. If you own a smart TV, you can simply not connect it to the internet and use another device—an ad-free set-top box like an Apple TV, for instance—to access apps. Sure, a smart TV is dead simple to use, but the privacy trade-offs might wind up being too great.

Source: LG Announces New Ad Targeting Features for TVs

How normal am I? – Let an AI judge you

This is an art project by Tijmen Schep that shows how face detection algorithms are increasingly used to judge you. It was made as part of the European Union’s Sherpa research program.

No personal data is sent to our server in any way. Nothing. Zilch. Nada. All the face detection algorithms will run on your own computer, in the browser.

In this ‘test’ your face is compared with that of all the other people who came before you. At the end of the show you can, if you want to, share some anonymized data. That will then be used to re-calculate the new average. That anonymous data is not shared any further.

Source: How normal am I?

How to Download Everything Amazon Knows About You (It’s a Lot)

[…]To be clear, data collection is far from an Amazon-specific problem; it’s pretty much par for the course when it comes to tech companies. Even Apple, a company vocal about user privacy, has faced criticism in the past for recording Siri interactions and sharing them with third-party contractors.

The issue with Amazon, however, is the extent to which they collect and archive your data. Just about everything you do on, with, and around an Amazon product or service is logged and recorded. Sure, you might not be surprised to learn that when you visit Amazon’s website, the company logs your browsing history and shopping data. But it goes far beyond that. Since Amazon owns Whole Foods, it also saves your shopping history there. When you watch video content through its platforms, it records all of that information, too.

Things get even creepier with other Amazon products. If you read books on a Kindle, Amazon records your reading activity, including the speed of your page turns (I wonder if Bezos prefers a slow or fast page flip); if you peered into your Amazon data, you might find something similar to what a Reuters reporter found: On Aug. 8 2020, someone on that account read The Mitchell Sisters: A Complete Romance Series from 4:52 p.m. through 7:36 p.m., completing 428 pages. (Nice sprint.)

If you have one of Amazon’s smart speakers, you’re on the record with everything you’ve ever uttered to the device: When you ask Alexa a question or give it a command, Amazon saves the audio files for the entire interaction. If you know how to access your data, you can listen to every one of those audio files, and relive moments you may or may not have realized were recorded.

Another Reuters reporter found Amazon saved over 90,000 recordings over a three-and-a-half-year period, which included the reporter’s children asking Alexa questions, recordings of those same children apologizing to their parents, and, in some cases, extended conversations that were outside the scope of a reasonable Alexa query.

Unfortunately, while you can access this data, Amazon doesn’t make it possible to delete much of it. You can tweak your privacy settings to stop your devices from recording quite as much information. However, once the data is logged, the main way to delete it is to delete the entire account it is associated with. But even if you can’t delete the data while keeping your account, you do have a right to see what data Amazon has on you, and it’s simple to request.

How to download all of your Amazon data

To start, go to Amazon’s Help page. You’ll find the link under Security and Privacy > More in Security & Privacy > Privacy > How Do I Request My Data? Once there, click the “Request My Data” link.

From the dropdown menu, choose the data you want from Amazon. If you want everything, choose “Request All Your Data.” Hit “Submit Request,” then click the validation link in your email. That’s it. Amazon makes it easy to see what they have on you, probably because they know you can’t do anything about it.

[Reuters]

Source: How to Download Everything Amazon Knows About You (It’s a Lot)

German IT security watchdog: No evidence of censorship function in Xiaomi phones

Germany’s federal cybersecurity watchdog, the BSI, did not find any evidence of censorship functions in mobile phones manufactured by China’s Xiaomi Corp (1810.HK), a spokesperson said on Thursday.

Lithuania’s state cybersecurity body had said in September that Xiaomi phones had a built-in ability to detect and censor terms such as “Free Tibet”, “Long live Taiwan independence” or “democracy movement”. The BSI started an examination following these accusations, which lasted several months.

“As a result, the BSI was unable to identify any anomalies that would require further investigation or other measures,” the BSI spokesperson said.

Source: German IT security watchdog: No evidence of censorship function in Xiaomi phones | Reuters

This App Will Tell Android Users If an AirTag Is Tracking Them

Apple’s AirTags and Find My service can be helpful for finding things you lose—but they also introduce a big privacy problem. While those of us on iOS have had some tools for fighting those issues, Apple left those of us on Android without much to work with. A new Android AirTag finder app finally addresses some of those concerns.

How AirTags work

[…]

The Find My network employs the passive use of hundreds of millions of Apple devices to help expand your search. That way, you can locate your lost items even if they’re too far away for traditional wireless tracking. Your lost AirTag may be out of your own phone’s Bluetooth range, but it may not be far from another Apple device.

[…]

The Tracker Detect app comes out of a need for better security in the Find My network. A network this wide, tracking a tiny, easy-to-miss device, makes it easy for a bad actor to use an AirTag to follow a person without their knowledge.

People pointed out this vulnerability pretty soon after Apple announced the AirTags. With more than 113 million iPhones in the U.S., not to mention other Apple devices, the Find My network could be one of the widest tracking systems available. A device as small and easy-to-use as an AirTag on that network could make stalking easier than ever.

That said, Apple has a built-in feature designed to prevent tracking. If your iPhone senses that a strange AirTag, separated from its owner, is following you, it will send you an alert. If that AirTag is not found, it will start to make a sound anywhere from 8 to 24 hours after being separated from its owner.

However, Android users haven’t had these protections. That’s where Tracker Detect comes in; with this new Android AirTag app, you can scan the area to see if anyone may be tracking your location with an AirTag or other Find My-enabled accessory.

How to use Tracker Detect

If you’re concerned about people tracking you, download the Tracker Detect app from the Google Play Store. You don’t need an Apple account or any Apple devices to use it.

The app won’t scan automatically, so you’ll have to look for devices manually. To do that, open the app and tap Scan. Apple says it may take up to 15 minutes to find an AirTag that’s separated from its owner. You can tap Stop Scanning to end the search if you feel safe, and if the app detects something, it will mark it as Unknown AirTag.

Once the app has detected an AirTag, you can have it play a sound through the tag for up to ten minutes to help you find it. When you find the AirTag, you can scan it with an NFC reader to learn more about it.
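Under the hood, apps like Tracker Detect work by scanning for Bluetooth LE advertisements that carry Apple’s Find My beacon format. The exact format isn’t officially documented; the constants below come from public reverse engineering (e.g. the OpenHaystack project), so treat them as assumptions. A minimal sketch of the payload check might look like this:

```python
# Minimal sketch: flag BLE manufacturer-specific data that looks like a
# Find My ("offline finding") advertisement. The header constants are
# from public reverse engineering, not an official Apple spec.

APPLE_COMPANY_ID = 0x004C   # stored little-endian in the payload
OF_PAYLOAD_TYPE = 0x12      # assumed "offline finding" type byte
OF_PAYLOAD_LEN = 0x19       # assumed length byte for a full OF payload

def looks_like_findmy(mfg_data: bytes) -> bool:
    """Return True if the manufacturer data resembles a Find My beacon."""
    if len(mfg_data) < 4:
        return False
    company = int.from_bytes(mfg_data[:2], "little")
    return (company == APPLE_COMPANY_ID
            and mfg_data[2] == OF_PAYLOAD_TYPE
            and mfg_data[3] == OF_PAYLOAD_LEN)

# Example: Apple company ID + assumed OF header + dummy key/status bytes.
sample = bytes([0x4C, 0x00, 0x12, 0x19]) + bytes(25)
print(looks_like_findmy(sample))               # True
print(looks_like_findmy(b"\x4c\x00\x02\x15"))  # False: iBeacon-style header
```

A real scanner would feed this function from a BLE library’s scan callback; Tracker Detect presumably adds heuristics on top, such as how long the same tag has been seen nearby.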

[…]


Source: This App Will Tell Android Users If an AirTag Is Tracking Them

Banks, ISPs Increasingly Embrace ‘Voice Print’ Authentication Despite Growing Security Risk

While it’s certainly possible to sometimes do biometrics well, a long line of companies frequently… don’t. Voice print authentication is particularly shaky, especially given the rise of inexpensive voice deepfake technology. But, much like the continued use of text-message two-factor authentication (which is increasingly shown to not be secure), it apparently doesn’t matter to a long list of companies.

Banks and telecom giants alike have started embracing voice authentication tech at significant scale despite the added threat to user privacy and security. And they’re increasingly collecting user “voice print” data without any way to opt out:

“despite multiple high-profile cases of scammers successfully stealing money by impersonating people via deepfake audio, big banks and ISPs are rolling out voice-based authentication at scale. The worst offender that I could find is Chase. There is no “opt in”. There doesn’t even appear to be a formal way to “opt out”! There is literally no way for me to call my bank without my voice being “fingerprinted” without my consent.”

[…]

Source: Banks, ISPs Increasingly Embrace ‘Voice Print’ Authentication Despite Growing Security Risk | Techdirt

Apple Removes All References to Controversial CSAM Scanning Feature – where they would scan all the pictures you took

Apple has quietly nixed all mentions of CSAM from its Child Safety webpage, suggesting its controversial plan to detect child sexual abuse images on iPhones and iPads may hang in the balance following significant criticism of its methods.

Apple in August announced a planned suite of new child safety features, including scanning users’ iCloud Photos libraries for Child Sexual Abuse Material (CSAM), Communication Safety to warn children and their parents when receiving or sending sexually explicit photos, and expanded CSAM guidance in Siri and Search.

Following their announcement, the features were criticized by a wide range of individuals and organizations, including security researchers, the privacy whistleblower Edward Snowden, the Electronic Frontier Foundation (EFF), Facebook’s former security chief, politicians, policy groups, university researchers, and even some Apple employees.

The majority of criticism was leveled at Apple’s planned on-device CSAM detection, which was lambasted by researchers for relying on dangerous technology that bordered on surveillance, and derided for being ineffective at identifying images of child sexual abuse.

[…]

Source: Apple Removes All References to Controversial CSAM Scanning Feature From Its Child Safety Webpage [Updated] – MacRumors

Report: VPNs Are Often a Mixed Bag for Privacy

[…] Consumer Reports, which recently published a 48-page white paper on VPNs that looks into the privacy and security policies of 16 prominent VPN providers. Researchers initially looked into some 51 different companies but ultimately homed in on the most prominent, high-quality providers. The results are decidedly mixed, with the report highlighting many of the long-offered criticisms of the industry—namely, its lack of transparency, its PR bullshit, and its not-always-stellar security practices. On the flip side, a small coterie of VPNs actually seem pretty good.

[…]

Consumers may often believe that by using a VPN they are able to become completely invisible online, as companies promise stuff like “unrivaled internet anonymity,” and the ability to “keep your browsing private and protect yourself from hackers and online tracking,” and so on and so forth.

In reality, there are still a whole variety of ways that companies and advertisers can track you across the internet—even if your IP address is hidden behind a virtual veil.

[…]

via a tool developed by a group of University of Michigan researchers, dubbed the “VPNalyzer” test suite, which was able to look at various security issues with VPN connections. The research team found that “malicious and deceptive behaviors by VPN providers such as traffic interception and manipulation are not widespread but are not nonexistent. In total, the VPNalyzer team filed more than 29 responsible disclosures, 19 of which were for VPNs also studied in this report, and is awaiting responses regarding its findings.”

CR’s own analysis found “little evidence” of VPNs “manipulating users’ networking traffic when testing for evidence of TLS interception,” though they did occasionally run into examples of data leakage.
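One basic way to test for TLS interception is to compare the certificate a server presents through the VPN tunnel against a fingerprint recorded out-of-band: if the VPN provider is re-signing traffic, the fingerprints won’t match. The sketch below illustrates only that core idea; it is not VPNalyzer’s or CR’s actual methodology, and the certificate bytes are dummies.

```python
# Illustrative sketch of one TLS-interception check: hash the leaf
# certificate seen over a connection and compare it to a fingerprint
# pinned out-of-band. A mismatch while tunneled through a VPN would
# suggest the provider is intercepting (re-signing) TLS traffic.

import hashlib

def cert_fingerprint(der_bytes: bytes) -> str:
    return hashlib.sha256(der_bytes).hexdigest()

def interception_suspected(seen_der: bytes, pinned_fingerprint: str) -> bool:
    return cert_fingerprint(seen_der) != pinned_fingerprint

# Dummy byte strings stand in for real DER-encoded certificates.
original = b"-- real server certificate --"
resigned = b"-- certificate re-signed by a middlebox --"

pin = cert_fingerprint(original)
print(interception_suspected(original, pin))  # False: cert matches the pin
print(interception_suspected(resigned, pin))  # True: possible interception
```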

And, as should hopefully go without saying, any VPN with the word “free” near it should be avoided at all costs, lest you accidentally download some sort of Trojan onto your device and casually commit digital hara-kiri.

[…]

According to CR’s review, four VPN providers rose to the top of the list in terms of their privacy and security practices. They were:

Apparently in that order.

These companies stood out mostly by not over-promising what they could deliver, while also scoring high on scales of transparency and security.

[…]

Source: Report: VPNs Are Often a Mixed Bag for Privacy

Prisons snoop on inmates’ phone calls with speech-to-text AI

Prisons around the US are installing AI speech-to-text models to automatically transcribe conversations with inmates during their phone calls.

A series of contracts and emails from eight different states revealed how Verus, an AI application developed by LEO Technologies and based on a speech-to-text system offered by Amazon, was used to eavesdrop on prisoners’ phone calls.

In a sales pitch, LEO’s CEO James Sexton told officials working for a jail in Cook County, Illinois, that one of its customers in Calhoun County, Alabama, uses the software to protect prisons from getting sued, according to an investigation by the Thomson Reuters Foundation.

“(The) sheriff believes (the calls) will help him fend off pending liability via civil action from inmates and activists,” Sexton said. Verus transcribes phone calls and finds certain keywords discussing issues like COVID-19 outbreaks or other complaints about jail conditions.
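Verus itself is proprietary, but as described, its core loop is simple: run speech-to-text on a call, then surface calls whose transcripts contain watched keywords. A minimal sketch of that last step, with an invented keyword list for illustration:

```python
# Minimal sketch of keyword flagging over call transcripts, in the
# spirit of what the article describes. The watchlist and matching
# logic here are illustrative; Verus's actual internals are not public.

import re

WATCHLIST = {"covid", "outbreak", "lawsuit", "unemployment"}

def flag_transcript(transcript: str) -> set:
    """Return the watched keywords that appear in a transcript."""
    words = set(re.findall(r"[a-z']+", transcript.lower()))
    return WATCHLIST & words

call = "They still haven't told us anything about the covid outbreak in B block."
print(sorted(flag_transcript(call)))  # ['covid', 'outbreak']
```

In a real deployment the transcript would come from a speech-to-text service (the article says Verus builds on Amazon’s), and flagged calls would be routed to staff for review.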

Prisoners, however, said the tool was used to catch crime. In one case, it allegedly found one inmate illegally collecting unemployment benefits. But privacy advocates aren’t impressed. “T​​he ability to surveil and listen at scale in this rapid way – it is incredibly scary and chilling,” said Julie Mao, deputy director at Just Futures Law, an immigration legal group.

[…]

Source: Prisons snoop on inmates’ phone calls with speech-to-text AI • The Register

Executive at Swiss Tech Company Said to Operate Secret Surveillance Operation

The co-founder of a company that has been trusted by technology giants including Google and Twitter to deliver sensitive passwords to millions of their customers also operated a service that ultimately helped governments secretly surveil and track mobile phones, Bloomberg reported Monday, citing former employees and clients. From the report: Since it started in 2013, Mitto AG has established itself as a provider of automated text messages for such things as sales promotions, appointment reminders and security codes needed to log in to online accounts, telling customers that text messages are more likely to be read and engaged with than emails as part of their marketing efforts. Mitto, a closely held company with headquarters in Zug, Switzerland, has grown its business by establishing relationships with telecom operators in more than 100 countries. It has brokered deals that gave it the ability to deliver text messages to billions of phones in most corners of the world, including countries that are otherwise difficult for Western companies to penetrate, such as Iran and Afghanistan. Mitto has attracted major technology giants as customers, including Google, Twitter, WhatsApp, Microsoft’s LinkedIn and messaging app Telegram, in addition to China’s TikTok, Tencent and Alibaba, according to Mitto documents and former employees.

But a Bloomberg News investigation, carried out in collaboration with the London-based Bureau of Investigative Journalism, indicates that the company’s co-founder and chief operating officer, Ilja Gorelik, was also providing another service: selling access to Mitto’s networks to secretly locate people via their mobile phones. That Mitto’s networks were also being used for surveillance work wasn’t shared with the company’s technology clients or the mobile operators Mitto works with to spread its text messages and other communications, according to four former Mitto employees. The existence of the alternate service was known only to a small number of people within the company, these people said. Gorelik sold the service to surveillance-technology companies which in turn contracted with government agencies, according to the employees.

Source: Executive at Swiss Tech Company Said to Operate Secret Surveillance Operation – Slashdot

Life360 Reportedly Sells Location Data of Families and Kids

Life360, a popular tracking app that bills itself as “the world’s leading family safety service,” is purportedly selling location data on the 31 million families and kids that use it to data brokers. The chilling revelation may make users of the Tile Bluetooth tracker, which is being bought by Life360, think twice before continuing to use the device.

Life360’s data selling practices were revealed in a damning report published by the Markup on Monday. The report claims that Life360 sells location data on its users to roughly a dozen data brokers, some of which have sold data to U.S. government contractors. The data brokers then proceed to sell the location data to “virtually anyone who wants to buy it.” Life360 is purportedly one of the largest sources of data for the industry, the outlet found.

While selling location data on families and kids is already alarming, what’s even more frightening is that Life360 is purportedly failing to take steps to protect the privacy of the data it sells. This could potentially allow the location data, which the company says is anonymized, to be linked back to the people it belongs to.
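The reason “anonymized” location traces are so easy to link back to people is that a device’s most frequent overnight location is usually its owner’s home address. A toy sketch of that re-identification step, using invented coordinates:

```python
# Sketch of why "anonymized" location data re-identifies easily: the
# coordinate a device reports most often overnight is usually its
# owner's home. All data here is invented for illustration.

from collections import Counter

def likely_home(pings):
    """pings: list of (hour_of_day, (lat, lon)) tuples."""
    overnight = [loc for hour, loc in pings if hour >= 22 or hour < 6]
    if not overnight:
        return None
    return Counter(overnight).most_common(1)[0][0]

trace = [
    (23, (51.5074, -0.1278)),  # late evening
    (2,  (51.5074, -0.1278)),  # overnight
    (3,  (51.5074, -0.1278)),  # overnight
    (9,  (51.5155, -0.0922)),  # daytime: office
    (13, (51.5101, -0.1340)),  # lunch spot
]
print(likely_home(trace))  # (51.5074, -0.1278)
```

Combine the inferred home coordinate with a public address database and the “anonymous” device ID has a name attached.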

[…]

Source: Life360 Reportedly Sells Location Data of Families and Kids

Document Shows Just How Much The FBI Can Obtain From Encrypted Communication Services

There is no “going dark.” Consecutive FBI heads may insist there is, but a document created by their own agency contradicts their dire claims that end-to-end encryption lets the criminals and terrorists win.

Andy Kroll has the document and the details for Rolling Stone:

[I]n a previously unreported FBI document obtained by Rolling Stone, the bureau claims that it’s particularly easy to harvest data from Facebook’s WhatsApp and Apple’s iMessage services, as long as the FBI has a warrant or subpoena. Judging by this document, “the most popular encrypted messaging apps iMessage and WhatsApp are also the most permissive,” according to Mallory Knodel, the chief technology officer at the Center for Democracy and Technology.

The document [PDF] shows what can be obtained from which messaging service, with the FBI noting WhatsApp has plenty of information investigators can obtain, including almost real time collection of communications metadata.

WhatsApp will produce certain user metadata, though not actual message content, every 15 minutes in response to a pen register, the FBI says. The FBI guide explains that most messaging services do not or cannot do this and instead provide data with a lag and not in anything close to real time: “Return data provided by the companies listed below, with the exception of WhatsApp, are actually logs of latent data that are provided to law enforcement in a non-real-time manner and may impact investigations due to delivery delays.”

The FBI can obtain this info with a pen register order — the legal request used for years to obtain ongoing call data on targeted numbers, including numbers called and length of conversations. With a warrant, the FBI can get even more information. A surprising amount, actually. According to the document, WhatsApp turns over address book contacts for targeted users as well as other WhatsApp users who happen to have the targeted person in their address books.

Combine this form of contact chaining with a few pen register orders, and the FBI can basically eavesdrop on hundreds of conversations in near-real time. The caveat, of course, is that the FBI has no access to the content of the conversations. That remains locked up by WhatsApp’s encryption. Communications remain “warrant-proof,” to use a phrase bandied about by FBI directors. But is it really?
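The contact chaining described here can be modeled as one-hop graph expansion: per the document, a request on one target surfaces both the target’s own contacts and everyone who has the target in their address book, so each new order widens the visible graph. A toy sketch, with invented address books:

```python
# Toy model of the contact chaining described above. Per the document,
# a request on a target returns the target's address book contacts plus
# other users who have the target in THEIR address books. The address
# books below are invented for illustration.

has_in_address_book = {
    "alice": {"bob", "carol"},
    "bob": {"alice", "dave"},
    "carol": {"alice"},
    "dave": {"bob", "erin"},
    "erin": {"dave"},
}

def exposed_by_order(target: str) -> set:
    """Users surfaced by a single order on `target`."""
    reverse = {u for u, book in has_in_address_book.items() if target in book}
    return has_in_address_book.get(target, set()) | reverse

seen = exposed_by_order("alice")           # first order
seen |= exposed_by_order("bob")            # second order on an exposed contact
print(sorted(seen))  # ['alice', 'bob', 'carol', 'dave']
```

Two orders already expose most of this tiny graph; at WhatsApp’s scale, a handful of pen registers fans out to a very large set of monitored metadata.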

If investigators are able to access the contents of a phone (by seizing the phone or receiving permission from someone to view their end of conversations), encryption is no longer a problem. That’s one way to get past the going darkness. Then there’s stuff stored in the cloud, which can give law enforcement access to communications despite the presence of end-to-end encryption. Backups of messages might not be encrypted and — as the document points out — a warrant will put those in the hands of law enforcement.

If target is using an iPhone and iCloud backups enabled, iCloud returns may contain WhatsApp data, to include message content.

[…]

Source: Documents Shows Just How Much The FBI Can Obtain From Encrypted Communication Services | Techdirt

Qualcomm’s new always-on smartphone camera is always looking out for you

“Your phone’s front camera is always securely looking for your face, even if you don’t touch it or raise to wake it.” That’s how Qualcomm Technologies vice president of product management Judd Heape introduced the company’s new always-on camera capabilities in the Snapdragon 8 Gen 1 processor set to arrive in top-shelf Android phones early next year.

[…]

But for those of us with any sense of how modern technology is used to violate our privacy, a camera on our phone that’s always capturing images even when we’re not using it sounds like the stuff of nightmares and has a cost to our privacy that far outweighs any potential convenience benefits.

Qualcomm’s main pitch for this feature is for unlocking your phone any time you glance at it, even if it’s just sitting on a table or propped up on a stand. You don’t need to pick it up or tap the screen or say a voice command — it just unlocks when it sees your face. I can see this being useful if your hands are messy or otherwise occupied (in its presentation, Qualcomm used the example of using it while cooking a recipe to check the next steps). Maybe you’ve got your phone mounted in your car, and you can just glance over at it to see driving directions without having to take your hands off the steering wheel or leave the screen on the entire time.

[…]

Qualcomm is framing the always-on camera as similar to the always-on microphones that have been in our phones for years. Those are used to listen for voice commands like “Hey Siri” or “Hey Google” (or lol, “Hi Bixby”) and then wake up the phone and provide a response, all without you having to touch or pick up the phone. But the difference is that they are listening for specific wake words and are often limited with what they can do until you do actually pick up your phone and unlock it.

It feels a bit different when it’s a camera that’s always scanning for a likeness.

It’s true that smart home products already have features like this. Google’s Nest Hub Max uses its camera to recognize your face when you walk up to it and greet you with personal information like your calendar. Home security cameras and video doorbells are constantly on, looking for activity or even specific faces. But those devices are in your home, not always carried with you everywhere you go, and generally don’t have your most private information stored on them, like your phone does. They also frequently have features like physical shutters to block the camera or intelligent modes to disable recording when you’re home and only resume it when you aren’t. It’s hard to imagine any phone manufacturer putting a physical shutter on the front of their slim and sleek flagship smartphone.

Lastly, there have been many reports of security breaches and social engineering hacks to enable smart home cameras when they aren’t supposed to be on and then send that feed to remote servers, all without the knowledge of the homeowner. Modern smartphone operating systems now do a good job of telling you when an app is accessing your camera or microphone while you’re using the device, but it’s not clear how they’d be able to inform you of a rogue app tapping into the always-on camera.

To be honest, these things are also pretty damn scary! I understand that Americans have been habituated to ubiquitous surveillance, but here in the EU we still value our privacy and don’t like it much at all.

Ultimately, it comes down to a level of trust — do you trust that Qualcomm has set up the system in a way that prevents the always-on camera from being used for other purposes than intended? Do you trust that the OEM using Qualcomm’s chips won’t do things to interfere with the system, either for their own profit or to satisfy the demands of a government entity?

Even if you do have that trust, there’s a certain level of comfort with an always-on camera on your most personal device that goes beyond where we are currently.

Maybe we’ll just start having to put tape on our smartphone cameras like we already do with laptop webcams.

Source: Qualcomm’s new always-on smartphone camera is a potential privacy nightmare – The Verge