The Linkielist

Linking ideas with the world

Cell Phone Location Privacy could be done easily

We all know that our cell phones constantly give our location away to our mobile network operators; that’s how they work. A group of researchers has figured out a way to fix that. “Pretty Good Phone Privacy” (PGPP) protects both user identity and user location using the existing cellular networks. It protects users from fake cell phone towers (IMSI-catchers) and surveillance by cell providers.

It’s a clever system. The players are the user, a traditional mobile network operator (MNO) like AT&T or Verizon, and a new mobile virtual network operator (MVNO). MVNOs aren’t new. They’re intermediaries like Cricket and Boost.

Here’s how it works:

  1. One-time setup: The user’s phone gets a new SIM from the MVNO. All MVNO SIMs are identical.
  2. Monthly: The user pays their bill to the MVNO (credit card or otherwise), and the phone receives anonymous authentication tokens (based on Chaum blind signatures) for each time slice (e.g., an hour) in the coming month.
  3. Ongoing: When the phone talks to a tower (run by the MNO), it sends a token for the current time slice. This is relayed to an MVNO backend server, which checks the Chaum blind signature on the token. If it’s valid, the MVNO tells the MNO that the user is authenticated, and the user receives a temporary random ID and an IP address. (Again, this is how MVNOs like Boost already work.) A toy sketch of the blind-signature token flow follows this list.
  4. On demand: The user uses the phone normally.
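
Below is a toy sketch (not the researchers’ actual construction) of the Chaum-style blind-signature idea behind those tokens, using textbook RSA blinding: the phone blinds a per-time-slice token, the MVNO signs it blind at billing time, and later verification succeeds without the MVNO being able to link the token back to the paying subscriber. The token format and names are made up for illustration.

```python
# Toy sketch of a Chaum-style blind-signature token (textbook RSA blinding).
# Illustration of the idea only, not PGPP's actual construction; the token
# encoding and parameters below are made up.
import hashlib
import secrets

from cryptography.hazmat.primitives.asymmetric import rsa

# The MVNO's signing key pair: phones only ever learn the public part (n, e).
mvno_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
pub = mvno_key.public_key().public_numbers()
n, e = pub.n, pub.e
d = mvno_key.private_numbers().d

def h(msg: bytes) -> int:
    return int.from_bytes(hashlib.sha256(msg).digest(), "big")

# 1. At billing time, the phone blinds a token for one future time slice.
token = b"time-slice:2021-02-01T10:00Z"        # hypothetical token format
m = h(token)
r = secrets.randbelow(n - 2) + 2               # blinding factor
blinded = (m * pow(r, e, n)) % n

# 2. The MVNO signs the blinded value; it never sees the token itself.
blind_sig = pow(blinded, d, n)

# 3. The phone unblinds the signature and stores (token, sig) for later.
sig = (blind_sig * pow(r, -1, n)) % n

# 4. When the phone connects, the MVNO backend can verify the token is
#    genuine, but cannot link it to the signing request or to the payer.
assert pow(sig, e, n) == m
print("token accepted for the current time slice")
```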

The MNO doesn’t have to modify its system in any way. The PGPP MVNO implementation is in software. The user’s traffic is sent to the MVNO gateway and then out onto the Internet, potentially even using a VPN.

All connectivity is data connectivity in cell networks today. The user can choose to be data-only (e.g., use Signal for voice), or use the MVNO or a third party for VoIP service that will look just like normal telephony.

The group prototyped and tested everything with real phones in the lab. Their approach adds essentially zero latency, and doesn’t introduce any new bottlenecks, so it doesn’t have performance/scalability problems like most anonymity networks. The service could handle tens of millions of users on a single server, because it only has to do infrequent authentication, though for resilience you’d probably run more.

The paper is here.

Source: Cell Phone Location Privacy | OSINT

I checked Apple’s new privacy ‘nutrition labels.’ Many were false.

[…]

Apple only lets you access iPhone apps through its own App Store, which it says keeps everything safe. It appeared to bolster that idea when it announced in 2020 that it would ask app makers to fill out what are essentially privacy nutrition labels. Just like packaged food has to disclose how much sugar it contains, apps would have to disclose in clear terms how they gobble your data. The labels appear in boxes toward the bottom of app listings. (Click here for my guide on how to read privacy nutrition labels.)

But after I studied the labels, the App Store is now a product I trust less to protect us. In some ways, Apple uses a narrow definition of privacy that benefits Apple — which has its own profit motivations — more than it benefits us.

Apple’s big privacy product is built on a shaky foundation: the honor system. In tiny print on the detail page of each app label, Apple says, “This information has not been verified by Apple.”

The first time I read that, I did a double take. Apple, which says caring for our privacy is a “core responsibility,” surely knows devil-may-care data harvesters can’t be counted on to act honorably. Apple, which made an estimated $64 billion off its App Store last year, shares in the responsibility for what it publishes.

It’s true that just by asking apps to highlight data practices, Apple goes beyond Google’s rival Play Store for Android phones. It has also promised to soon make apps seek permission to track us, which Facebook has called an abuse of Apple’s monopoly over the App Store.

In an email, Apple spokeswoman Katie Clark-AlSadder said: “Apple conducts routine and ongoing audits of the information provided and we work with developers to correct any inaccuracies. Apps that fail to disclose privacy information accurately may have future app updates rejected, or in some cases, be removed from the App Store entirely if they don’t come into compliance.”

My spot checks suggest Apple isn’t being very effective.

And even when they are filled out correctly, what are Apple’s privacy labels allowing apps to get away with not telling us?

Trust but verify

A tip from a tech-savvy Washington Post reader helped me realize something smelled fishy. He was using a journaling app that claimed not to collect any data but, using some technical tools, he spotted it talking an awful lot to Google.

[…]

To be clear, I don’t know exactly how widespread the falsehoods are on Apple’s privacy labels. My sample wasn’t necessarily representative: There are about 2 million apps, and some big companies, like Google, have yet to even post labels. (They’re only required to do so with new updates.) About 1 in 3 of the apps I checked that claimed they took no data appeared to be inaccurate. “Apple is the only one in a position to do this on all the apps,” says Jackson.

But if a journalist and a talented geek could find so many problems just by kicking over a few stones, why isn’t Apple?

Even after I sent it a list of dubious apps, Apple wouldn’t answer my specific questions, including: How many bad apps has it caught? If being inaccurate means you get the boot, why are some of the ones I flagged still available?

[…]

We need help to fend off the surveillance economy. Apple’s App Store isn’t doing enough, but we also have no alternative. Apple insists on having a monopoly in running app stores for iPhones and iPads. In testimony to Congress about antitrust concerns last summer, Apple CEO Tim Cook argued that Apple alone can protect our security.

Other industries that make products that could harm consumers don’t necessarily get to write the rules for themselves. The Food and Drug Administration sets the standards for nutrition labels. We can debate whether it’s good at enforcement, but at least when everyone has to work with the same labels, consumers can get smart about reading them — and companies face the penalty of law if they don’t tell the truth.

Apple’s privacy labels are not only an unsatisfying product. They should also send a message to lawmakers weighing whether the tech industry can be trusted to protect our privacy on its own.

Source: I checked Apple’s new privacy ‘nutrition labels.’ Many were false.

r/wallstreetbets: hostile takeover by old mods trying to monetise and push down GME price. Go to r/wallstreetbetstest and r/wallstreetbetsnew now

https://www.reddit.com/r/wallstreetbetstest/comments/lcjcvm/update_i_just_got_removed_as_a_moderator_on/

I was confused, annoyed and sad trying to understand what had happened. I was removed by the senior moderator at r/wallstreetbets, u/turdled. I messaged him asking for an explanation, but have still not been given one. At the same time, several other moderators were being removed and banned left and right. I had some of my posts removed as well.

I was also starting to receive chat requests and messages from people seeing u/zjz’s post and asking what was going on, and accusing me of being a rogue/plant mod.

I’ve been looking around the accounts of the mods of the new subreddit and these are indeed the old mods.

Find the new site that is not infested by people trying to short GME here:  https://www.reddit.com/r/wallstreetbetstest

Also here https://www.reddit.com/r/Wallstreetbetsnew/

NB r/wallstreetbetsnew seems to be the Gamestonk holdout with the memes. r/wallstreetbetstest is where the “real” wsb crowds who aren’t solely obsessed with GME are hanging around.

More info: WallStreetBets Mods Are Now Battling For Control Over The Subreddit

If you want to know about the dark history and why the founder was kicked out, read here

tl;dr on tl;dr: Founder bad, greedy, got banned for being greedy. Being greedy again with new spotlight on the sub.

tl;dr: in 2020 the original founder (who had been gone for years and contributed nothing to the sub), along with a couple of mods, attempted to monetize the sub for personal gain. Users and other mods fought back. Hundreds of users got mass-banned for speaking out, and mods who spoke out were removed as mods. With some help from users, the mods found precedent of another sub creator getting banned for trying to monetize a sub and sent a plea to the Reddit admins. The admins banned the offenders and gave the sub back to the good mods.

u/SpeaksInBooleans (RIP) investigated the circumstances of the events and made a series of videos exposing the offenders:

Part 1

Part 2

Part 3

Part 4

Part 5

Part 6

Mega thread after the victory for reference.

It’s important to know/remember this now, because the same person that got exiled for being a tyrant is doing a media circus, trying to ride the current spotlight for personal gain, again. Hey CNN and WSJ, stop interviewing that dipshit. The sub has always been about its people, and what you guys wanted to do (as retarded as you are). No single person speaks for the sub and controls its destiny. It is in good hands with u/zjz aka u/SwineFluPandemic

India’s government threatens to jail Twitter employees unless they block critics

India’s government has warned Twitter that it must obey its orders to remove “inflammatory content” or employees will face potential jail time, Buzzfeed has reported. The government, under Prime Minister Narendra Modi, made the edict after Twitter unblocked 257 accounts criticizing Modi’s government around farmer protests, after initially blocking them.

The accounts in question come from government opposition leaders, investigative journalism site The Caravan, along with other critics, journalists and writers. Some used the hashtag #ModiPlanningFarmerGenocide, referencing controversial proposed laws that farmers have said will reduce their income and make them more reliant on corporations.

After initially blocking the accounts, Twitter reversed its decision, saying the tweets constituted free speech and were newsworthy. In response, the IT ministry ordered them blocked again. “Twitter is an intermediary and they are obliged to obey directions of the government. Refusal to do so will invite penal action,” it told Twitter in a notice. It added that the hashtag was being used to “abuse, inflame and create tension in society on unsubstantiated grounds.”

The Caravan, which didn’t use the hashtag, said it was merely doing its job. “We don’t understand why suddenly the Indian government finds journalists should not speak to all sides of an issue,” executive editor Vinod K. Jose told BuzzFeed News. “This is really problematic,” added internet activist and MediaNama editor Nikhil Pahwa.

Modi’s government was also incensed by Western celebrities including Rihanna and Greta Thunberg, who tweeted their support. Some Modi supporters railed against the tweets, including Bollywood actor Kangana Ranaut. “No one is talking about it because they are not farmers, they are terrorists who are trying to divide India,” she wrote.

The latest development means Twitter, once again, must choose to either protect its employees and commercial interests, or be accused of aiding censorship in a volatile political situation. However, it may be forced to comply due to India’s IT laws that force social media platforms to remove “any information generated, transmitted, received, stored or hosted in any computer resource” that could affect “public order.”

Source: India’s government threatens to jail Twitter employees unless they block critics | Engadget

How to Restore Recently Deleted Instagram Posts – because deleted means: stored somewhere you can’t get at them

Instagram is adding a new “Recently deleted” folder to the app’s menu that temporarily stores posts after you remove them from your profile or archive, giving you the ability to restore deleted posts if you change your mind.

The folder includes sections for photos, IGTV, Reels, and Stories posts. No one else can see your recently deleted posts, but as long as a photo or video is still in the folder, it can be restored. Regular photos, IGTV videos, and Reels remain in the folder for up to 30 days, after which they’re gone forever. Stories stick around for up to 24 hours before they’re permanently removed, but you can still access them in your Stories archive.

[…]

Source: How to Restore Recently Deleted Instagram Posts

It’s nice how they’re framing the fact that they don’t delete your data as a “feature”

Amazon Plans to Install Creepy Always-On Surveillance Cameras in Delivery Vans

Not content to only wield its creepy surveillance infrastructure against warehouse workers and employees considering unionization, Amazon is reportedly gearing up to install perpetually-on cameras inside its fleet of delivery vehicles as well.

A new report from The Information claims that Amazon recently shared the plans in an instructional video sent out to the contractor workers who drive the Amazon-branded delivery vans.

In the video, the company reportedly explains to drivers that the high-tech video cameras will use artificial intelligence to determine when drivers are engaging in risky behavior, and will give out verbal warnings including “Distracted driving,” “No stop detected” and “Please slow down.”

According to a video posted to Vimeo a week ago, the hardware and software for the cameras will be provided through a partnership with California-based company Netradyne, which is also responsible for a platform called Driveri that similarly uses artificial intelligence to analyze a driver’s behavior as they operate a vehicle.

While the camera’s automated feedback will be immediate, other data will also reportedly be stored for later analysis that will help the company to evaluate its fleet of drivers.

Although it’s not clear when Amazon plans to install the cameras or how many of the vehicles in the company’s massive fleet will be outfitted with them, the company told The Information in a statement that the software will be implemented in the spirit of increasing safety precautions and not, you know, bolstering an insidious and growing surveillance apparatus.

Source: Amazon Plans to Install Always-On Surveillance Cameras in Delivery Vans

Synology to enforce use of validated disks in enterprise NAS boxes. And guess what? Only its own disks exceed 4TB

Synology has introduced its first-ever list of validated disks and won’t allow other drives into its enterprise-class NAS devices. And in a colossal coincidence, half of the disks allowed into its devices – and the only ones larger than 4TB – are Synology’s very own HAT 5300 disks that it launched last week.

Seeing as privately held Synology is thought to have annual revenue of around US$350m, rather less than the kind of cash required to get into the hard disk business, The Register inquired if it had really started making drives or found some other way into the industry.

The Taiwanese network-attached-storage vendor told us the drives are Synology-branded Toshiba kit, though it has written its own drive firmware and that the code delivers sequential read performance 23 per cent beyond comparable drives. Synology told us its branded disks will also be more reliable because they have undergone extensive testing in the company’s own NAS arrays.

[…]

So to cut a long story short, if you want to get the most out of Synology NAS devices, you’ll need to buy Synology’s own SATA hard disk drives.

The new policy applies as of the release of three new Synology NAS appliances intended for enterprise use and will be applied to other models over time.

The new models include the RS3621RPxs, which sports an unspecified six-core Intel Xeon processor and can handle a dozen drives, then move data over four gigabit Ethernet ports. The middle-of-the-road RS3621xs+ offers an eight-core Xeon and adds two 10GE ports. At the top of the range, the RS4021xs+ stretches to 3U and adds 16GB of RAM, eight more than found in the other two models.

[…]

Source: Synology to enforce use of validated disks in enterprise NAS boxes. And guess what? Only its own disks exceed 4TB • The Register

I guess HDD vendor lock-in is a really, really good reason not to buy Synology then.

ProtonMail, Tutanota among authors of letter urging EU to reconsider encryption rules

Encrypted service providers are urging lawmakers to back away from a controversial plan that critics say would undercut effective data protection measures.

ProtonMail, Threema, Tresorit and Tutanota — all European companies that offer some form of encrypted services — issued a joint statement this week declaring that a resolution the European Council adopted on Dec. 14 is ill-advised. That measure calls for “security through encryption and security despite encryption,” which technologists have interpreted as a threat to end-to-end encryption. In recent months governments around the world, including the U.S., U.K., Australia, New Zealand, Canada, India and Japan, have been reigniting conversations about law enforcement officials’ interest in bypassing encryption, as they have sporadically done for years.

In a letter that will be sent to council members on Thursday, the authors write that the council’s stated goal of endorsing encryption, and the council’s argument that law enforcement authorities must rely on accessing electronic evidence “despite encryption,” contradict one another. The advancement of legislation that forces technology companies to guarantee police investigators a way to intercept user messages, for instance, repeatedly has been scrutinized by technology leaders who argue there is no way to stop such a tool from being abused.

The resolution “will threaten the basic rights of millions of Europeans and undermine a global shift towards adopting end-to-end encryption,” say the companies, which offer users either encrypted email, file-sharing or messaging.

“[E]ncryption is an absolute, data is either encrypted or it isn’t, users have privacy or they don’t,” the letter, which was shared with CyberScoop in advance, states. “The desire to give law enforcement more tools to fight crime is obviously understandable. But the proposals are the digital equivalent of giving law enforcement a key to every citizens’ home and might begin a slippery slope towards greater violations of personal privacy.”

[…]

Source: ProtonMail, Tutanota among authors of letter urging EU to reconsider encryption rules

Firefox 85 removes support for Flash and adds protection against supercookies

Mozilla has released Firefox 85, ending support for the Adobe Flash Player plugin and bringing in ways to block supercookies to enhance user privacy. Mozilla, in a blog post, noted that supercookies store user identifiers and are much more difficult to delete and block. It further noted that the changes it is making through network partitioning in Firefox 85 will “reduce the effectiveness of cache-based supercookies by eliminating a tracker’s ability to use them across websites.”

“Trackers can abuse caches to create supercookies and can use connection identifiers to track users. But by isolating caches and network connections to the website they were created on, we make them useless for cross-site tracking,” Mozilla noted.

It explained that network partitioning works by splitting the Firefox browser cache on a per-website basis, a technical solution that prevents websites from tracking users as they move across the web. Mozilla also noted that removing support for Flash had little impact on page load time. The development was first reported by ZDNet.
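
As a minimal sketch (nothing to do with Firefox’s actual code), the core idea of partitioning is simply to include the top-level site in the cache key, so an identifier a tracker caches while you are on one site is not visible when the same tracker is embedded on another:

```python
# Toy illustration of a partitioned cache defeating a cache-based supercookie.
class PartitionedCache:
    def __init__(self):
        self._store = {}

    def put(self, top_level_site: str, resource_url: str, value: str):
        # The cache key includes the site the user is visiting, not just the
        # resource URL.
        self._store[(top_level_site, resource_url)] = value

    def get(self, top_level_site: str, resource_url: str):
        return self._store.get((top_level_site, resource_url))


cache = PartitionedCache()
tracker = "https://tracker.example/id.js"   # hypothetical tracker resource

# The tracker stashes an identifier while the user browses site-a.example ...
cache.put("https://site-a.example", tracker, "user-id-1234")

# ... but a lookup from site-b.example misses, so the identifier cannot be
# reused to recognise the same user across sites.
assert cache.get("https://site-b.example", tracker) is None
print("cross-site lookup missed: supercookie defeated")
```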

[…]

Source: Firefox 85 removes support for Flash and adds protection against supercookies – Technology News

Fedora’s Chromium maintainer suggests switching to Firefox as Google yanks features in favour of Chrome

Fedora’s maintainer for the open-source Chromium browser package is recommending users consider switching to Firefox following Google’s decision to remove functionality and make it exclusive to its proprietary Chrome browser.

The comments refer to a low-key statement Google made just before the release of Chrome 88, saying that during an audit it had “discovered that some third-party Chromium-based browsers were able to integrate Google features, such as Chrome sync and Click to Call, that are only intended for Google’s use… we are limiting access to our private Chrome APIs starting on March 15, 2021.”

Tom Callaway (aka “spot”), a former Fedora engineering manager at Red Hat (Fedora is Red Hat’s bleeding-edge Linux distro) who now works for AWS, remarked when describing the Chromium 88 build: “Google gave the builders of distribution Chromium packages these access rights back in 2013 via API keys, specifically so that we could have open-source builds of Chromium with (near) feature parity to Chrome. And now they’re taking it away.”

The reasoning given for this change? “Google does not want users to be able to ‘access their personal Chrome Sync data (such as bookmarks)… with a non-Google, Chromium-based browser.’ They’re not closing a security hole, they’re just requiring that everyone use Chrome.”

[Image caption: Features in Chromium like data sync depend on Google APIs, which are soon to be blocked]

Callaway predicted that “many (most?) users will be confused/annoyed when API functionality like sync and geolocation stops working for no good reason.” Although API access is not yet blocked, he has disabled it immediately to avoid users experiencing features that suddenly stop working for no apparent reason.

He said he is no longer sure of the value of Chromium. “I would say that you might want to reconsider whether you want to use Chromium or not. If you want the full ‘Google’ experience, you can run the proprietary Chrome. If you want to use a FOSS browser that isn’t hobbled, there is a Firefox package in Fedora,” he said.

Ahem, just ‘discovered’ this?

There is more information about these APIs on the Chromium wiki. Access to the APIs is documented, and Google’s claim that it has only just “discovered” this is an oddity. The APIs cover areas including sync, spelling, translation, Google Maps geolocation, Google Cloud Storage, safe browsing, and more.

The situation has parallels with Android, where the Android Open Source Project (AOSP) is hard to use as a mobile phone operating system because important functions are reserved for the proprietary Google Play Services. The microG project exists specifically as an attempt to mitigate the absence of these APIs from AOSP.

Something similar may now be necessary for Chromium if it is to deliver all the features users have come to expect from a web browser. It is not a problem for companies in a position to provide their own alternative services, such as Microsoft with Chromium-based Edge, but more difficult for Linux distros like Fedora.

There are other ways to look at Google’s move, though. “Some people might even consider the removal of this Google-specific functionality an improvement,” commented a Fedora user. Microsoft reportedly removed more than 50 Google-specific services from Chromium as used in Edge, including data sync, safe browsing, maps geolocation, the Google Drive API, and more.

Users who choose Chromium over Chrome to avoid Google dependency may not realise the extent of this integration, which is likely now to reduce. The Ungoogled Chromium project not only removes Google APIs but also “blocks internal requests to Google at runtime” as a failsafe measure.

Source: Fedora’s Chromium maintainer suggests switching to Firefox as Google yanks features in favour of Chrome • The Register

Indian government slams Facebook over WhatsApp ‘privacy’ update, wants its own Europe-style opt-out switch

The Indian government has sent a fierce letter to Facebook over its decision to update the privacy rules around its WhatsApp chat service, and asked the antisocial media giant to put a halt to the plans.

In an email from the IT ministry to WhatsApp head Will Cathcart, provided to media outlets, the Indian government notes that the proposed changes “raise grave concerns regarding the implications for the choice and autonomy of Indian citizens.”

In particular, the ministry is incensed that European users will be given a choice to opt out of sharing WhatsApp data with the larger Facebook empire, as well as with businesses using the platform to communicate with customers, while Indian users will not.

“This differential and discriminatory treatment of Indian and European users is attracting serious criticism and betrays a lack of respect for the rights and interest of Indian citizens who form a substantial portion of WhatsApp’s user base,” the letter says. It concludes by asking WhatsApp to “withdraw the proposed changes.”

The reason that Europe is being treated as a special case by Facebook is, of course, the existence of the GDPR privacy rules that Facebook has repeatedly flouted and as a result faces pan-European legal action.

Source: Indian government slams Facebook over WhatsApp ‘privacy’ update, wants its own Europe-style opt-out switch • The Register

Brave Will Become First Browser To Offer IPFS peer to peer content hosting

On Tuesday, privacy-focused browser Brave released an update that makes it the first to feature a peer-to-peer protocol for hosting web content.

Known as IPFS, which stands for InterPlanetary File System, the protocol allows users to load content from a decentralized network of distributed nodes rather than a centralized server. It’s new — and much-heralded — technology, and could eventually supplant the Hypertext Transfer Protocol (HTTP) that dominates our current internet infrastructure.
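
As a rough illustration, IPFS content is addressed by a content identifier (CID) rather than a server location; Brave resolves ipfs:// addresses natively, while other clients can fall back to an HTTP gateway. The CID below is only a placeholder value:

```python
# Rough sketch of IPFS addressing: content is named by a content identifier
# (CID) rather than by a server. Brave loads ipfs:// URLs natively; other
# clients can fall back to a public HTTP gateway. The CID is a placeholder.
EXAMPLE_CID = "bafybeigdyrzt5sfp7udm7hu76uh7y26nf3efuylqabf3oclgtqy55fbzdi"

def gateway_url(cid: str, gateway: str = "https://ipfs.io") -> str:
    # A gateway maps a CID onto an ordinary HTTPS URL, so the same content
    # stays reachable from browsers without native IPFS support.
    return f"{gateway}/ipfs/{cid}"

# Native form understood by Brave's IPFS integration:
print("native :", f"ipfs://{EXAMPLE_CID}")

# Gateway fallback for everything else (fetch it with urllib/requests/curl):
print("gateway:", gateway_url(EXAMPLE_CID))
```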

“We’re thrilled to be the first browser to offer a native IPFS integration with today’s Brave desktop browser release,” said Brian Bondy, CTO and co-founder of Brave. “Integrating the IPFS open-source network is a key milestone in making the Web more transparent, decentralized, and resilient.”

The new protocol promises several inherent advantages over HTTP, with faster web speeds, reduced costs for publishers and a much smaller possibility of government censorship among them.

“Today, Web users across the world are unable to access restricted content, including, for example, parts of Wikipedia in Thailand, over 100,000 blocked websites in Turkey and critical access to COVID-19 information in China,” IPFS project lead Molly Mackinlay told Engadget. “Now anyone with an internet connection can access this critical information through IPFS on the Brave browser.”

In an email to Vice, IPFS founder Juan Benet said that he finds it concerning that the internet has become as centralized as it has, leaving open the possibility that it could “disappear at any moment, bringing down all the data with them—or at least breaking all the links.”

“Instead,” he continued, “we’re pushing for a fully distributed web, where applications don’t live at centralized servers, but operate all over the network from users’ computers…a web where content can move through any untrusted middlemen without giving up control of the data, or putting it at risk.”

Source: Brave Will Become First Browser To Offer IPFS

How to batch export ALL your WhatsApp chats in one go for non rooted Android on PC

It’s a process that requires quite a bit of installation and a careful read of the instructions, but it can be done.

The trick is to install an older version of WhatsApp, extract the key and then copy the message databases. Then you can decrypt the database files and read them using another program. The hardest bit is extracting the key; once you have that, it’s all pretty fast. Apple iOS users have a definite advantage here because they can easily get to the key file.

Here’s my writeup on xda-developers.com

v4.7-E1.0

You need to download WhatsApp-2.11.431.apk and abe-all.jar
Then rename WhatsApp-2.11.431.apk to LegacyWhatsApp.apk and copy it to the tmp/ directory
Rename abe-all.jar to abe.jar and copy it to the bin/ directory

Run the script.

Make sure you enable File transfer mode on the phone after you connect it

Also, I needed to open the old version of WhatsApp before making the backup in the script – fortunately the script waits here for a password! The old version first wants you to update: don’t! I got a “phone date is inaccurate” error; just wait on that screen and then continue on with the script. The script goes silent here for quite some time.
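
For the curious, here is a rough Python sketch of what the extractor script automates under the hood, assuming adb and Java are on your PATH and abe.jar sits in bin/ as described above; the file names are illustrative and the real script also handles downgrading and restoring WhatsApp for you:

```python
# Rough sketch of what the extractor script automates. Assumes adb and Java
# are on the PATH and abe.jar sits in bin/ as described above. File names are
# illustrative; the real script also downgrades/restores WhatsApp for you.
import subprocess

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# 1. Ask Android for a backup of the legacy WhatsApp build; the phone prompts
#    for confirmation (and an optional password) at this point.
run(["adb", "backup", "-f", "whatsapp.ab", "-noapk", "com.whatsapp"])

# 2. Convert the .ab backup into a plain tar with Android Backup Extractor.
run(["java", "-jar", "bin/abe.jar", "unpack", "whatsapp.ab", "whatsapp.tar"])

# 3. Pull the cipher key and message databases out of the tar
#    (apps/com.whatsapp/f/key and apps/com.whatsapp/db/msgstore.db.crypt*).
run(["tar", "-xvf", "whatsapp.tar", "apps/com.whatsapp/"])
```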

The best instructions are to be found here, by PIRATA!, but they miss the few steps above.

[Tool] WhatsApp Key/DB Extractor | CRYPT6-12 | NON-ROOT | UPDATED OCTOBER 2016

** Version 4.7 Updated October 2016 – Supports Android 4.0-7.0 ** SUMMARY: Allows WhatsApp users to extract their cipher key and databases on non-rooted Android devices. UPDATE: This tool was last updated on October 12th 2016 and confirmed…
Good luck!

AI upstart stealing facial data told to delete data and algorithms

Everalbum, a consumer photo app maker that shut down on August 31, 2020, and has since relaunched as a facial recognition provider under the name Paravision, on Monday reached a settlement with the FTC over the 2017 introduction of a feature called “Friends” in its discontinued Ever app. The watchdog agency claims the app deployed facial recognition code to organize users’ photos by default, without permission.

According to the FTC, between July 2018 and April 2019, Everalbum told people that it would not employ facial recognition on users’ content without consent. The company allegedly let users in certain regions – Illinois, Texas, Washington, and the EU – make that choice, but automatically activated the feature for those located elsewhere.

The agency further claims that Everalbum’s use of facial recognition went beyond supporting the Friends feature. The company is alleged to have combined images of users’ faces with facial images from other sources to create four datasets that informed its facial recognition technology, which became the basis of a face detection service for enterprise customers.

The company also is said to have told consumers using its app that it would delete their data if they deactivated their accounts, but didn’t do so until at least October 2019.

The FTC, in announcing the case and its settlement, said Everalbum/Paravision will be required to delete: photos and videos belonging to Ever app users who deactivated their accounts; all face embeddings – vector representations of facial features – from users who did not grant consent; and “any facial recognition models or algorithms developed with Ever users’ photos or videos.”

The FTC has not done this in past privacy cases with technology companies. According to FTC Commissioner Rohit Chopra, when Google and YouTube agreed to pay $170m over allegations the companies had collected data from children without parental consent, the FTC settlement “allowed Google and YouTube to profit from its conduct, even after paying a civil penalty.”

Likewise, when the FTC voted to approve a settlement with Facebook over claims it had violated its 2012 privacy settlement agreement, he said, Facebook did not have to give up any of its facial recognition technology or data.

“Commissioners have previously voted to allow data protection law violators to retain algorithms and technologies that derive much of their value from ill-gotten data,” said Chopra in a statement [PDF]. “This is an important course correction.”

[…]

Source: Privacy pilfering project punished by FTC purge penalty: AI upstart told to delete data and algorithms • The Register

NYPD posts surveillance technology impact and use policies and requests comments

Beginning January 11, 2020, draft surveillance technology impact and use policies will be posted on the Department’s website. Members of the public are invited to review the impact and use policies and provide feedback on their contents. The impact and use policies provide details of:

  1. the capabilities of the Department’s surveillance technologies,
  2. the rules regulating the use of the technologies,
  3. protections against unauthorized access to the technologies or related data,
  4. surveillance technologies data retention policies,
  5. public access to surveillance technologies data,
  6. external entity access to surveillance technologies data,
  7. Department trainings in the use of surveillance technologies,
  8. internal audit and oversight mechanisms of surveillance technologies,
  9. health and safety reporting on the surveillance technologies, and
  10. potential disparate impacts of the impact and use policies for surveillance technologies.

Source: Draft Policies for Public Comment

WhatsApp delays enforcement of privacy terms by 3 months, following backlash

WhatsApp said on Friday that it won’t enforce the planned update to its data-sharing policy until May 15, weeks after news about the new terms created confusion among its users, exposed the Facebook app to a potential lawsuit, triggered a nationwide investigation and drove tens of millions of its loyal fans to explore alternative messaging apps.

“We’re now moving back the date on which people will be asked to review and accept the terms. No one will have their account suspended or deleted on February 8. We’re also going to do a lot more to clear up the misinformation around how privacy and security works on WhatsApp. We’ll then go to people gradually to review the policy at their own pace before new business options are available on May 15,” the firm said in a blog post.

Source: WhatsApp delays enforcement of privacy terms by 3 months, following backlash | TechCrunch

I’m pretty sure there is no confusion. People just don’t want all their data shared with Facebook when they were promised it wouldn’t be. So they are leaving for Signal and Telegram.

Turkey launches antitrust probe into WhatsApp and Facebook over the new privacy policy. Still can’t export Whatsapp chats.

WhatsApp updated its privacy policy at the turn of the new year. Users were notified via a popup message upon opening the app that their data would now be shared with Facebook and other companies come February 8. Due to Facebook’s notorious history with user data and privacy, the new update has since garnered criticism, with many people migrating to alternative messaging apps like Signal and Telegram. Microsoft entered the playing field too, recommending that users use Skype in place of the Facebook-owned WhatsApp.

In the latest, Turkey has now launched an antitrust probe into Facebook and WhatsApp regarding the updated privacy policy. Bloomberg reports that:

Turkey’s antitrust board launched an investigation into Facebook Inc. and its messaging service WhatsApp Inc. over new usage terms that have sparked privacy concerns.

[…]

The regulator also said on Monday that it was halting implementation of such terms. The new terms would result in “more data being collected, processed and used by Facebook,” according to the statement.

Source: Turkey launches antitrust probe into WhatsApp and Facebook over the new privacy policy – Neowin

Every Deleted Parler Post, Many With Users’ Location Data, Has Been Archived. Parler goes down. Still can’t export your Whatsapp history.

In the wake of the violent insurrection at the U.S. Capitol by scores of President Trump’s supporters, a lone researcher began an effort to catalogue the posts of social media users across Parler, a platform founded to provide conservative users a safe haven for uninhibited “free speech” — but which ultimately devolved into a hotbed of far-right conspiracy theories, unchecked racism, and death threats aimed at prominent politicians.

The researcher, who asked to be referred to by her Twitter handle, @donk_enby, began with the goal of archiving every post from January 6, the day of the Capitol riot; what she called a bevy of “very incriminating” evidence. According to the Atlantic Council’s Digital Forensic Research Lab, among other sources, Parler is one of several apps used by the insurrectionists to coordinate their breach of the Capitol, in a plan to overturn the 2020 election results and keep Donald Trump in power.

Five people died in the attempt.

Hoping to create a lasting public record for future researchers to sift through, @donk_enby began by archiving the posts from that day. The scope of the project quickly broadened, however, as it became increasingly clear that Parler was on borrowed time. Apple and Google announced that Parler would be removed from their app stores because it had failed to properly moderate posts that encouraged violence and crime. The final nail in the coffin came Saturday when Amazon announced it was pulling Parler’s plug.

In an email first obtained by BuzzFeed News, Amazon officials told the company they planned to boot it from its cloud hosting service, Amazon Web Services, saying it had witnessed a “steady increase” in violent content across the platform. “It’s clear that Parler does not have an effective process to comply with the AWS terms of service,” the email read.

Operating on little sleep, @donk_enby began the work of archiving all of Parler’s posts, ultimately capturing around 99.9 percent of its content. In a tweet early Sunday, @donk_enby said she was crawling some 1.1 million Parler video URLs. “These are the original, unprocessed, raw files as uploaded to Parler with all associated metadata,” she said. Included in this data tranche, now more than 56 terabytes in size, @donk_enby confirmed that the raw video files include GPS metadata pointing to exact locations of where the videos were taken.

@donk_enby later shared a screenshot showing the GPS position of a particular video, with coordinates in latitude and longitude.
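
To make the privacy point concrete, here is a minimal sketch of reading GPS coordinates out of a photo’s EXIF block with Pillow; video containers carry similar metadata, which tools like exiftool can read. The file name is a placeholder:

```python
# Minimal sketch: pulling GPS coordinates out of a photo's EXIF data with
# Pillow. Video files carry similar metadata (readable with e.g. exiftool).
# The file name below is a placeholder.
from PIL import Image

GPS_INFO_TAG = 34853  # standard EXIF tag id for the GPSInfo sub-directory

def to_degrees(dms, ref):
    # EXIF stores latitude/longitude as (degrees, minutes, seconds) rationals.
    deg, minutes, seconds = (float(x) for x in dms)
    value = deg + minutes / 60 + seconds / 3600
    return -value if ref in ("S", "W") else value

def gps_coordinates(path):
    exif = Image.open(path)._getexif() or {}
    gps = exif.get(GPS_INFO_TAG)
    if not gps:
        return None
    # GPS sub-tags: 1/2 = latitude ref/value, 3/4 = longitude ref/value.
    return to_degrees(gps[2], gps[1]), to_degrees(gps[4], gps[3])

print(gps_coordinates("parler_upload.jpg"))  # e.g. (38.8899, -77.0091)
```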

The privacy implications are obvious, but the copious data may also serve as a fertile hunting ground for law enforcement. Federal and local authorities have arrested dozens of suspects in recent days accused of taking part in the Capitol riot, where a Capitol police officer, Brian Sicknick, was fatally wounded after being struck in the head with a fire extinguisher.

[…]

Kirtaner, creator of 420chan — a.k.a. Aubrey Cottle — reported obtaining 6.3 GB of Parler user data from an unsecured AWS server in November. The leak reportedly contained passwords, photos and email addresses from several other companies as well. Parler CEO John Matze later claimed to Business Insider that the data contained only “public information” about users, which had been improperly stored by an email vendor whose contract was subsequently terminated over the leak. (This leak is separate from the debunked claim that Parler was “hacked” in late November, proof of which was determined to be fake.)

In December, Twitter suspended Kirtaner for tweeting, “I’m killing Parler and its fucking glorious,” citing its rules against threatening “violence against an individual or group of people.” Kirtaner’s account remains suspended despite an online campaign urging Twitter’s safety team to reverse its decision. Gregg Housh, an internet activist involved in many early Anonymous campaigns, noted online that the tweet was “not aimed at a person and [was] not actually violent.”

Source: Every Deleted Parler Post, Many With Users’ Location Data, Has Been Archived

ODoH: Cloudflare and Apple design a new privacy-friendly internet protocol for DNS

Engineers at Cloudflare and Apple say they’ve developed a new internet protocol that will shore up one of the biggest holes in internet privacy that many don’t even know exists. Dubbed Oblivious DNS-over-HTTPS, or ODoH for short, the new protocol makes it far more difficult for internet providers to know which websites you visit.

But first, a little bit about how the internet works.

Every time you visit a website, your browser uses a DNS resolver to convert web addresses to machine-readable IP addresses and find where a web page is hosted on the internet. But this process is not encrypted, meaning that every time you load a website the DNS query is sent in the clear. That means the DNS resolver — which might be your internet provider unless you’ve changed it — knows which websites you visit. That’s not great for your privacy, especially since your internet provider can also sell your browsing history to advertisers.

Recent developments like DNS-over-HTTPS (or DoH) have added encryption to DNS queries, making it harder for attackers to hijack DNS queries and point victims to malicious websites instead of the real website you wanted to visit. But that still doesn’t stop the DNS resolvers from seeing which website you’re trying to visit.

Enter ODoH, which builds on previous work by Princeton academics. In simple terms, ODoH decouples DNS queries from the internet user, preventing the DNS resolver from knowing which sites you visit.

Here’s how it works: ODoH wraps a layer of encryption around the DNS query and passes it through a proxy server, which acts as a go-between for the internet user and the website they want to visit. Because the DNS query is encrypted, the proxy can’t see what’s inside, but acts as a shield to prevent the DNS resolver from seeing who sent the query to begin with.

“What ODoH is meant to do is separate the information about who is making the query and what the query is,” said Nick Sullivan, Cloudflare’s head of research.

In other words, ODoH ensures that only the proxy knows the identity of the internet user and that the DNS resolver only knows the website being requested. Sullivan said that page loading times on ODoH are “practically indistinguishable” from DoH and shouldn’t cause any significant changes to browsing speed.
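
As a toy simulation of that separation of knowledge (nothing like the real ODoH wire format, which uses HPKE public-key encryption), a symmetric Fernet key stands in for the resolver’s key material: the proxy sees who is asking but only an opaque blob, while the resolver sees the question but only the proxy’s address:

```python
# Toy simulation of ODoH's "separation of knowledge" -- not the real wire
# format (which uses HPKE). A symmetric Fernet key stands in for the target
# resolver's public key material, purely for illustration.
from cryptography.fernet import Fernet

resolver_key = Fernet.generate_key()   # known to client and resolver only
resolver_box = Fernet(resolver_key)

def client_build_query(name: str) -> bytes:
    # Encrypted for the resolver, so the proxy cannot read the question.
    return resolver_box.encrypt(f"A? {name}".encode())

def resolver_answer(blob: bytes) -> bytes:
    # The resolver sees the query, but only the proxy's address, never the
    # client's IP.
    query = resolver_box.decrypt(blob).decode()
    print("resolver sees:", query, "(arriving from the proxy)")
    return resolver_box.encrypt(b"93.184.216.34")   # placeholder answer

def proxy_forward(client_ip: str, blob: bytes) -> bytes:
    # The proxy sees who is asking, but only an opaque encrypted blob.
    print(f"proxy sees: {len(blob)} opaque bytes from {client_ip}")
    return resolver_answer(blob)

answer = proxy_forward("198.51.100.7", client_build_query("example.com"))
print("client decrypts:", resolver_box.decrypt(answer).decode())
```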

A key component of ODoH working properly is ensuring that the proxy and the DNS resolver never “collude,” in that the two are never controlled by the same entity, otherwise the “separation of knowledge is broken,” Sullivan said. That means having to rely on companies offering to run proxies.

Sullivan said a few partner organizations are already running proxies, allowing for early adopters to begin using the technology through Cloudflare’s existing 1.1.1.1 DNS resolver. But most will have to wait until ODoH is baked into browsers and operating systems before it can be used. That could take months or years, depending on how long it takes for ODoH to be certified as a standard by the Internet Engineering Task Force.

Source: Cloudflare and Apple design a new privacy-friendly internet protocol | TechCrunch

Content Moderation Case Study: SoundCloud Combats Piracy By Giving Universal Music The Power To Remove Uploads (2014)

In most cases, allegedly infringing content is removed at the request of rights holders following the normal DMCA takedown process. A DMCA notice is issued and the site responds by removing the content and — in some cases — allowing the uploader to challenge the takedown.

SoundCloud has positioned itself as a host of user-created audio content, relying on content creators to upload original works. But, like any content hosting site, it often found itself hosting infringing content not created by the uploader.

Realizing the potential for SoundCloud to be overrun with infringing content, the platform became far more proactive as it gained users and funding.

Rather than allow the normal DMCA process to work, SoundCloud allowed one major label to set the terms of engagement. This partnership resulted in Universal being able to unilaterally remove content it believed was infringing without any input from SoundCloud or use of the normal DMCA process.

One user reported his account was closed due to alleged infringement contained in his uploaded radio shows. When he attempted to dispute the removals and the threatened shuttering of his account, he was informed by the platform it was completely out of SoundCloud’s hands.

Your uploads were removed directly by Universal. This means that SoundCloud had no control over it, and they don’t tell us which part of your upload was infringing.

The control of removing content is completely with Universal. This means I can’t tell you why they removed your uploads and not others, and you would really need to ask them that question.

Unfortunately, there was no clear appeal process for disputing the takedown, leaving the user without his account or his uploads.

[…]

SoundCloud continues to allow labels like Universal to perform content removals without utilizing the DMCA process or engaging with the platform directly. Users are still on their own when it comes to content declared infringing by labels. This appears to flow directly from SoundCloud’s long-running efforts to secure licensing agreements with major labels. And that appears to flow directly from multiple threats of copyright litigation from some of the same labels SoundCloud is now partnered with.

Source: Content Moderation Case Study: SoundCloud Combats Piracy By Giving Universal Music The Power To Remove Uploads (2014) | Techdirt

And having said that, the DMCA is a process that is very, very far from perfect and is used by the big boys with deep lawyer pockets to bully smaller players in the market.

Whatsapp locks you in – you can’t export all your chats. And now it will share everything with Facebook

Yes, you can back up your database and, if you’re rooted, you can find a key to it, but that will give you the chats in database form. You won’t get your pictures and videos etc. You can download them separately though.

Yes, you can export a single chat / chatgroup, but doing that for the thousands of chats you probably have is not practically possible.

So you’re stuck. Either you share all your data with Facebook or you get rid of your chat history.

Lock down the permissions WhatsApp has – it has way too many!

WhatsApp Has Shared Your Data With Facebook since 2016, actually.

Since Facebook acquired WhatsApp in 2014, users have wondered and worried about how much data would flow between the two platforms. Many of them experienced a rude awakening this week, as a new in-app notification raises awareness about a step WhatsApp actually took to share more with Facebook back in 2016.

On Monday, WhatsApp updated its terms of use and privacy policy, primarily to expand on its practices around how WhatsApp business users can store their communications. A pop-up has been notifying users that as of February 8, the app’s privacy policy will change and they must accept the terms to keep using the app. As part of that privacy policy refresh, WhatsApp also removed a passage about opting out of sharing certain data with Facebook: “If you are an existing user, you can choose not to have your WhatsApp account information shared with Facebook to improve your Facebook ads and products experiences.”

Some media outlets and confused WhatsApp users understandably assumed that this meant WhatsApp had finally crossed a line, requiring data-sharing with no alternative. But in fact the company says that the privacy policy deletion simply reflects how WhatsApp has shared data with Facebook since 2016 for the vast majority of its now 2 billion-plus users.

When WhatsApp launched a major update to its privacy policy in August 2016, it started sharing user information and metadata with Facebook. At that time, the messaging service offered its billion existing users 30 days to opt out of at least some of the sharing. If you chose to opt out at the time, WhatsApp will continue to honor that choice. The feature is long gone from the app settings, but you can check whether you’re opted out through the “Request account info” function in Settings.

Meanwhile, the billion-plus users WhatsApp has added since 2016, along with anyone who missed that opt-out window, have had their data shared with Facebook all this time. WhatsApp emphasized to WIRED that this week’s privacy policy changes do not actually impact WhatsApp’s existing practices or behavior around sharing data with Facebook.

[…]

None of this has at any point impacted WhatsApp’s marquee feature: end-to-end encryption. Messages, photos, and other content you send and receive on WhatsApp can only be viewed on your smartphone and the devices of the people you choose to message with. WhatsApp and Facebook itself can’t access your communications.

[…]

In practice, this means that WhatsApp shares a lot of intel with Facebook, including  account information like your phone number, logs of how long and how often you use WhatsApp, information about how you interact with other users, device identifiers, and other device details like IP address, operating system, browser details, battery health information, app version, mobile network, language and time zone. Transaction and payment data, cookies, and location information are also all fair game to share with Facebook depending on the permissions you grant WhatsApp in the first place.

[…]

Source: WhatsApp Has Shared Your Data With Facebook for Years, Actually | WIRED

If you’re a WhatsApp user, you’ll have to share your personal data with Facebook from next month – and no, you can’t opt out this time

WhatsApp users must agree to share their personal information with Facebook if they want to continue using the messaging service from next month, according to new terms and conditions.

“As part of the Facebook Companies, WhatsApp receives information from, and shares information with, the other Facebook Companies,” its privacy policy, updated this week, stated.

“We may use the information we receive from them, and they may use the information we share with them, to help operate, provide, improve, understand, customize, support, and market our Services and their offerings, including the Facebook Company Products.”

Yes, said information includes your personal information. Thus, in other words, WhatsApp users must allow their personal info to be shared with Facebook and its subsidiaries as and when decided by the tech giant. Presumably, this is to serve personalized advertising.

If you’re a user today, you have two choices: accept this new arrangement, or stop using the end-to-end encrypted chat app (and use something else, like Signal.) The changes are expected to take effect on February 8.

When WhatsApp was acquired by Facebook in 2014, it promised netizens that its instant-messaging app would not collect names, addresses, internet searches, or location data. CEO Jan Koum wrote in a blog post: “Above all else, I want to make sure you understand how deeply I value the principle of private communication. For me, this is very personal. I was born in Ukraine, and grew up in the USSR during the 1980s.

“One of my strongest memories from that time is a phrase I’d frequently hear when my mother was talking on the phone: ‘This is not a phone conversation; I’ll tell you in person.’ The fact that we couldn’t speak freely without the fear that our communications would be monitored by KGB is in part why we moved to the United States when I was a teenager.”

Two years later, however, that vow was eroded by, well, capitalism, and WhatsApp decided it would share its users’ information with Facebook, though only if they consented. That ability to opt out, however, will no longer be an option from next month. Koum left in 2018.

That means users who wish to keep using WhatsApp must be prepared to give up personal info such as their names, profile pictures, status updates, phone numbers, contacts lists, and IP addresses, as well as data about their mobile devices, such as model numbers, operating system versions, and network carrier details, to the mothership. If users engage with businesses via the app, order details such as shipping addresses and the amount of money spent can be passed to Facebook, too.

Source: If you’re a WhatsApp user, you’ll have to share your personal data with Facebook from next month – and no, you can’t opt out this time • The Register

Singapore police can now access data from the country’s contact tracing app

With a nearly 80 percent uptake among the country’s population, Singapore’s TraceTogether app is one of the best examples of what a successful centralized contact tracing effort can look like as countries across the world struggle to contain the coronavirus pandemic. To date, more than 4.2 million people in Singapore have downloaded the app or obtained the wearable the government has offered.

In contrast to Apple’s and Google’s Exposure Notifications System — which powers the majority of COVID-19 apps out there, including ones put out by states and countries like California and Germany — Singapore’s TraceTogether app and wearable use the country’s own internally developed BlueTrace protocol. The protocol relies on a centralized reporting structure wherein a user’s entire contact log is uploaded to a server administered by a government health authority. Outside of Singapore, only Australia has so far adopted the protocol.

In an update the government made to the platform’s privacy policy on Monday, it added a paragraph about how police can use data collected through the platform. “TraceTogether data may be used in circumstances where citizen safety and security is or has been affected,” the new paragraph states. “Authorized Police officers may invoke Criminal Procedure Code (CPC) powers to request users to upload their TraceTogether data for criminal investigations.”

Previous versions of the privacy policy made no mention of the fact police could access any data collected by the app; in fact, the website used to say, “data will only be used for COVID-19 contact tracing.” The government added the paragraph after Singapore’s opposition party asked the Minister of State for Home Affairs if police could use the data for criminal investigations. “We do not preclude the use of TraceTogether data in circumstances where citizens’ safety and security is or has been affected, and this applies to all other data as well,” said Minister Desmond Tan.

What’s happening in Singapore is an example of the exact type of potential privacy nightmare that experts warned might happen with centralized digital contact tracing efforts. Worse, a loss of trust in the privacy of data could push people further away from contact tracing efforts altogether, putting everyone at more risk.

Source: Singapore police can access data from the country’s contact tracing app | Engadget

Julian Assange will NOT be extradited to the US over WikiLeaks hacking and spy charges, rules British judge

Accused hacker and WikiLeaks founder Julian Assange should not be extradited to the US to stand trial, Westminster Magistrates’ Court has ruled.

District Judge Vanessa Baraitser told Assange this morning that there was no legal obstacle to his being sent to the US, where he faces multiple criminal charges under America’s Espionage Act and Computer Fraud and Abuse Act over his WikiLeaks website.

Assange is a suicide risk, the judge decided, and so she did not order his extradition to the US, despite delivering a ruling in which she demolished all of his legal team’s other arguments against extradition.

“I am satisfied that the risk that Mr Assange will commit suicide is a substantial one,” said the judge, sitting at the Old Bailey, in this morning’s ruling. Adopting the conclusions of medical expert Professor Michael Kopelman, an emeritus professor of neuropsychiatry at King’s College London, Judge Baraitser continued:

Taking account of all of the information available to him, he considered Mr Assange’s risk of suicide to be very high should extradition become imminent. This was a well-informed opinion carefully supported by evidence and explained over two detailed reports.

[…]

All other legal arguments against extradition rejected

Judge Baraitser heard from Assange’s lawyers during this case that he was set to be extradited because he had politically embarrassed the US, rather than committed any genuine criminal offence.

Nonetheless, US lawyers successfully argued that Assange’s actions were outside journalistic norms, with the judge approvingly quoting news articles from The Guardian and New York Times that condemned him for dumping about 250,000 stolen US diplomatic cables online in clear text.

“Free speech does not comprise a ‘trump card’ even where matters of serious public concern are disclosed,” said the judge in a passage that will be alien to American readers, whose country’s First Amendment reverses that position.

[…]

The judge also found that the one-time WikiLeaker-in-chief had directly commissioned a range of people to hack into various Western countries’ governments, banks and commercial businesses, including the Gnosis hacking crew that was active in the early 2010s.

Judge Baraitser also dismissed Assange’s legal arguments that publishing stolen US government documents on WikiLeaks was not a crime in the UK, ruling that had he been charged in the UK, he would have been guilty of offences under the Official Secrets Acts 1911-1989. Had his conduct not been a crime in the UK, that would have been a powerful blow against extradition.

[…]

Summing up the thoughts of most if not all people following Assange’s case when the verdict was given, NSA whistleblower Edward Snowden took to Twitter:

Having had all of his substantive legal arguments dismissed, there isn’t much for Assange and his supporters to cheer about today. It is certain that the US will throw as much legal muscle at the appeal as it possibly can. With some British prisoners successfully avoiding extradition by expressing suicidal thoughts, it is likely American prosecutors will want to set a UK precedent that overturns the suicide barrier.

Source: Julian Assange will NOT be extradited to the US over WikiLeaks hacking and spy charges, rules British judge • The Register