The Linkielist

Linking ideas with the world


Amazon Plans to Install Creepy Always-On Surveillance Cameras in Delivery Vans

Not content to only wield its creepy surveillance infrastructure against warehouse workers and employees considering unionization, Amazon is reportedly gearing up to install perpetually-on cameras inside its fleet of delivery vehicles as well.

A new report from The Information claims that Amazon recently shared the plans in an instructional video sent out to the contract workers who drive the Amazon-branded delivery vans.

In the video, the company reportedly explains to drivers that the high-tech video cameras will use artificial intelligence to determine when drivers are engaging in risky behavior, and will give out verbal warnings including “Distracted driving,” “No stop detected” and “Please slow down.”

According to a video posted to Vimeo a week ago, the hardware and software for the cameras will be provided through a partnership with California-based company Netradyne, which is also responsible for a platform called Driveri that similarly uses artificial intelligence to analyze a driver’s behavior as they operate a vehicle.

While the camera’s automated feedback will be immediate, other data will also reportedly be stored for later analysis that will help the company to evaluate its fleet of drivers.

Although it’s not clear when Amazon plans to install the cameras or how many of the vehicles in the company’s massive fleet will be outfitted with them, the company told The Information in a statement that the software will be implemented in the spirit of increasing safety precautions and not, you know, bolstering an insidious and growing surveillance apparatus.

Source: Amazon Plans to Install Always-On Surveillance Cameras in Delivery Vans

Synology to enforce use of validated disks in enterprise NAS boxes. And guess what? Only its own disks exceed 4TB

Synology has introduced its first-ever list of validated disks and won’t allow other drives into its enterprise-class NAS devices. And in a colossal coincidence, half of the disks allowed into its devices – and the only ones larger than 4TB – are Synology’s very own HAT 5300 disks that it launched last week.

Seeing as privately held Synology is thought to have annual revenue of around US$350m, rather less than the kind of cash required to get into the hard disk business, The Register inquired if it had really started making drives or found some other way into the industry.

The Taiwanese network-attached-storage vendor told us the drives are Synology-branded Toshiba kit, though it has written its own drive firmware, which it says delivers sequential read performance 23 per cent beyond comparable drives. Synology told us its branded disks will also be more reliable because they have undergone extensive testing in the company’s own NAS arrays.

[…]

So to cut a long story short, if you want to get the most out of Synology NAS devices, you’ll need to buy Synology’s own SATA hard disk drives.

The new policy applies as of the release of three new Synology NAS appliances intended for enterprise use and will be applied to other models over time.

The new models include the RS3621RPxs, which sports an unspecified six-core Intel Xeon processor and can handle a dozen drives, then move data over four gigabit Ethernet ports. The middle-of-the-road RS3621xs+ offers an eight-core Xeon and adds two 10GE ports. At the top of the range, the RS4021xs+ stretches to 3U and adds 16GB of RAM, eight more than found in the other two models.

[…]

Source: Synology to enforce use of validated disks in enterprise NAS boxes. And guess what? Only its own disks exceed 4TB • The Register

I guess HDD vendor lock-in is a really, really good reason not to buy Synology then.

ProtonMail, Tutanota among authors of letter urging EU to reconsider encryption rules

Encrypted service providers are urging lawmakers to back away from a controversial plan that critics say would undercut effective data protection measures.

ProtonMail, Threema, Tresorit and Tutanota — all European companies that offer some form of encrypted services — issued a joint statement this week declaring that a resolution the European Council adopted on Dec. 14 is ill-advised. That measure calls for “security through encryption and security despite encryption,” which technologists have interpreted as a threat to end-to-end encryption. In recent months governments around the world, including the U.S., U.K., Australia, New Zealand, Canada, India and Japan, have been reigniting conversations about law enforcement officials’ interest in bypassing encryption, as they have sporadically done for years.

In a letter that will be sent to council members on Thursday, the authors write that the council’s stated goal of endorsing encryption, and the council’s argument that law enforcement authorities must rely on accessing electronic evidence “despite encryption,” contradict one another. The advancement of legislation that forces technology companies to guarantee police investigators a way to intercept user messages, for instance, repeatedly has been scrutinized by technology leaders who argue there is no way to stop such a tool from being abused.

The resolution “will threaten the basic rights of millions of Europeans and undermine a global shift towards adopting end-to-end encryption,” say the companies, which offer users either encrypted email, file-sharing or messaging.

“[E]ncryption is an absolute, data is either encrypted or it isn’t, users have privacy or they don’t,” the letter, which was shared with CyberScoop in advance, states. “The desire to give law enforcement more tools to fight crime is obviously understandable. But the proposals are the digital equivalent of giving law enforcement a key to every citizens’ home and might begin a slippery slope towards greater violations of personal privacy.”

[…]

Source: ProtonMail, Tutanota among authors of letter urging EU to reconsider encryption rules

Firefox 85 removes support for Flash and adds protection against supercookies

Mozilla has released Firefox 85, ending support for the Adobe Flash Player plugin and adding ways to block supercookies to enhance a user’s privacy. Mozilla, in a blog post, noted that supercookies store user identifiers and are much more difficult to delete and block. It further noted that the changes it is making through network partitioning in Firefox 85 will “reduce the effectiveness of cache-based supercookies by eliminating a tracker’s ability to use them across websites.”

“Trackers can abuse caches to create supercookies and can use connection identifiers to track users. But by isolating caches and network connections to the website they were created on, we make them useless for cross-site tracking,” Mozilla noted.

It explained that network partitioning works by splitting the Firefox browser cache on a per-website basis, a technical solution that prevents websites from tracking users as they move across the web. Mozilla also noted that removing Flash support had little impact on page load times. The development was first reported by ZDNet.
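To make the idea concrete, here is a minimal sketch (purely illustrative; Firefox’s real partitioning lives in its C++ networking stack and covers far more than the HTTP cache) of what keying a cache by top-level site changes:

```python
# Illustrative model of cache partitioning: a cached resource is keyed by
# (top-level site, resource URL) instead of just the resource URL, so cache
# state can no longer be reused as a cross-site identifier.
# This is a toy sketch, not Firefox's actual implementation.

cache = {}

def store(top_level_site, resource_url, body):
    cache[(top_level_site, resource_url)] = body

def lookup(top_level_site, resource_url):
    return cache.get((top_level_site, resource_url))

# A tracker's script cached while visiting site-a.example ...
store("site-a.example", "https://tracker.example/pixel.js", b"...")

# ... is a cache miss when the same script is requested from site-b.example,
# so "was this already cached?" no longer leaks that you visited site-a.
assert lookup("site-b.example", "https://tracker.example/pixel.js") is None
```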

[…]

Source: Firefox 85 removes support for Flash and adds protection against supercookies – Technology News

Fedora’s Chromium maintainer suggests switching to Firefox as Google yanks features in favour of Chrome

Fedora’s maintainer for the open-source Chromium browser package is recommending users consider switching to Firefox following Google’s decision to remove functionality and make it exclusive to its proprietary Chrome browser.

The comments refer to a low-key statement Google made just before the release of Chrome 88, saying that during an audit it had “discovered that some third-party Chromium-based browsers were able to integrate Google features, such as Chrome sync and Click to Call, that are only intended for Google’s use… we are limiting access to our private Chrome APIs starting on March 15, 2021.”

Tom Callaway (aka “spot”), a former Fedora engineering manager at Red Hat (Fedora is Red Hat’s bleeding-edge Linux distro) who now works for AWS, remarked when describing the Chromium 88 build: “Google gave the builders of distribution Chromium packages these access rights back in 2013 via API keys, specifically so that we could have open-source builds of Chromium with (near) feature parity to Chrome. And now they’re taking it away.

“The reasoning given for this change? Google does not want users to be able to ‘access their personal Chrome Sync data (such as bookmarks)… with a non-Google, Chromium-based browser.’ They’re not closing a security hole, they’re just requiring that everyone use Chrome.”

[Image: Features in Chromium like data sync depend on Google APIs which are soon to be blocked]

Callaway predicted that “many (most?) users will be confused/annoyed when API functionality like sync and geolocation stops working for no good reason.” Although API access is not yet blocked, he has disabled it immediately to avoid users experiencing features that suddenly stop working for no apparent reason.

He said he is no longer sure of the value of Chromium. “I would say that you might want to reconsider whether you want to use Chromium or not. If you want the full ‘Google’ experience, you can run the proprietary Chrome. If you want to use a FOSS browser that isn’t hobbled, there is a Firefox package in Fedora,” he said.

Ahem, just ‘discovered’ this?

There is more information about these APIs on the Chromium wiki. Access to the APIs is documented, and Google’s claim that it has only just “discovered” this is an oddity. The APIs cover areas including sync, spelling, translation, Google Maps geolocation, Google Cloud Storage, safe browsing, and more.

The situation has parallels with Android, where the Android Open Source Project (AOSP) is hard to use as a mobile phone operating system because important functions are reserved for the proprietary Google Play Services. The microG project exists specifically as an attempt to mitigate the absence of these APIs from AOSP.

Something similar may now be necessary for Chromium if it is to deliver all the features users have come to expect from a web browser. It is not a problem for companies in a position to provide their own alternative services, such as Microsoft with Chromium-based Edge, but more difficult for Linux distros like Fedora.

There are other ways to look at Google’s move, though. “Some people might even consider the removal of this Google-specific functionality an improvement,” commented a Fedora user.

Microsoft reportedly removed more than 50 Google-specific services from Chromium as used in Edge, including data sync, safe browsing, maps geolocation, the Google Drive API, and more.

Users who choose Chromium over Chrome to avoid Google dependency may not realise the extent of this integration, which is likely now to reduce. The Ungoogled Chromium project not only removes Google APIs but also “blocks internal requests to Google at runtime” as a failsafe measure.

Source: Fedora’s Chromium maintainer suggests switching to Firefox as Google yanks features in favour of Chrome • The Register

Indian government slams Facebook over WhatsApp ‘privacy’ update, wants its own Europe-style opt-out switch

The Indian government has sent a fierce letter to Facebook over its decision to update the privacy rules around its WhatsApp chat service, and asked the antisocial media giant to put a halt to the plans.

In an email from the IT ministry to WhatsApp head Will Cathcart, provided to media outlets, the Indian government notes that the proposed changes “raise grave concerns regarding the implications for the choice and autonomy of Indian citizens.”

In particular, the ministry is incensed that European users will be given a choice to opt out of sharing WhatsApp data with the larger Facebook empire, as well as with businesses using the platform to communicate with customers, while Indian users will not.

“This differential and discriminatory treatment of Indian and European users is attracting serious criticism and betrays a lack of respect for the rights and interest of Indian citizens who form a substantial portion of WhatsApp’s user base,” the letter says. It concludes by asking WhatsApp to “withdraw the proposed changes.”

The reason that Europe is being treated as a special case by Facebook is, of course, the existence of the GDPR privacy rules that Facebook has repeatedly flouted and as a result faces pan-European legal action.

Source: Indian government slams Facebook over WhatsApp ‘privacy’ update, wants its own Europe-style opt-out switch • The Register

Brave Will Become First Browser To Offer IPFS peer to peer content hosting

On Tuesday, privacy-focused browser Brave released an update that makes it the first to feature peer-to-peer protocol for hosting web content.

Known as IPFS, which stands for InterPlanetary File System, the protocol allows users to load content from a decentralized network of distributed nodes rather than a centralized server. It’s new — and much-heralded — technology, and could eventually supplant the Hypertext Transfer Protocol (HTTP) that dominates our current internet infrastructure.
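The practical difference is that IPFS content is addressed by a hash of the content itself (a CID) rather than by a server name, so the same identifier can be fetched from any node or public gateway. Here is a small sketch of that idea using public HTTP gateways rather than Brave’s native ipfs:// support; the CID below is a placeholder, not a real published document:

```python
# Fetching the same content-addressed object (CID) through two different
# public IPFS gateways. Because the CID names the content, not a server,
# any gateway or local node can serve it. The CID here is a placeholder.
import requests

cid = "bafy...placeholder-cid"  # hypothetical CID; substitute a real one

for gateway in ("https://ipfs.io", "https://cloudflare-ipfs.com"):
    resp = requests.get(f"{gateway}/ipfs/{cid}", timeout=30)
    print(gateway, resp.status_code, len(resp.content), "bytes")
```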

“We’re thrilled to be the first browser to offer a native IPFS integration with today’s Brave desktop browser release,” said Brian Bondy, CTO and co-founder of Brave. “Integrating the IPFS open-source network is a key milestone in making the Web more transparent, decentralized, and resilient.”

The new protocol promises several inherent advantages over HTTP, with faster web speeds, reduced costs for publishers and a much smaller possibility of government censorship among them.

“Today, Web users across the world are unable to access restricted content, including, for example, parts of Wikipedia in Thailand, over 100,000 blocked websites in Turkey and critical access to COVID-19 information in China,” IPFS project lead Molly Mackinlay told Engadget. “Now anyone with an internet connection can access this critical information through IPFS on the Brave browser.”

In an email to Vice, IPFS founder Juan Benet said that he finds it concerning that the internet has become as centralized as it has, leaving open the possibility that it could “disappear at any moment, bringing down all the data with them—or at least breaking all the links.”

“Instead,” he continued, “we’re pushing for a fully distributed web, where applications don’t live at centralized servers, but operate all over the network from users’ computers…a web where content can move through any untrusted middlemen without giving up control of the data, or putting it at risk.”

Source: Brave Will Become First Browser To Offer IPFS

How to batch export ALL your WhatsApp chats in one go for non rooted Android on PC

It’s a process that requires quite a bit of installation and a careful read of the instructions, but it can be done.

The trick is to install an older version of WhatsApp, extract the key and then copy the message databases. Then you can decrypt the database file and read it using another program. The hardest bit is extracting the key. Once you have that, it’s all pretty fast. Apple iOS users have a definite advantage here because they can easily get to the key file.

Here’s my writeup on xda-developers.com

v4.7-E1.0

You need to download WhatsApp-2.11.431.apk and abe-all.jar
Then rename WhatsApp-2.11.431.apk to LegacyWhatsApp.apk and copy it to the tmp/ directory
Rename abe-all.jar to abe.jar and copy it to the bin/ directory

Run the script.

Make sure you enable File transfer mode on the phone after you connect it

Also, I needed to open the old version of WhatsApp before making the backup in the script – fortunately the script waits here for a password! First it wants you to update: don’t! I got a “phone date is inaccurate” error. Just wait on this screen and then continue on with the script. The script goes silent here for quite some time.
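For what it’s worth, once the key is out, the decryption itself is short. Below is a minimal Python sketch based on the commonly documented crypt12 layout; the byte offsets are assumptions taken from community write-ups and may differ between WhatsApp versions, so treat it as a starting point rather than a guaranteed recipe:

```python
# Hedged sketch: decrypt a msgstore.db.crypt12 backup with the extracted key.
# Offsets follow community documentation of the crypt12 format (assumption,
# not verified for every WhatsApp version). Requires pycryptodome.
import zlib
from Crypto.Cipher import AES  # pip install pycryptodome

key = open("key", "rb").read()[-32:]              # last 32 bytes = AES-256 key
blob = open("msgstore.db.crypt12", "rb").read()

iv = blob[51:67]                                   # 16-byte nonce (assumed offset)
ciphertext = blob[67:-20]                          # strip header/footer (assumed)

cipher = AES.new(key, AES.MODE_GCM, nonce=iv)
sqlite_db = zlib.decompress(cipher.decrypt(ciphertext))

with open("msgstore.db", "wb") as out:             # open with any SQLite browser
    out.write(sqlite_db)
```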

The best instructions are to be found here, written by PIRATA!, but they miss the few steps above.

forum.xda-developers.com

[Tool] WhatsApp Key/DB Extractor | CRYPT6-12 | NON-ROOT | UPDATED OCTOBER 2016

** Version 4.7 Updated October 2016 – Supports Android 4.0-7.0 ** SUMMARY: Allows WhatsApp users to extract their cipher key and databases on non-rooted Android devices. UPDATE: This tool was last updated on October 12th 2016 and confirmed…
Good luck!

AI upstart stealing facial data told to delete data and algorithms

Everalbum, a consumer photo app maker that shut down on August 31, 2020, and has since relaunched as a facial recognition provider under the name Paravision, on Monday reached a settlement with the FTC over the 2017 introduction of a feature called “Friends” in its discontinued Ever app. The watchdog agency claims the app deployed facial recognition code to organize users’ photos by default, without permission.

According to the FTC, between July 2018 and April 2019, Everalbum told people that it would not employ facial recognition on users’ content without consent. The company allegedly let users in certain regions – Illinois, Texas, Washington, and the EU – make that choice, but automatically activated the feature for those located elsewhere.

The agency further claims that Everalbum’s use of facial recognition went beyond supporting the Friends feature. The company is alleged to have combined users’ faces with facial images from other sources to create four datasets that informed its facial recognition technology, which became the basis of a face detection service for enterprise customers.

The company also is said to have told consumers using its app that it would delete their data if they deactivated their accounts, but didn’t do so until at least October 2019.

The FTC, in announcing the case and its settlement, said Everalbum/Paravision will be required to delete: photos and videos belonging to Ever app users who deactivated their accounts; all face embeddings – vector representations of facial features – from users who did not grant consent; and “any facial recognition models or algorithms developed with Ever users’ photos or videos.”
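For readers wondering what a “face embedding” actually is: it’s a fixed-length vector a model computes from a face image, and two photos of the same person should map to nearby vectors. A toy comparison with made-up numbers (nothing here comes from Paravision’s models or data):

```python
# Toy illustration of face embeddings: recognition works by comparing
# vectors, here via cosine similarity. These numbers are made up; real
# embeddings come from a trained model and are hundreds of values long.
import numpy as np

emb_a = np.array([0.12, -0.48, 0.33, 0.90])   # embedding of photo A (made up)
emb_b = np.array([0.10, -0.52, 0.31, 0.88])   # embedding of photo B (made up)

similarity = float(np.dot(emb_a, emb_b) /
                   (np.linalg.norm(emb_a) * np.linalg.norm(emb_b)))
print(f"cosine similarity: {similarity:.3f}")  # near 1.0 suggests the same face
```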

The FTC has not done this in past privacy cases with technology companies. According to FTC Commissioner Rohit Chopra, when Google and YouTube agreed to pay $170m over allegations the companies had collected data from children without parental consent, the FTC settlement “allowed Google and YouTube to profit from its conduct, even after paying a civil penalty.”

Likewise, when the FTC voted to approve a settlement with Facebook over claims it had violated its 2012 privacy settlement agreement, he said, Facebook did not have to give up any of its facial recognition technology or data.

“Commissioners have previously voted to allow data protection law violators to retain algorithms and technologies that derive much of their value from ill-gotten data,” said Chopra in a statement [PDF]. “This is an important course correction.”

[…]

Source: Privacy pilfering project punished by FTC purge penalty: AI upstart told to delete data and algorithms • The Register

NYPD posts surveillance technology impact and use policies and requests comments

Beginning January 11, 2021, draft surveillance technology impact and use policies will be posted on the Department’s website. Members of the public are invited to review the impact and use policies and provide feedback on their contents. The impact and use policies provide details of:

1) the capabilities of the Department’s surveillance technologies,
2) the rules regulating the use of the technologies,
3) protections against unauthorized access of the technologies or related data,
4) surveillance technologies data retention policies,
5) public access to surveillance technologies data,
6) external entity access to surveillance technologies data,
7) Department trainings in the use of surveillance technologies,
8) internal audit and oversight mechanisms of surveillance technologies,
9) health and safety reporting on the surveillance technologies, and
10) potential disparate impacts of the impact and use policies for surveillance technologies.

Source: Draft Policies for Public Comment

WhatsApp delays enforcement of privacy terms by 3 months, following backlash

WhatsApp said on Friday that it won’t enforce the planned update to its data-sharing policy until May 15, weeks after news about the new terms created confusion among its users, exposed the Facebook app to a potential lawsuit, triggered a nationwide investigation and drove tens of millions of its loyal fans to explore alternative messaging apps.

“We’re now moving back the date on which people will be asked to review and accept the terms. No one will have their account suspended or deleted on February 8. We’re also going to do a lot more to clear up the misinformation around how privacy and security works on WhatsApp. We’ll then go to people gradually to review the policy at their own pace before new business options are available on May 15,” the firm said in a blog post.

Source: WhatsApp delays enforcement of privacy terms by 3 months, following backlash | TechCrunch

I’m pretty sure there is no confusion. People just don’t want all their data shared with Facebook when they were promised it wouldn’t be. So they are leaving for Signal and Telegram.

Turkey launches antitrust probe into WhatsApp and Facebook over the new privacy policy. Still can’t export WhatsApp chats.

WhatsApp updated its privacy policy at the turn of the new year. Users were notified via a popup message upon opening the app that their data would now be shared with Facebook and other companies come February 8. Due to Facebook’s notorious history with user data and privacy, the new update has since garnered criticism, with many people migrating to alternative messaging apps like Signal and Telegram. Microsoft entered the playing field too, recommending that users use Skype in place of the Facebook-owned WhatsApp.

In the latest, Turkey has now launched an antitrust probe into Facebook and WhatsApp regarding the updated privacy policy. Bloomberg reports that:

Turkey’s antitrust board launched an investigation into Facebook Inc. and its messaging service WhatsApp Inc. over new usage terms that have sparked privacy concerns.

[…]

The regulator also said on Monday that it was halting implementation of such terms. The new terms would result in “more data being collected, processed and used by Facebook,” according to the statement.

Source: Turkey launches antitrust probe into WhatsApp and Facebook over the new privacy policy – Neowin

Every Deleted Parler Post, Many With Users’ Location Data, Has Been Archived. Parler goes down. Still can’t export your WhatsApp history.

In the wake of the violent insurrection at the U.S. Capitol by scores of President Trump’s supporters, a lone researcher began an effort to catalogue the posts of social media users across Parler, a platform founded to provide conservative users a safe haven for uninhibited “free speech” — but which ultimately devolved into a hotbed of far-right conspiracy theories, unchecked racism, and death threats aimed at prominent politicians.

The researcher, who asked to be referred to by her Twitter handle, @donk_enby, began with the goal of archiving every post from January 6, the day of the Capitol riot; what she called a bevy of “very incriminating” evidence. According to the Atlantic Council’s Digital Forensic Research Lab, among other sources, Parler is one of several apps used by the insurrectionists to coordinate their breach of the Capitol, in a plan to overturn the 2020 election results and keep Donald Trump in power.

Five people died in the attempt.

Hoping to create a lasting public record for future researchers to sift through, @donk_enby began by archiving the posts from that day. The scope of the project quickly broadened, however, as it became increasingly clear that Parler was on borrowed time. Apple and Google announced that Parler would be removed from their app stores because it had failed to properly moderate posts that encouraged violence and crime. The final nail in the coffin came Saturday when Amazon announced it was pulling Parler’s plug.

In an email first obtained by BuzzFeed News, Amazon officials told the company they planned to boot it from its cloud hosting service, Amazon Web Services, saying it had witnessed a “steady increase” in violent content across the platform. “It’s clear that Parler does not have an effective process to comply with the AWS terms of service,” the email read.

Operating on little sleep, @donk_enby began the work of archiving all of Parler’s posts, ultimately capturing around 99.9 percent of its content. In a tweet early Sunday, @donk_enby said she was crawling some 1.1 million Parler video URLs. “These are the original, unprocessed, raw files as uploaded to Parler with all associated metadata,” she said. Included in this data tranche, now more than 56 terabytes in size, @donk_enby confirmed that the raw video files include GPS metadata pointing to exact locations of where the videos were taken.

@donk_enby later shared a screenshot showing the GPS position of a particular video, with coordinates in latitude and longitude.
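The location data isn’t the result of any special wizardry: phone cameras routinely embed GPS coordinates in photo and video metadata, and Parler evidently served the original files without stripping it. A small sketch of reading those tags from a JPEG with Pillow (video containers carry similar tags but need a tool like exiftool):

```python
# Reading embedded GPS coordinates from a JPEG's EXIF data with Pillow.
# Only works if the camera wrote location tags and the hosting platform
# never stripped them, which is exactly what Parler failed to do.
from PIL import Image
from PIL.ExifTags import GPSTAGS

exif = Image.open("upload.jpg").getexif()
gps_ifd = exif.get_ifd(0x8825)  # 0x8825 is the GPSInfo IFD tag
gps = {GPSTAGS.get(tag, tag): value for tag, value in gps_ifd.items()}

print(gps.get("GPSLatitudeRef"), gps.get("GPSLatitude"),
      gps.get("GPSLongitudeRef"), gps.get("GPSLongitude"))
```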

The privacy implications are obvious, but the copious data may also serve as a fertile hunting ground for law enforcement. Federal and local authorities have arrested dozens of suspects in recent days accused of taking part in the Capitol riot, where a Capitol police officer, Brian Sicknick, was fatally wounded after being struck in the head with a fire extinguisher.

[…]

Kirtaner, creator of 420chan — a.k.a. Aubrey Cottle — reported obtaining 6.3 GB of Parler user data from an unsecured AWS server in November. The leak reportedly contained passwords, photos and email addresses from several other companies as well. Parler CEO John Matze later claimed to Business Insider that the data contained only “public information” about users, which had been improperly stored by an email vendor whose contract was subsequently terminated over the leak. (This leak is separate from the debunked claim that Parler was “hacked” in late November, proof of which was determined to be fake.)

In December, Twitter suspended Kirtaner for tweeting, “I’m killing Parler and its fucking glorious,” citing its rules against threatening “violence against an individual or group of people.” Kirtaner’s account remains suspended despite an online campaign urging Twitter’s safety team to reverse its decision. Gregg Housh, an internet activist involved in many early Anonymous campaigns, noted online that the tweet was “not aimed at a person and [was] not actually violent.”

Source: Every Deleted Parler Post, Many With Users’ Location Data, Has Been Archived

ODoH: Cloudflare and Apple design a new privacy-friendly internet protocol for DNS

Engineers at Cloudflare and Apple say they’ve developed a new internet protocol that will shore up one of the biggest holes in internet privacy that many don’t know even exists. Dubbed Oblivious DNS-over-HTTPS, or ODoH for short, the new protocol makes it far more difficult for internet providers to know which websites you visit.

But first, a little bit about how the internet works.

Every time you go to visit a website, your browser uses a DNS resolver to convert web addresses to machine-readable IP addresses so it can find where a web page is hosted on the internet. But this process is not encrypted, meaning that every time you load a website the DNS query is sent in the clear. That means the DNS resolver — which might be your internet provider unless you’ve changed it — knows which websites you visit. That’s not great for your privacy, especially since your internet provider can also sell your browsing history to advertisers.

Recent developments like DNS-over-HTTPS (or DoH) have added encryption to DNS queries, making it harder for attackers to hijack DNS queries and point victims to malicious websites instead of the real website you wanted to visit. But that still doesn’t stop the DNS resolvers from seeing which website you’re trying to visit.
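To see what an encrypted lookup looks like from the client side, Cloudflare’s resolver also exposes DoH through a JSON API, which makes it easy to poke at from a script (the endpoint and format are documented by Cloudflare; the response fields shown are illustrative). Note that the resolver itself still sees the name, which is exactly the gap ODoH is meant to close:

```python
# A DNS-over-HTTPS lookup against Cloudflare's resolver using its JSON API.
# The query travels inside TLS, so on-path observers see only that you
# talked to the resolver, not which name you asked about.
import requests

resp = requests.get(
    "https://cloudflare-dns.com/dns-query",
    params={"name": "example.com", "type": "A"},
    headers={"accept": "application/dns-json"},
    timeout=10,
)
for answer in resp.json().get("Answer", []):
    print(answer["name"], answer["data"])  # resolved records
```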

Enter ODoH, which builds on previous work by Princeton academics. In simple terms, ODoH decouples DNS queries from the internet user, preventing the DNS resolver from knowing which sites you visit.

Here’s how it works: ODoH wraps a layer of encryption around the DNS query and passes it through a proxy server, which acts as a go-between for the internet user and the website they want to visit. Because the DNS query is encrypted, the proxy can’t see what’s inside, but it acts as a shield to prevent the DNS resolver from seeing who sent the query to begin with.

“What ODoH is meant to do is separate the information about who is making the query and what the query is,” said Nick Sullivan, Cloudflare’s head of research.

In other words, ODoH ensures that only the proxy knows the identity of the internet user and that the DNS resolver only knows the website being requested. Sullivan said that page loading times on ODoH are “practically indistinguishable” from DoH and shouldn’t cause any significant changes to browsing speed.

A key component of ODoH working properly is ensuring that the proxy and the DNS resolver never “collude,” in that the two are never controlled by the same entity, otherwise the “separation of knowledge is broken,” Sullivan said. That means having to rely on companies offering to run proxies.
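A toy model of that “separation of knowledge” is sketched below. It is emphatically not the real protocol (actual ODoH uses HPKE public-key encryption and the DNS wire format; Fernet here is just a stand-in), but it shows which party gets to see what:

```python
# Toy model of ODoH's separation of knowledge. Not the real protocol:
# actual ODoH encrypts DNS messages with HPKE to the resolver's public key.
from cryptography.fernet import Fernet  # pip install cryptography

resolver_key = Fernet.generate_key()     # stand-in for the resolver's key
resolver_box = Fernet(resolver_key)

def client_builds_query(name):
    # The client encrypts the question so only the resolver can read it.
    return resolver_box.encrypt(name.encode())

def proxy_forwards(client_ip, blob):
    # The proxy learns who asked, but the question is an opaque blob to it.
    print(f"proxy: {len(blob)} opaque bytes from {client_ip}")
    return blob  # forwarded without the client's address

def resolver_answers(blob):
    # The resolver learns the question, but only ever sees the proxy's address.
    name = resolver_box.decrypt(blob).decode()
    print(f"resolver: looking up {name} (asked by: the proxy)")
    return "203.0.113.7"  # placeholder answer

resolver_answers(proxy_forwards("198.51.100.4", client_builds_query("example.com")))
```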

Sullivan said a few partner organizations are already running proxies, allowing for early adopters to begin using the technology through Cloudflare’s existing 1.1.1.1 DNS resolver. But most will have to wait until ODoH is baked into browsers and operating systems before it can be used. That could take months or years, depending on how long it takes for ODoH to be certified as a standard by the Internet Engineering Task Force.

Source: Cloudflare and Apple design a new privacy-friendly internet protocol | TechCrunch

Content Moderation Case Study: SoundCloud Combats Piracy By Giving Universal Music The Power To Remove Uploads (2014)

In most cases, allegedly infringing content is removed at the request of rights holders following the normal DMCA takedown process. A DMCA notice is issued and the site responds by removing the content and — in some cases — allowing the uploader to challenge the takedown.

SoundCloud has positioned itself as a host of user-created audio content, relying on content creators to upload original works. But, like any content hosting site, it often found itself hosting infringing content not created by the uploader.

Realizing the potential for SoundCloud to be overrun with infringing content, the platform became far more proactive as it gained users and funding.

Rather than allow the normal DMCA process to work, SoundCloud allowed one major label to set the terms of engagement. This partnership resulted in Universal being able to unilaterally remove content it believed was infringing without any input from SoundCloud or use of the normal DMCA process.

One user reported his account was closed due to alleged infringement contained in his uploaded radio shows. When he attempted to dispute the removals and the threatened shuttering of his account, he was informed by the platform it was completely out of SoundCloud’s hands.

Your uploads were removed directly by Universal. This means that SoundCloud had no control over it, and they don’t tell us which part of your upload was infringing.

The control of removing content is completely with Universal. This means I can’t tell you why they removed your uploads and not others, and you would really need to ask them that question.

Unfortunately, there was no clear appeal process for disputing the takedown, leaving the user without his account or his uploads.

[…]

SoundCloud continues to allow labels like Universal to perform content removals without utilizing the DMCA process or engaging with the platform directly. Users are still on their own when it comes to content declared infringing by labels. This appears to flow directly from SoundCloud’s long-running efforts to secure licensing agreements with major labels. And that appears to flow directly from multiple threats of copyright litigation from some of the same labels SoundCloud is now partnered with.

Source: Content Moderation Case Study: SoundCloud Combats Piracy By Giving Universal Music The Power To Remove Uploads (2014) | Techdirt

And having said that, the DMCA is a process that is very, very far from perfect, and it is used by the big boys with deep lawyer pockets to bully smaller players in the market.

WhatsApp locks you in – you can’t export all your chats. And now it will share everything with Facebook

Yes, you can back up your database and, if you’re rooted, you can find a key to it, but that will give you the chat in database form. You won’t get your pictures and videos etc. You can download them separately though.

Yes, you can export a single chat / chatgroup, but doing that for the thousands of chats you probably have is not practically possible.

So you’re stuck. Either you share all your data with Facebook or you get rid of your chat history.

Lock down the permissions WhatsApp has – it has way too many!

WhatsApp Has Shared Your Data With Facebook since 2016, actually.

Since Facebook acquired WhatsApp in 2014, users have wondered and worried about how much data would flow between the two platforms. Many of them experienced a rude awakening this week, as a new in-app notification raises awareness about a step WhatsApp actually took to share more with Facebook back in 2016.

On Monday, WhatsApp updated its terms of use and privacy policy, primarily to expand on its practices around how WhatsApp business users can store their communications. A pop-up has been notifying users that as of February 8, the app’s privacy policy will change and they must accept the terms to keep using the app. As part of that privacy policy refresh, WhatsApp also removed a passage about opting out of sharing certain data with Facebook: “If you are an existing user, you can choose not to have your WhatsApp account information shared with Facebook to improve your Facebook ads and products experiences.”

Some media outlets and confused WhatsApp users understandably assumed that this meant WhatsApp had finally crossed a line, requiring data-sharing with no alternative. But in fact the company says that the privacy policy deletion simply reflects how WhatsApp has shared data with Facebook since 2016 for the vast majority of its now 2 billion-plus users.

When WhatsApp launched a major update to its privacy policy in August 2016, it started sharing user information and metadata with Facebook. At that time, the messaging service offered its billion existing users 30 days to opt out of at least some of the sharing. If you chose to opt out at the time, WhatsApp will continue to honor that choice. The feature is long gone from the app settings, but you can check whether you’re opted out through the “Request account info” function in Settings.

Meanwhile, the billion-plus users WhatsApp has added since 2016, along with anyone who missed that opt-out window, have had their data shared with Facebook all this time. WhatsApp emphasized to WIRED that this week’s privacy policy changes do not actually impact WhatsApp’s existing practices or behavior around sharing data with Facebook.

[…]

None of this has at any point impacted WhatsApp’s marquee feature: end-to-end encryption. Messages, photos, and other content you send and receive on WhatsApp can only be viewed on your smartphone and the devices of the people you choose to message with. WhatsApp and Facebook itself can’t access your communications.

[…]

In practice, this means that WhatsApp shares a lot of intel with Facebook, including  account information like your phone number, logs of how long and how often you use WhatsApp, information about how you interact with other users, device identifiers, and other device details like IP address, operating system, browser details, battery health information, app version, mobile network, language and time zone. Transaction and payment data, cookies, and location information are also all fair game to share with Facebook depending on the permissions you grant WhatsApp in the first place.

[…]

Source: WhatsApp Has Shared Your Data With Facebook for Years, Actually | WIRED

If you’re a WhatsApp user, you’ll have to share your personal data with Facebook from next month – and no, you can’t opt out this time

WhatsApp users must agree to share their personal information with Facebook if they want to continue using the messaging service from next month, according to new terms and conditions.

“As part of the Facebook Companies, WhatsApp receives information from, and shares information with, the other Facebook Companies,” its privacy policy, updated this week, stated.

“We may use the information we receive from them, and they may use the information we share with them, to help operate, provide, improve, understand, customize, support, and market our Services and their offerings, including the Facebook Company Products.”

Yes, said information includes your personal information. Thus, in other words, WhatsApp users must allow their personal info to be shared with Facebook and its subsidiaries as and when decided by the tech giant. Presumably, this is to serve personalized advertising.

If you’re a user today, you have two choices: accept this new arrangement, or stop using the end-to-end encrypted chat app (and use something else, like Signal.) The changes are expected to take effect on February 8.

When WhatsApp was acquired by Facebook in 2014, it promised netizens that its instant-messaging app would not collect names, addresses, internet searches, or location data. CEO Jan Koum wrote in a blog post: “Above all else, I want to make sure you understand how deeply I value the principle of private communication. For me, this is very personal. I was born in Ukraine, and grew up in the USSR during the 1980s.

“One of my strongest memories from that time is a phrase I’d frequently hear when my mother was talking on the phone: ‘This is not a phone conversation; I’ll tell you in person.’ The fact that we couldn’t speak freely without the fear that our communications would be monitored by KGB is in part why we moved to the United States when I was a teenager.”

Two years later, however, that vow was eroded by, well, capitalism, and WhatsApp decided it would share its users’ information with Facebook though only if they consented. That ability to opt-out, however, will no longer be an option from next month. Koum left in 2018.

That means users who wish to keep using WhatsApp must be prepared to give up personal info such as their names, profile pictures, status updates, phone numbers, contacts lists, and IP addresses, as well as data about their mobile devices, such as model numbers, operating system versions, and network carrier details, to the mothership. If users engage with businesses via the app, order details such as shipping addresses and the amount of money spent can be passed to Facebook, too.

Source: If you’re a WhatsApp user, you’ll have to share your personal data with Facebook from next month – and no, you can’t opt out this time • The Register

Singapore police can now access data from the country’s contact tracing app

With a nearly 80 percent uptake among the country’s population, Singapore’s TraceTogether app is one of the best examples of what a successful centralized contact tracing effort can look like as countries across the world struggle to contain the coronavirus pandemic. To date, more than 4.2 million people in Singapore have downloaded the app or obtained the wearable the government has offered to people.

In contrast to Apple’s and Google’s Exposure Notifications System — which powers the majority of COVID-19 apps out there, including ones put out by states and countries like California and Germany — Singapore’s TraceTogether app and wearable use the country’s own internally developed BlueTrace protocol. The protocol relies on a centralized reporting structure wherein a user’s entire contact log is uploaded to a server administered by a government health authority. Outside of Singapore, only Australia has so far adopted the protocol.
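The centralized part is the crux: in a BlueTrace-style design the phone logs encounters locally, but on request the entire log is uploaded to the health authority, which can map the rotating TempIDs back to phone numbers. A rough sketch of the kind of record involved (field names are illustrative, not taken from the official specification):

```python
# Rough sketch of a BlueTrace-style centralized encounter log. Field names
# are illustrative, not the official spec. The privacy crux is the final
# step: the whole log leaves the device for a central server that can
# resolve TempIDs back to real identities.
from dataclasses import dataclass, asdict
import json, time

@dataclass
class Encounter:
    temp_id: str       # rotating ID issued by the health authority
    rssi: int          # Bluetooth signal strength, a rough proximity measure
    timestamp: float

log = [
    Encounter("hypothetical-tempid-1", -60, time.time()),
    Encounter("hypothetical-tempid-2", -78, time.time()),
]

def upload_on_request(entries):
    # Centralized model: the complete log is handed to the authority's server.
    return json.dumps([asdict(e) for e in entries])

payload = upload_on_request(log)
print(payload)
```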

In an update the government made to the platform’s privacy policy on Monday, it added a paragraph about how police can use data collected through the platform. “TraceTogether data may be used in circumstances where citizen safety and security is or has been affected,” the new paragraph states. “Authorized Police officers may invoke Criminal Procedure Code (CPC) powers to request users to upload their TraceTogether data for criminal investigations.”

Previous versions of the privacy policy made no mention of the fact police could access any data collected by the app; in fact, the website used to say, “data will only be used for COVID-19 contact tracing.” The government added the paragraph after Singapore’s opposition party asked the Minister of State for Home Affairs if police could use the data for criminal investigations. “We do not preclude the use of TraceTogether data in circumstances where citizens’ safety and security is or has been affected, and this applies to all other data as well,” said Minister Desmond Tan.

What’s happening in Singapore is an example of the exact type of potential privacy nightmare that experts warned might happen with centralized digital contact tracing efforts. Worse, a loss of trust in the privacy of data could push people further away from contact tracing efforts altogether, putting everyone at more risk.

Source: Singapore police can access data from the country’s contact tracing app | Engadget

Julian Assange will NOT be extradited to the US over WikiLeaks hacking and spy charges, rules British judge

Accused hacker and WikiLeaks founder Julian Assange should not be extradited to the US to stand trial, Westminster Magistrates’ Court has ruled.

District Judge Vanessa Baraitser told Assange this morning that there was no legal obstacle to his being sent to the US, where he faces multiple criminal charges under America’s Espionage Act and Computer Fraud and Abuse Act over his WikiLeaks website.

Assange is a suicide risk and the judge decided not to order his extradition to the US, despite giving a ruling in which she demolished all of his legal team’s other arguments against extradition.

“I am satisfied that the risk that Mr Assange will commit suicide is a substantial one,” said the judge, sitting at the Old Bailey, in this morning’s ruling. Adopting the conclusions of medical expert Professor Michael Kopelman, an emeritus professor of neuropsychiatry at King’s College London, Judge Baraitser continued:

Taking account of all of the information available to him, he considered Mr Assange’s risk of suicide to be very high should extradition become imminent. This was a well-informed opinion carefully supported by evidence and explained over two detailed reports.

[…]

All other legal arguments against extradition rejected

Judge Baraitser heard from Assange’s lawyers during this case that he was set to be extradited because he had politically embarrassed the US, rather than committed any genuine criminal offence.

Nonetheless, US lawyers successfully argued that Assange’s actions were outside journalistic norms, with the judge approvingly quoting news articles from The Guardian and New York Times that condemned him for dumping about 250,000 stolen US diplomatic cables online in clear text.

“Free speech does not comprise a ‘trump card’ even where matters of serious public concern are disclosed,” said the judge in a passage that will be alien to American readers, whose country’s First Amendment reverses that position.

[…]

The judge also found that the one-time WikiLeaker-in-chief had directly commissioned a range of people to hack into various Western countries’ governments, banks and commercial businesses, including the Gnosis hacking crew that was active in the early 2010s.

Judge Baraitser also dismissed Assange’s legal arguments that publishing stolen US government documents on WikiLeaks was not a crime in the UK, ruling that had he been charged in the UK, he would have been guilty of offences under the Official Secrets Acts 1911-1989. Had his conduct not been a crime in the UK, that would have been a powerful blow against extradition.

[…]

Summing up the thoughts of most if not all people following Assange’s case when the verdict was given, NSA whistleblower Edward Snowden took to Twitter:

Having had all of his substantive legal arguments dismissed, there isn’t much for Assange and his supporters to cheer about today. It is certain that the US will throw as much legal muscle at the appeal as it possibly can. With some British prisoners successfully avoiding extradition by expressing suicidal thoughts, it is likely American prosecutors will want to set a UK precedent that overturns the suicide barrier.

Source: Julian Assange will NOT be extradited to the US over WikiLeaks hacking and spy charges, rules British judge • The Register

Access To Big Data Turns Farm Machine Makers Into Tech Firms

The combine harvester, a staple of farmers’ fields since the late 1800s, does much more these days than just vacuum up corn, soybeans and other crops. It also beams back reams of data to its manufacturer.

GPS records the combine’s precise path through the field. Sensors tally the number of crops gathered per acre and the spacing between them. On a sister machine called a planter, algorithms adjust the distribution of seeds based on which parts of the soil have in past years performed best. Another machine, a sprayer, uses algorithms to scan for weeds and zap them with pesticides. All the while sensors record the wear and tear on the machines, so that when the farmer who operates them heads to the local distributor to look for a replacement part, it has already been ordered and is waiting for them.

Farming may be an earthy industry, but much of it now takes place in the cloud. Leading farm machine makers like Chicago-based John Deere & Co. or Duluth’s AGCO collect data from all around the world thanks to the ability of their bulky machines to extract a huge variety of metrics from farmers’ fields and store them online. The farmers who sit in the driver’s seats of these machines have access to the data that they themselves accumulate, but legal murk obfuscates the question of whether they actually own that data, and only the machine manufacturer can see all the data from all the machines it has leased or sold.

[…]

Still, farmers have yet to be fully won over. Many worry that by allowing the transfer of their data to manufacturers, it will inadvertently wind up in the hands of neighboring farmers with whom they compete for scarce land, who could then mine their closely guarded information about the number of acres they plow or the types of fertilizers and pesticides they use, thus gaining a competitive edge. Others fear that information about the type of seeds or fertilizer they use will wind up in the hands of the chemicals companies they buy from, allowing those companies to anticipate their product needs and charge them more, said Jonathan Coppess, a professor at the University of Illinois.

Sensitive to the suggestion that they are infringing on privacy, the largest equipment makers say they don’t share farmers’ data with third parties unless farmers give permission. (Farmers frequently agree to share data with, for example, their local distributors and dealers.)

It’s common to hear that farmers are, by nature, highly protective of their land and business, and that this predisposes them to worry about sharing data even when there are more potential benefits than drawbacks. Still, the concerns are at least partly the result of a lack of legal and regulatory standards around the collection of data from smart farming technologies, observers say. Contracts to buy or rent big machines are many pages long and the language unclear, especially since some of the underlying legal concepts regarding the sharing and collecting of agricultural data are still evolving.

As one 2019 paper puts it, “the lack of transparency and clarity around issues such as data ownership, portability, privacy, trust and liability in the commercial relationships governing smart farming are contributing to farmers’ reluctance to engage in the widespread sharing of their farm data that smart farming facilitates. At the heart of the concerns is the lack of trust between the farmers as data contributors, and those third parties who collect, aggregate and share their data.”

[…]

Some farmers may still find themselves surprised to discover the amount of access Deere and others have to their data. Jacob Maurer is an agronomist with RDO Equipment Co., a Deere dealer, who helps farmers understand how to use their data to work their fields more efficiently. He explained that some farmers would be shocked to learn how much information about their fields he can access by simply tapping into Deere’s vast online stores of data and pulling up their details.

[…]

Based on the mountains of data flowing into their databases, equipment makers with sufficient sales of machines around the country may in theory be able to predict, at least to some small but meaningful extent, the prices of various crops by analyzing the data their machines are sending in — such as “yields” of crops per acre, the amount of fertilizer used, or the average number of seeds of a given crop planted in various regions, all of which would help to anticipate the supply of crops come harvest season.

Were the company then to sell that data to a commodities trader, say, it could likely reap a windfall. Normally, the markets must wait for highly-anticipated government surveys to run their course before having an indication of the future supply of crops. The agronomic data that machine makers collect could offer similar insights but far sooner.

Machine makers don’t deny the obvious value of the data they collect. As AGCO’s Crawford put it: “Anybody that trades grains would love to have their hands on this data.”

Experts occasionally wonder about what companies could do with the data. Mary Kay Thatcher, a former official with the American Farm Bureau, raised just such a concern in an interview with National Public Radio in 2014, when questions about data ownership were swirling after Monsanto began deploying a new “precision planting” tool that required it to have gobs of data.

“They could actually manipulate the market with it. You know, they only have to know the information about what’s actually happening with harvest minutes before somebody else knows it,” Thatcher said in the interview.

“Not saying they will. Just a concern.”

Source: Access To Big Data Turns Farm Machine Makers Into Tech Firms

Apple Told This Developer That His App ‘Promoted’ Drugs – after 6 years in the store

In Apple’s world, an app can be inappropriate one day, but acceptable the next. That’s what the developer of Amphetamine—an app designed to keep Macs from going to sleep, which is useful in situations such as when a file is downloading or when a specific app is running—learned recently when Apple got in touch with him and told him that his app violated the company’s App Store guidelines.

Amphetamine developer William Gustafson published an account of the incident and his experience with Apple’s App Store review team on GitHub on Friday. In the post, Gustafson explained that Apple contacted him on Dec. 29 and told him that Amphetamine, which has been on the Mac App Store for six years, had suddenly begun violating one of the company’s App Store guidelines. Specifically, Gustafson said that Apple claimed that Amphetamine appeared to promote the inappropriate use of controlled substances given its very name—amphetamines are used to treat ADHD—and because its icon includes a pill.

[…]

“As we discussed, we found that your app includes content that some users may find upsetting, offensive, or otherwise objectionable,” an Apple representative told Gustafson on Dec. 29 according to a screenshot shared with Gizmodo. “Specifically, your app name and icon include references to controlled substances, pills.”

The representative then brought up App Store Guideline 1.4.3, which pertains to safety and physical harm. The guideline reads as follows:

“Apps that encourage consumption of tobacco and vape products, illegal drugs, or excessive amounts of alcohol are not permitted on the App Store. Apps that encourage minors to consume any of these substances will be rejected. Facilitating the sale of marijuana, tobacco, or controlled substances (except for licensed pharmacies) isn’t allowed.”

To resolve the issue, the Apple representative said that Gustafson had to remove all content that encourages inappropriate consumption of drugs or alcohol. Gustafson explained in his GitHub post that Apple had threatened to remove Amphetamine from the Mac App Store on Jan. 12 if he did not comply with its request for changes.

If this is all sounding a bit wild to you, that’s because it is. Although Amphetamine uses its name and branding to lightheartedly convey the fact that the app will prevent your Mac from going to sleep, it does not do anything that violates Guideline 1.4.3.
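For context, what Amphetamine does is hold a macOS power assertion so the machine stays awake; Apple even ships a command-line wrapper for the same mechanism, caffeinate. A minimal sketch (a generic macOS technique, not Amphetamine’s own code) of keeping a Mac awake while a long task runs:

```python
# Keep a Mac awake for the duration of a command using the built-in
# `caffeinate` tool (-i prevents idle sleep while the wrapped command runs).
# Generic macOS technique for illustration, not Amphetamine's implementation.
import subprocess

def run_without_sleeping(cmd):
    # caffeinate holds an idle-sleep assertion until cmd exits.
    return subprocess.run(["caffeinate", "-i", *cmd]).returncode

run_without_sleeping(["curl", "-O", "https://example.com/big-file.iso"])
```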

Source: Apple Told This Developer That His App ‘Promoted’ Drugs

China’s Secret War for U.S. Data Blew American Spies’ Cover

Around 2013, U.S. intelligence began noticing an alarming pattern: Undercover CIA personnel, flying into countries in Africa and Europe for sensitive work, were being rapidly and successfully identified by Chinese intelligence, according to three former U.S. officials. The surveillance by Chinese operatives began in some cases as soon as the CIA officers had cleared passport control. Sometimes, the surveillance was so overt that U.S. intelligence officials speculated that the Chinese wanted the U.S. side to know they had identified the CIA operatives, disrupting their missions; other times, however, it was much more subtle and only detected through U.S. spy agencies’ own sophisticated technical countersurveillance capabilities.

[…]

CIA officials believed the answer was likely data-driven—and related to a Chinese cyberespionage campaign devoted to stealing vast troves of sensitive personal private information, like travel and health data, as well as U.S. government personnel records. U.S. officials believed Chinese intelligence operatives had likely combed through and synthesized information from these massive, stolen caches to identify the undercover U.S. intelligence officials. It was very likely a “suave and professional utilization” of these datasets, said the same former intelligence official. This “was not random or generic,” this source said. “It’s a big-data problem.”

[…]

In 2010, a new decade was dawning, and Chinese officials were furious. The CIA, they had discovered, had systematically penetrated their government over the course of years, with U.S. assets embedded in the military, the CCP, the intelligence apparatus, and elsewhere. The anger radiated upward to “the highest levels of the Chinese government,” recalled a former senior counterintelligence executive.

Exploiting a flaw in the online system CIA operatives used to secretly communicate with their agents—a flaw first identified in Iran, which Tehran likely shared with Beijing—from 2010 to roughly 2012, Chinese intelligence officials ruthlessly uprooted the CIA’s human source network in China, imprisoning and killing dozens of people.

[…]

The anger in Beijing wasn’t just because of the penetration by the CIA but because of what it exposed about the degree of corruption in China. When the CIA recruits an asset, the further this asset rises within a country’s power structure, the better. During the Cold War it had been hard to guarantee the rise of the CIA’s Soviet agents; the very factors that made them vulnerable to recruitment—greed, ideology, blackmailable habits, and ego—often impeded their career prospects. And there was only so much that money could buy in the Soviet Union, especially with no sign of where it had come from.

But in the newly rich China of the 2000s, dirty money was flowing freely. The average income remained under 2,000 yuan a month (approximately $240 at contemporary exchange rates), but officials’ informal earnings vastly exceeded their formal salaries. An official who wasn’t participating in corruption was deemed a fool or a risk by his colleagues. Cash could buy anything, including careers, and the CIA had plenty of it.

[…]

Over the course of their investigation into the CIA’s China-based agent network, Chinese officials learned that the agency was secretly paying the “promotion fees”—in other words, the bribes—regularly required to rise up within the Chinese bureaucracy, according to four current and former officials. It was how the CIA got “disaffected people up in the ranks. But this was not done once, and wasn’t done just in the [Chinese military],” recalled a current Capitol Hill staffer. “Paying their bribes was an example of long-term thinking that was extraordinary for us,” said a former senior counterintelligence official. “Recruiting foreign military officers is nearly impossible. It was a way to exploit the corruption to our advantage.” At the time, “promotion fees” sometimes ran into the millions of dollars, according to a former senior CIA official: “It was quite amazing the level of corruption that was going on.” The compensation sometimes included paying tuition and board for children studying at expensive foreign universities, according to another CIA officer.

[…]

This was a global problem for the CCP. Corrupt officials, even if they hadn’t been recruited by the CIA while in office, also often sought refuge overseas—where they could then be tapped for information by enterprising spy services. In late 2012, party head Xi Jinping announced a new anti-corruption campaign that would lead to the prosecution of hundreds of thousands of Chinese officials. Thousands were subject to extreme coercive pressure, bordering on kidnapping, to return from living abroad. “The anti-corruption drive was about consolidating power—but also about how Americans could take advantage of [the corruption]. And that had to do with the bribe and promotion process,” said the former senior counterintelligence official.

The 2013 leaks from Edward Snowden, which revealed the NSA’s deep penetration of the telecommunications company Huawei’s China-based servers, also jarred Chinese officials, according to a former senior intelligence analyst.

[…]

By about 2010, two former CIA officials recalled, the Chinese security services had instituted a sophisticated travel intelligence program, developing databases that tracked flights and passenger lists for espionage purposes. “We looked at it very carefully,” said the former senior CIA official. China’s spies “were actively using that for counterintelligence and offensive intelligence. The capability was there and was being utilized.” China had also stepped up its hacking efforts targeting biometric and passenger data from transit hubs, former intelligence officials say—including a successful hack by Chinese intelligence of biometric data from Bangkok’s international airport.

To be sure, China had stolen plenty of data before discovering how deeply infiltrated it was by U.S. intelligence agencies. However, the shake-up between 2010 and 2012 gave Beijing an impetus not only to go after bigger, riskier targets, but also to put together the infrastructure needed to process the purloined information. It was around this time, said a former senior NSA official, that Chinese intelligence agencies transitioned from merely being able to steal large datasets en masse to actually rapidly sifting through information from within them for use. U.S. officials also began to observe that intelligence facilities within China were being physically co-located near language and data processing centers, said this person.

For U.S. intelligence personnel, these new capabilities made China’s successful hack of the U.S. Office of Personnel Management (OPM) that much more chilling. During the OPM breach, Chinese hackers stole detailed, often highly sensitive personnel data from 21.5 million current and former U.S. officials, their spouses, and job applicants, including health, residency, employment, fingerprint, and financial data. In some cases, details from background investigations tied to the granting of security clearances—investigations that can delve deeply into individuals’ mental health records, their sexual histories and proclivities, and whether a person’s relatives abroad may be subject to government blackmail—were stolen as well. Though the United States did not disclose the breach until 2015, U.S. intelligence officials became aware of the initial OPM hack in 2012, said the former counterintelligence executive. (It’s not clear precisely when the compromise actually happened.)

[…]

For some at the CIA, recalled Gail Helt, a former CIA China analyst, the reaction to the OPM breach was, “Oh my God, what is this going to mean for everybody who had ever traveled to China? But also what is it going to mean for people who we had formally recruited, people who might be suspected of talking to us, people who had family members there? And what will this mean for agency efforts to recruit people in the future? It was terrifying. Absolutely terrifying.” Many feared the aftershocks would be widespread. “The concern wasn’t just that [the OPM hack] would curtail info inside China,” said a former senior national security official. “The U.S. and China bump up against each other around the world. It opened up a global Pandora’s box of problems.”

[…]

During this same period, U.S. officials concluded that Russian intelligence officials, likely exploiting a difference in payroll payments between real State Department employees and undercover CIA officers, had identified some of the CIA personnel working at the U.S. Embassy in Moscow. Officials thought that this insight may have come from data derived from the OPM hack, provided by the Chinese to their Russian counterparts. U.S. officials also wondered whether the OPM hack could be related to an uptick in attempted recruitments by Chinese intelligence of Chinese American translators working for U.S. intelligence agencies when they visited family in China. “We also thought they were trying to get Mandarin speakers to apply for jobs as translators” within the U.S. intelligence community, recalled the former senior counterintelligence official. U.S. officials believed that Chinese intelligence was giving their agents “instructions on how to pass a polygraph.”

But after the OPM breach, anomalies began to multiply. In 2012, senior U.S. spy hunters began to puzzle over some “head-scratchers”: In a few cases, spouses of U.S. officials whose sensitive work should have been difficult to discern were being approached by Chinese and Russian intelligence operatives abroad, according to the former counterintelligence executive. In one case, Chinese operatives tried to harass and entrap a U.S. official’s wife while she accompanied her children on a school field trip to China. “The MO is that, usually at the end of the trip, the lightbulb goes on [and the foreign intelligence service identifies potential persons of interest]. But these were from day one, from the airport onward,” the former official said.

[…]

Source: China’s Secret War for U.S. Data Blew American Spies’ Cover

Firefox to ship ‘network partitioning’ as a new anti-tracking defense

Firefox 85, scheduled to be released next month, in January 2021, will ship with a feature named Network Partitioning as a new form of anti-tracking protection.

The feature is based on “Client-Side Storage Partitioning,” a new standard currently being developed by the World Wide Web Consortium’s Privacy Community Group.

“Network Partitioning is highly technical, but to simplify it somewhat; your browser has many ways it can save data from websites, not just via cookies,” privacy researcher Zach Edwards told ZDNet in an interview this week.

“These other storage mechanisms include the HTTP cache, image cache, favicon cache, font cache, CORS-preflight cache, and a variety of other caches and storage mechanisms that can be used to track people across websites.”

Edwards says all these data storage systems are shared among websites.

The difference is that Network Partitioning will allow Firefox to save resources like the cache, favicons, CSS files, images, and more, on a per-website basis, rather than together, in the same pool.

This makes it harder for websites and third parties like ad and web analytics companies to track users, since they can’t probe for the presence of other sites’ data in this shared pool.
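To make that probing concrete, here is a minimal sketch in TypeScript of the kind of cache-timing check partitioning is meant to defeat. It is an assumed illustration only (the resource URL and the 10 ms threshold are invented, and this is not any real tracker’s code): with a cache shared across all sites, a near-instant fetch of a resource that only one particular site embeds suggests the resource was already cached, i.e. that the user has visited that site. Once the cache is partitioned per top-level site, the probing page’s partition never contains the other site’s copy, so the timing signal disappears.

// Runs in the browser. Assumed illustration: URL and threshold are invented.
async function probablyVisited(resourceUrl: string): Promise<boolean> {
  const start = performance.now();
  // 'no-store' is deliberately not requested, so a cached copy may be served.
  await fetch(resourceUrl, { mode: "no-cors", credentials: "omit" });
  const elapsed = performance.now() - start;
  // A near-instant response hints at a cache hit; 10 ms is an arbitrary cutoff.
  return elapsed < 10;
}

// Probe a (hypothetical) asset that only example-news.com embeds.
probablyVisited("https://example-news.com/assets/logo.svg")
  .then(hit => console.log(hit ? "likely visited" : "likely not visited"));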

According to Mozilla, the following network resources will be partitioned starting with Firefox 85:

  • HTTP cache
  • Image cache
  • Favicon cache
  • Connection pooling
  • StyleSheet cache
  • DNS
  • HTTP authentication
  • Alt-Svc
  • Speculative connections
  • Font cache
  • HSTS
  • OCSP
  • Intermediate CA cache
  • TLS client certificates
  • TLS session identifiers
  • Prefetch
  • Preconnect
  • CORS-preflight cache
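
Conceptually, the change amounts to adding the top-level site to the key under which each of those resources is cached. The sketch below is an assumed illustration of that idea only, not Firefox’s actual implementation (the class, key format, and example URLs are invented): because site A and site B each get their own copy of the same resource, neither can observe whether the other has already fetched it.

// Assumed illustration of a per-site ("partitioned") cache, in TypeScript.
class PartitionedCache {
  private entries = new Map<string, Uint8Array>();

  // Key format: "<top-level site>|<resource URL>" (invented for this sketch).
  private key(topLevelSite: string, resourceUrl: string): string {
    return `${topLevelSite}|${resourceUrl}`;
  }

  get(topLevelSite: string, resourceUrl: string): Uint8Array | undefined {
    return this.entries.get(this.key(topLevelSite, resourceUrl));
  }

  put(topLevelSite: string, resourceUrl: string, body: Uint8Array): void {
    this.entries.set(this.key(topLevelSite, resourceUrl), body);
  }
}

// The same font cached while browsing site A is invisible while browsing site B.
const cache = new PartitionedCache();
cache.put("https://site-a.example", "https://fonts.example/roboto.woff2", new Uint8Array([1]));
console.log(cache.get("https://site-b.example", "https://fonts.example/roboto.woff2")); // undefined

This duplication is also why partitioning carries the performance cost discussed below: a widely shared resource such as a web font is downloaded once per site instead of once per browser.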

But while Mozilla will be deploying the broadest user data “partitioning system” to date, the Firefox creator isn’t the first.

Edwards said the first browser maker to do so was Apple, in 2013, when it began partitioning the HTTP cache, and then followed through by partitioning even more user data storage systems years later, as part of its Tracking Prevention feature.

Google also partitioned the HTTP cache last month, with the release of Chrome 86, and the results were felt right away: Google Fonts lost some of its performance advantage because fonts could no longer be reused from a single HTTP cache shared across sites.

The Mozilla team expects similar performance issues for sites loaded in Firefox, but it’s willing to take the hit just to improve the privacy of its users.

“Most policy makers and digital strategists are focused on the death of the 3rd party cookie, but there are a wide variety of other fingerprinting techniques and user tracking strategies that need to be broken by browsers,” Edwards also told ZDNet, lauding Mozilla’s move.

PS: Mozilla also said that a side-effect of deploying Network Partitioning is that Firefox 85 will finally be able to better block “supercookies”, identifiers that abuse various shared storage mechanisms to persist in the browser and allow advertisers to track user movements across the web.
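As a rough sense of how a shared storage mechanism can be abused that way, here is a minimal server-side sketch of the classic ETag trick, written in TypeScript for Node.js. It is an assumed illustration, not any particular tracker’s code (the port and identifier format are invented): the server hands out a unique ETag with a shared resource, and because the browser echoes that ETag back in If-None-Match whenever it revalidates its cached copy, the identifier follows the user to every site that embeds the resource. With the cache partitioned per top-level site, each site ends up with its own cached copy and therefore its own identifier, so cross-site re-identification fails.

// Assumed illustration of an ETag-based "supercookie" server (Node.js + TypeScript).
import { createServer } from "node:http";
import { randomUUID } from "node:crypto";

createServer((req, res) => {
  const echoed = req.headers["if-none-match"];
  if (echoed) {
    // Returning visitor: the echoed ETag acts as a persistent identifier.
    console.log(`re-identified ${echoed} via ${req.headers.referer ?? "unknown site"}`);
    res.writeHead(304, { ETag: echoed });
    res.end();
    return;
  }
  // First visit: mint an identifier and smuggle it into the ETag.
  // "no-cache" forces the browser to revalidate (and echo the ETag) on reuse.
  const id = `"${randomUUID()}"`;
  res.writeHead(200, { ETag: id, "Cache-Control": "no-cache", "Content-Type": "image/gif" });
  res.end(); // response body omitted for brevity
}).listen(8080);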

Source: Firefox to ship ‘network partitioning’ as a new anti-tracking defense | ZDNet

French Film Company Somehow Trademarks ‘Planet’, Goes After Environmental NGOs For Using The Word

We cover a great many ridiculous and infuriating trademark disputes here, but it’s always the disputes around overly broad terms that never should have been trademarked to begin with that are the most frustrating. And the most irritating of those is when we get into geographic terms that never should be locked up by any single company or entity. Examples in the past have included companies fighting over who gets to use the name of their home city of “Detroit”, or when grocer Iceland Foods got so aggressive in its own trademark enforcement that the — checks notes — nation of Iceland had to seek to revoke the company’s EU trademark registration.

While it should be self-evident how antithetical it is to the purpose of trademark law to even approve these kinds of marks, I will say that I didn’t see it coming that a company would at some point attempt to play trademark bully over the “planet.”

Powerful French entertainment company Canal Plus trademarked the term in France, but environmental groups are pushing back, saying they should be allowed to use the word “planet” to promote their projects to save it. Multiple cases are under examination by France’s intellectual property regulator INPI, including one coming to a head this week.

Canal Plus argues that the groups’ use of the terms “planete” in French, or “planet” in English, for marketing purposes violates its trademarks, registered to protect its Planete TV channels that showcase nature documentaries.

That this dispute is even a thing raises questions. Why in the world (heh) would any trademark office approve a mark solely on the word "planet"? Such a registration violates all kinds of rules and norms, explicit and otherwise. Geographic terms are supposed to have a high bar for trademark approval. Single-word marks that aren’t inherently creative typically do as well. And, when trademarks for either are approved, they are typically granted in very narrow terms. That EUIPO somehow managed to approve a trademark that caused a film company to think it can sue or bully NGOs focused on environmental issues for using the word for the rock we all live on together should ring as absurd to anyone who finds out about it.

Certainly it did to those on the other end of Canal Plus’ bullying, as they seemed to think the whole thing was either a joke or an attempt at fraud.

The head of environmental group Planete Amazone, Gert-Peter Bruch, thought it was a hoax when he first received a mail from Canal Plus claiming ownership of the planet brand.

[…]

But, as we often note, trademark bullying tends to work. Bruch’s organization is hammering out a deal with Canal Plus in an effort to keep using the word “planet.” That shouldn’t have to happen, but it is. Other groups are waiting on a ruling from the French National Intellectual Property Institute in the hopes that someone somewhere will be sane about all of this.

Source: French Film Company Somehow Trademarks ‘Planet’, Goes After Environmental NGOs For Using The Word | Techdirt