UN Cybercrime Treaty does not define cybercrime, allows any definition, and forces all signatories to secretly surveil their own populations on request by any other signatory (think totalitarian states spying on people in democracies with no recourse).

[…] EFF colleague, Katitza Rodriguez, about the Cybercrime Treaty, which is about to pass, and which is, to put it mildly, terrifying:

https://www.eff.org/deeplinks/2024/07/un-cybercrime-draft-convention-dangerously-expands-state-surveillance-powers

Look, cybercrime is a real thing, from pig butchering to ransomware, and there are real, global harms that can be attributed to it. Cybercrime is transnational, making it hard for cops in any one jurisdiction to handle it. So there’s a reason to think about formal international standards for fighting cybercrime.

But that’s not what’s in the Cybercrime Treaty.

Here’s a quick sketch of the significant defects in the Cybercrime Treaty.

The treaty has an extremely loose definition of cybercrime, and that looseness is deliberate. In authoritarian states like China and Russia (whose delegations are the driving force behind this treaty), “cybercrime” has come to mean “anything the government disfavors, if you do it with a computer.” “Cybercrime” can mean online criticism of the government, or professions of religious belief, or material supporting LGBTQ rights.

Nations that sign up to the Cybercrime Treaty will be obliged to help other nations fight “cybercrime” – however those nations define it. They’ll be required to provide surveillance data – for example, by forcing online services within their borders to cough up their users’ private data, or even to pressure employees to install back-doors in their systems for ongoing monitoring.

These obligations to aid in surveillance are mandatory, but much of the Cybercrime Treaty is optional. What’s optional? The human rights safeguards. Member states “should” or “may” create standards for legality, necessity, proportionality, non-discrimination, and legitimate purpose. But even if they do, the treaty can oblige them to assist in surveillance orders that originate with other states that decided not to create these standards.

When that happens, the citizens of the affected states may never find out about it. There are eight articles in the treaty that establish obligations for indefinite secrecy regarding surveillance undertaken on behalf of other signatories. That means that your government may be asked to spy on you and the people you love, they may order employees of tech companies to backdoor your account and devices, and that fact will remain secret forever. Forget challenging these sneak-and-peek orders in court – you won’t even know about them:

https://www.eff.org/deeplinks/2024/06/un-cybercrime-draft-convention-blank-check-unchecked-surveillance-abuses

Now here’s the kicker: while this treaty creates broad powers to fight things governments dislike, simply by branding them “cybercrime,” it actually undermines the fight against cybercrime itself. Most cybercrime involves exploiting security defects in devices and services – think of ransomware attacks – and the Cybercrime Treaty endangers the security researchers who point out these defects, creating grave criminal liability for the very people we depend on to warn us when tech vendors have put us at risk.

[…]

When it comes to warnings about the defects in their own products, corporations have an irreconcilable conflict of interest. Time and again, we’ve seen corporations rationalize their way into suppressing or ignoring bug reports. Sometimes, they simply delay the warning until they’ve concluded a merger or secured a board vote on executive compensation.

Sometimes, they decide that a bug is really a feature.

Note: Responsible disclosure is something people should really “get” by now.

[…]

The idea that users are safer when bugs are kept secret is called “security through obscurity,” and no one believes in it – except corporate executives.

[…]

The spy agencies have an official doctrine defending this reckless practice: they call it “NOBUS,” which stands for “No One But Us.” As in: “No one but us is smart enough to find these bugs, so we can keep them secret and use them to attack our adversaries, without worrying about those adversaries using them to attack the people we are sworn to protect.”

NOBUS is empirically wrong.

[…]

The leak of these cyberweapons didn’t just provide raw material for the world’s cybercriminals, it also provided data for researchers. A study of CIA and NSA NOBUS defects found that there was a one-in-five chance of a bug that had been hoarded by a spy agency being independently discovered by a criminal, weaponized, and released into the wild.
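That one-in-five figure compounds quickly. As a back-of-the-envelope sketch (my own arithmetic, not from the study, and assuming each bug’s rediscovery is independent), the odds that a stockpile of hoarded bugs stays exclusive shrink fast as the stockpile grows:

```python
# Toy illustration: if each hoarded bug has an independent ~20% chance of
# being rediscovered and weaponized by a criminal, the chance that at least
# one bug in a stockpile leaks into the wild grows rapidly with its size.

def p_any_rediscovered(n_bugs: int, p_each: float = 0.2) -> float:
    """Probability that at least one of n_bugs hoarded bugs is rediscovered."""
    return 1 - (1 - p_each) ** n_bugs

for n in (1, 5, 10):
    print(n, round(p_any_rediscovered(n), 3))
# 1 bug: 0.2; 5 bugs: ~0.672; 10 bugs: ~0.893
```

Under these assumptions, hoarding even ten bugs makes it close to a nine-in-ten bet that criminals independently find at least one of them.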

[…]

A Cybercrime Treaty is a good idea, and even this Cybercrime Treaty could be salvaged. The member-states have it in their power to accept proposed revisions that would protect human rights and security researchers, narrow the definition of “cybercrime,” and mandate transparency. They could establish member states’ powers to refuse illegitimate requests from other countries:

https://www.eff.org/press/releases/media-briefing-eff-partners-warn-un-member-states-are-poised-approve-dangerou

 

Source: Pluralistic: Holy CRAP the UN Cybercrime Treaty is a nightmare (23 Jul 2024) – Pluralistic: Daily links from Cory Doctorow

Google isn’t killing third-party cookies in Chrome after all, in a move that surprises absolutely no one.

Google won’t kill third-party cookies in Chrome after all, the company said on Monday. Instead, it will introduce a new experience in the browser that will allow users to make informed choices about their web browsing preferences, Google announced in a blog post. Killing cookies, Google said, would adversely impact online publishers and advertisers. This announcement marks a significant shift from Google’s previous plans to phase out third-party cookies by early 2025.

[…]

Google will now focus on giving users more control over their browsing data, Chavez wrote. This includes additional privacy controls like IP Protection in Chrome’s Incognito mode and ongoing improvements to Privacy Sandbox APIs.

Google’s decision provides a reprieve for advertisers and publishers who rely on cookies to target ads and measure performance. Over the past few years, the company’s plans to eliminate third-party cookies have been riding on a rollercoaster of delays and regulatory hurdles. Initially, Google aimed to phase out these cookies by the end of 2022, but the deadline was pushed to late 2024 and then to early 2025 due to various challenges and feedback from stakeholders, including advertisers, publishers, and regulatory bodies like the UK’s Competition and Markets Authority (CMA).

In January 2024, Google began rolling out a new feature called Tracking Protection, which restricts third-party cookies by default for 1% of Chrome users globally. This move was perceived as the first step towards killing cookies completely. However, concerns and criticism about the readiness and effectiveness of Google’s Privacy Sandbox, a collection of APIs designed to replace third-party cookies, prompted further delays.

The CMA and other regulatory bodies have expressed concerns about Google’s Privacy Sandbox, fearing it might limit competition and give Google an unfair advantage in the digital advertising market. These concerns have led to extended review periods and additional scrutiny, complicating Google’s timeline for phasing out third-party cookies. Shortly after Google’s Monday announcement, the CMA said that it was “considering the impact” of Google’s change of direction.

Source: Google isn’t killing third-party cookies in Chrome after all

Firefox’s New ‘Privacy’ Feature Actually Gives Your Data to Advertisers – How and Why to Disable Firefox’s ‘Privacy-Preserving’ Ad Measurements

Firefox finds itself in a tricky position at times, because it wants to be a privacy-friendly browser, but most of its funding comes from Google, whose entire business is advertising. With Firefox 128, the browser has introduced ‘privacy-preserving ad measurement,’ which is enabled by default. Despite the name, the feature’s actual implications have users upset.

What ‘privacy-preserving ad measurement’ means

In a blog post, Firefox’s parent company Mozilla has explained that this new feature is an experiment designed to shape a web standard for advertisers, one that relies less on cookies but still tracks you in some way. Mozilla says privacy-preserving ad measurement is only being used by a handful of sites at the moment, in order to tell if their ads were successful or not.

[…]

With privacy-preserving ad measurement, sites will be able to ask Firefox if people clicked on an ad, and if they ended up doing something the ad wanted them to (such as buying a product). Firefox doesn’t give this data directly to advertisers, but encrypts it, aggregates it, and submits it anonymously. This means that your browsing activity and other data about you is hidden from the advertiser, but they can see if their campaign delivered results or not. It’s a similar feature to those in Chrome’s Privacy Sandbox, although Google itself has run into regulatory issues implementing them.
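The core idea of aggregate measurement can be sketched in a few lines. This is a toy illustration of the concept only, not Mozilla’s actual protocol (which splits encrypted reports cryptographically between servers); all names here are mine:

```python
# Toy sketch of aggregate ad measurement: browsers submit per-user
# conversion events, an aggregator sums them, and the advertiser sees
# only campaign totals -- never which individual user converted.

from collections import Counter

def aggregate(reports):
    """Collapse individual (user, campaign, converted) reports into totals."""
    totals = Counter()
    for _user, campaign, converted in reports:
        totals[campaign] += int(converted)
    return dict(totals)  # the advertiser receives only this dict

reports = [
    ("alice", "shoes", True),
    ("bob",   "shoes", False),
    ("carol", "shoes", True),
]
print(aggregate(reports))  # {'shoes': 2}
```

The privacy argument is that the advertiser learns “2 conversions” but not who converted; the criticism in the article is that this data still flows without the user ever being asked.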

Why you should disable this feature

Even though Mozilla’s intentions appear to be genuine, this feature should never have been enabled by default: no matter its label, it still technically gives advertisers your data. When advertisers started tracking people online, there were no privacy protections, laws, or standards to follow, and the industry chose to track all the data it could lay its hands on. No one ever asked users if they wanted to be tracked, or if they wanted to give advertisers access to their location, browser data, or personal preferences. If I’ve learned one thing from the way the online ad industry evolved, it’s that people should have a choice in whether their data is tracked. Even if it seeks to replace even more invasive systems, Firefox should have offered people the choice to opt into ad measurement, instead of enabling it silently.

[…]

To disable privacy-preserving ad measurement in Firefox 128, click the three-lines icon in the top-right corner in the browser. Then, go to Settings > Privacy & Security and scroll down to the Website Advertising Preferences section. There, disable Allow websites to perform privacy-preserving ad measurement.

Source: How and Why to Disable Firefox’s ‘Privacy-Preserving’ Ad Measurements | Lifehacker

Only 5 years too late: British regulators to examine Big Tech’s digital wallets – and where is the EU?

British regulators said on Monday they were looking into the soaring use of digital wallets offered by Big Tech firms, including whether there are any competition, consumer protection or market integrity concerns.
The Financial Conduct Authority and the Payment Systems Regulator are seeking views on the benefits and risks, and will assess the impact digital wallets, such as Apple Pay, Google Pay and PayPal, have on competition and choice of payment options at checkout, among other things.
Digital wallets are now likely used by more than half of UK adults and have become “an increasingly important touchpoint” between Big Tech companies and UK consumers, they said in a statement.
“Digital wallets are steadily becoming a go-to payment type and while this presents exciting opportunities, there might be risks too,” said David Geale, the PSR’s managing director.
Nikhil Rathi, the FCA’s chief executive, said the growth of digital wallets represented a “seismic shift” in how people pay and regulators wanted to maximise the opportunities while “protecting against any risks this technology may present.”
Regulators and lawmakers in Europe and the United States have been examining the growing role of Big Tech in financial services.
The U.S. consumer watchdog last year proposed regulating payments and smartphone wallets, prompting criticism from the industry.
The British regulators said their review of digital wallets built on their previous work on contactless mobile payments and on the role of Big Tech firms in financial services.
After considering all feedback, the regulators will provide an update on Big Tech and digital wallets by the first quarter of 2025.

Source: British regulators to examine Big Tech’s digital wallets | Reuters

Considering that people using the services generally don’t understand that they are giving their payment history to the big tech company that runs it – and is not a bank – this is way way way too late.

Dutch DPA gets off its ass: fine of 600,000 euros for tracking cookies on Kruidvat.nl – detected in 2020

The Dutch Data Protection Authority (AP) has imposed a fine of 600,000 euros on the company behind the Kruidvat drugstore. Kruidvat.nl followed consumers with tracking cookies, without their knowledge or permission. AS Watson collected and used sensitive personal data from millions of website visitors against the rules.

The company behind Kruidvat collected data from website visitors and was able to create personal profiles. In addition to visitors’ location data, this included which pages they visited, which products they added to the shopping cart and purchased and which recommendations they clicked on.

That is very sensitive information, the AP points out, due to the specific nature of drugstore products, such as pregnancy tests, contraceptives, or medication for all kinds of ailments. That sensitive information, linked to the location of the unique visitor (which may be traceable via the IP address), can sketch a very specific and invasive profile of the people who visit Kruidvat.nl.

Kruidvat.nl should have asked permission to place tracking cookies on visitors’ computers. The GDPR privacy law sets a number of requirements for valid consent. These requirements are that consent must be given freely, for specific processing of personal data, on the basis of sufficient information and that there must be no doubt that consent has been given.

In the cookie banner on Kruidvat.nl, the boxes to agree to the installation of tracking software were checked by default. That’s not allowed. Visitors who still wanted to refuse the cookies had to go through many steps to achieve this. The AP has found that personal data of website visitors to Kruidvat.nl have been processed unlawfully.

At the end of 2019, the AP started an investigation into various websites, including Kruidvat.nl. The AP tested whether these websites met the requirements for placing (tracking) cookies. The AP checked whether permission for tracking cookies was asked from website visitors and, if so, how exactly this happened.

Kruidvat.nl was found non-compliant in April 2020, after which the AP sent the company a letter. Later in 2020, the AP found that Kruidvat.nl was still not compliant, and began investigating the website further. The violation ended in October 2020.

There is increasing social irritation about cookies and cookie notifications, ranging from annoying and misleading banners to concerns about the secret tracking of internet users. In 2024, the AP will check more often whether websites correctly request permission for tracking cookies or other tracking software.

Source: Boete van 600.000 euro voor tracking cookies op Kruidvat.nl – Emerce

WTFBBQ?! Firefox Starts collecting personal ad preferences

In a world where so much of our lives depend on the use of online services, the web browser used to access those services becomes of crucial importance. It becomes a question of whether we trust the huge corporate interests which control this software with such access to our daily lives, and it is vital that the browser world remains a playing field with many players in the game.

The mantle has traditionally fallen upon Mozilla’s Firefox browser to represent freedom from corporate ownership, but over the last couple of years even they have edged away from their open source ethos and morphed into an advertising company that happens to have a browser. We’re asking you: can we still trust Mozilla’s Firefox, when the latest version turns on ad measurement by default?

Such has been the dominance of Google’s Chromium in the browser world, that it becomes difficult to find alternatives which aren’t based on it. We can see the attraction for developers, instead of pursuing the extremely hard task of developing a new browser engine, just use one off-the-shelf upon which someone else has already done the work. As a result, once you have discounted browsers such as the venerable Netsurf or Dillo which are cool as heck but relatively useless for modern websites, the choices quickly descend into the esoteric. There are Ladybird and Servo which are both promising but still too rough around the edges for everyday use, so what’s left? Probably LibreWolf represents the best option, a version of Firefox with a focus on privacy and security.

[…]

Source: Ask Hackaday: Has Firefox Finally Gone Too Far? | Hackaday

Many comments in the thread in the source. Definitely worth looking at.

Apple settles EU case by opening its iPhone payment system to rivals

The EU on Thursday accepted Apple’s pledge to open its “tap to pay” iPhone payment system to rivals as a way to resolve an antitrust case and head off a potentially hefty fine.

The European Commission, the EU’s executive arm and top antitrust enforcer, said it approved the commitments that Apple offered earlier this year and will make them legally binding.

Regulators had accused Apple in 2022 of abusing its dominant position by limiting access to its mobile payment technology.

Apple responded by proposing in January to allow third-party mobile wallet and payment service providers access to the contactless payment function in its iOS operating system. After Apple tweaked its proposals following testing and feedback, the commission said those “final commitments” would address its competition concerns.

“Today’s commitments end our Apple Pay investigation,” Margrethe Vestager, the commission’s executive vice-president for competition policy, told a press briefing in Brussels. “The commitments bring important changes to how Apple operates in Europe to the benefit of competitors and customers.”

Apple said in a prepared statement that it is “providing developers in the European Economic Area with an option to enable NFC [near-field communication] contactless payments and contactless transactions” for uses like car keys, corporate badges, hotel keys and concert tickets.

[…]

The EU deal promises more choice for Europeans. Vestager said iPhone users will be able to set a default wallet of their choice while mobile wallet developers will be able to use important iPhone verification functions like Face ID.

[…]

Analysts said there would be big financial incentives for companies to use their own wallets rather than letting Apple act as the middleman, resulting in savings that could trickle down to consumers. Apple charges banks 0.15% for each credit card transaction that goes through Apple Pay, according to the justice department’s lawsuit.
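The cited 0.15% figure makes the banks’ incentive easy to quantify. A rough sketch of the arithmetic (the fee rate is from the DOJ lawsuit as quoted above; the volume figure is a made-up example):

```python
# Rough arithmetic: what a bank pays Apple per year for credit-card
# volume routed through Apple Pay, at the 0.15% rate cited in the
# justice department's lawsuit.

APPLE_PAY_FEE = 0.0015  # 0.15% per credit card transaction

def annual_fee(card_volume_eur: float) -> float:
    """Fees paid to Apple on a given annual card volume, in euros."""
    return card_volume_eur * APPLE_PAY_FEE

# Hypothetical: a bank with 1 billion EUR of annual Apple Pay volume
print(round(annual_fee(1_000_000_000), 2))  # 1500000.0
```

At that scale, routing payments through a bank’s own wallet instead would free up around 1.5 million euros a year, some of which could be passed to consumers as cashback or rewards, as Benton suggests below.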

Apple must open up its payment system in the EU’s 27 countries plus Iceland, Norway and Liechtenstein by 25 July.

“As of this date, developers will be able to offer a mobile wallet on the iPhone with the same ‘tap-and-go’ experience that so far has been reserved for Apple Pay,” Vestager said. The changes will remain in force for a decade and will be monitored by a trustee.

Breaches of EU competition law can draw fines worth up to 10% of a company’s annual global revenue, which in Apple’s case could have amounted to tens of billions of euros.

“The main advantage to the issuer bank of supporting an alternative to Apple Pay via iPhone is the reduction in fees incurred, which can be substantial,” said Philip Benton, a principal analyst at research and advisory firm Omdia. To encourage iPhone users to switch away from Apple Pay to another mobile wallet, “the fee reduction needs to be partially passed onto the consumer” through benefits like cashback or loyalty rewards, he said.

Banks and consumers could also benefit in other ways.

If companies use their own apps for tap-and-go payments, they would get “full visibility” of their customers’ transactions, said Ben Wood, chief analyst at CCS Insight. That data would allow them to “build brand loyalty and trust and offer more personalised services, rewards and promotions directly to the user”, he said.

Source: Apple settles EU case by opening its iPhone payment system to rivals | Apple | The Guardian

Note: Currently, Apple has this full visibility of your transactions. Are you sure you want to trust a company like that with your financial data?

I wonder how childishly Apple will handle this, considering how it has gone about “opening up” its app store and allowing home screen apps (not really at all)

Why all Chromium browsers tell Google about your CPU, GPU usage? A whitewashing bullshit explanation.

Running a Chromium-based browser, such as Google Chrome or Microsoft Edge? The chances are good it’s quietly telling Google all about your CPU and GPU usage when you visit one of the search giant’s websites.

The feature is, from what we can tell, for performance monitoring and not really for tracking – Google knows who you are and what you’re doing anyway when you’re logged into and using its sites – but it does raise some antitrust concerns in light of Europe’s competition-fostering Digital Markets Act (DMA).

When visiting a *.google.com domain, the Google site can use the API to query the real-time CPU, GPU, and memory usage of your browser, as well as info about the processor you’re using, so that whatever service is being provided – such as video-conferencing with Google Meet – could, for instance, be optimized and tweaked so that it doesn’t overly tax your computer. The functionality is implemented as an API provided by an extension baked into Chromium – the browser brains primarily developed by Google and used in Chrome, Edge, Opera, Brave, and others.

Non-Chromium-based browsers – such as Mozilla’s Firefox – don’t have that extension, which puts them at a potential disadvantage. Without the API, they may offer a worse experience on Google sites than what’s possible on the same hardware with Google’s own browser, because they can’t provide that live performance info.
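The extension itself is Chromium-internal JavaScript, but the kind of live performance data it exposes is easy to picture. As a rough Python analogy (this is my own sketch, not the actual hangout_services API; the function name is mine), the stats in question look like this:

```python
# Analogy only: the sort of real-time system snapshot the Chromium
# extension can hand to *.google.com pages -- CPU count and current load.
# Uses only the standard library; getloadavg() is Unix-only.

import os

def system_snapshot() -> dict:
    """Collect basic live performance info about the host machine."""
    snap = {"cpu_count": os.cpu_count()}
    if hasattr(os, "getloadavg"):  # not available on Windows
        snap["load_avg_1m"] = os.getloadavg()[0]
    return snap

print(system_snapshot())
```

A site like Google Meet can use figures like these to lower video quality when the machine is struggling, which is the stated purpose; the competition problem is that only Google’s sites get to ask.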

There is, however, nothing technically stopping Mozilla or other browser-engine makers from implementing a similar extension themselves, if they so chose.

Crucially though, websites that compete against Google can’t access the Chromium API. This is where technical solutions start to look potentially iffy in the eyes of Europe’s DMA.

Netherlands-based developer Luca Casonato highlighted the extension’s existence this week on social media, and his findings went viral – with millions of views. We understand at least some people have known about the code for a while now – indeed, it’s all open source and can be found here in the preinstalled extension hangout_services.

That name should give you a clue to its origin. It was developed last decade to provide browser-side functionality to Google Hangouts – a product that got split into today’s Google Meet and Chat. Part of that functionality is logging for Google, upon request, stats about your browser’s use of your machine’s compute resources when visiting a *.google.com domain – such as meet.google.com.

Casonato noted that the extension can’t be disabled in Chrome, at least, and it doesn’t show up in the extension panel. He observed it’s also included in Microsoft Edge and Brave, both of which are Chromium based. We reached out to Casonato for more of his thoughts on this – though given the time differences between him in Europe and your humble vulture in the US, we didn’t immediately hear back.

Explanation

If you’ve read this far there’s probably an obvious question on your mind: What’s to say this API is malicious? We’re not saying that, and neither is Casonato. Google isn’t saying that either.

“Today, we primarily use this extension for two things: To improve the user experience by optimizing configurations for video and audio performance based on system capabilities [and] provide crash and performance issue reporting data to help Google services detect, debug, and mitigate user issues,” a Google spokesperson told us on Thursday.

“Both are important for the user experience and in both cases we follow robust data handling practices designed to safeguard user privacy,” the spokesperson added.

As we understand it, Google Meet today uses the old Hangouts extension to, for one thing, vary the quality of the video stream if the current resolution is proving too much for your PC. Other Google sites are welcome to use the thing, too.

That all said, the extension’s existence could be harmful to competition as far as the EU is concerned – and that seems to be why Casonato pointed it out this week.

Source: Why Chromium tells Google sites about your CPU, GPU usage • The Register

A lovely explanation, but the fact remains that Chromium is sending personal information to a central company – Google – without informing users at all. This blanket explanation could be used to whitewash any information sent through Chromium: the contents of your memory? Improving user experience. The position of your mouse on websites? Improving user experience. It just does not wash.

Data breach exposes millions of mSpy spyware customer support tickets

Unknown attackers stole millions of customer support tickets, including personal information, emails to support, and attachments, including personal documents, from mSpy in May 2024. While hacks of spyware purveyors are becoming increasingly common, they remain notable because of the highly sensitive personal information often included in the data, in this case about the customers who use the service.

The hack encompassed customer service records dating back to 2014, which were stolen from the spyware maker’s Zendesk-powered customer support system.

mSpy is a phone surveillance app that promotes itself as a way to track children or monitor employees. Like most spyware, it is also widely used to monitor people without their consent. These kinds of apps are also known as “stalkerware” because people in romantic relationships often use them to surveil their partner without consent or permission.

The mSpy app allows whoever planted the spyware, typically someone who previously had physical access to a victim’s phone, to remotely view the phone’s contents in real-time.

As is common with phone spyware, mSpy’s customer records include emails from people seeking help to surreptitiously track the phones of their partners, relatives, or children, according to TechCrunch’s review of the data, which we independently obtained. Some of those emails and messages include requests for customer support from several senior-ranking U.S. military personnel, a serving U.S. federal appeals court judge, a U.S. government department’s watchdog, and an Arkansas county sheriff’s office seeking a free license to trial the app.

Even after amassing several million customer service tickets, the leaked Zendesk data is thought to represent only the portion of mSpy’s overall customer base who reached out for customer support. The number of mSpy customers is likely to be far higher.

Yet more than a month after the breach, mSpy’s owners, a Ukraine-based company called Brainstack, have not acknowledged or publicly disclosed the breach.

Troy Hunt, who runs data breach notification site Have I Been Pwned, obtained a copy of the full leaked dataset, adding about 2.4 million unique email addresses of mSpy customers to his site’s catalog of past data breaches.

[…]

Some of the email addresses belong to unwitting victims who were targeted by an mSpy customer. The data also shows that some journalists contacted the company for comment following the company’s last known breach in 2018. And, on several occasions, U.S. law enforcement agents filed or sought to file subpoenas and legal demands with mSpy. In one case following a brief email exchange, an mSpy representative provided the billing and address information about an mSpy customer — an alleged criminal suspect in a kidnapping and homicide case — to an FBI agent.

Each ticket in the dataset contained an array of information about the people contacting mSpy. In many cases, the data also included their approximate location based on the IP address of the sender’s device.

[…]

The emails in the leaked Zendesk data show that mSpy and its operators are acutely aware of what customers use the spyware for, including monitoring of phones without the person’s knowledge. Some of the requests cite customers asking how to remove mSpy from their partner’s phone after their spouse found out. The dataset also raises questions about the use of mSpy by U.S. government officials and agencies, police departments, and the judiciary, as it is unclear if any use of the spyware followed a legal process.

[…]

This is the third known mSpy data breach since the company began in around 2010. mSpy is one of the longest-running phone spyware operations, which is in part how it accumulated so many customers.

[…]

the data breach of mSpy’s Zendesk data exposed its parent company as a Ukrainian tech company called Brainstack.

[…]

Source: Data breach exposes millions of mSpy spyware customers | TechCrunch

Why You Should Consider Proton Docs Over Google

Proton has officially launched Docs in Proton Drive, a new web-based productivity app that gives you access to a fully featured text editor with shared editing capabilities and full end-to-end encryption. It’s meant to take on Google Docs, one of the leading online word processors in the world, and to make Proton’s storage service more convenient to use. But how exactly does Proton’s document editor compare to Google’s? Here’s what you need to know.

Docs in Proton Drive has a familiar face

On the surface, Docs in Proton Drive—or Proton Docs as some folks have begun calling it for simplicity’s sake—looks just like Google Docs. And that’s to be expected. Text editors don’t have much reason to stray from the same basic “white page with a bunch of toolbars” look, and they all offer the same types of tools like headlines, bullet points, font changes, highlighting, etc.

[…]

The difference isn’t in the app itself

[…]

Proton has built its entire business around the motto of “privacy first,” and that extends to the company’s latest software offerings, too. Docs in Proton Drive includes complete end-to-end encryption—down to your cursor movements—which means nobody, not even Proton, can track what you’re doing in your documents. They’re locked down before they even reach Proton’s servers.

This makes the product very enticing for businesses that might want to keep their work as private as possible while also still having the same functionality as Google Docs—because Proton isn’t missing any of the functionality that Google Docs offers, aside from the way that Google Docs integrates with the rest of the Google Suite of products.

That’s not to say that Google isn’t secure. Google does utilize its own level of encryption when storing your data in the cloud. However, it isn’t completely end-to-end encrypted, so Google has open access to your data. Google says it only trains its generative AI on “publicly accessible” information, and while that probably won’t affect most people, it is a pain point for many, especially as the company does make exceptions for features like Smart Compose.

That worry is why products with end-to-end encryption have become such a selling point in recent years – especially as cybersecurity risks continue to rise, meaning you have to trust the companies who store your data even more. Proton’s advantage is that it promises to NEVER use your content for any purpose – and those aren’t empty words. Because the company doesn’t have access to your content, it couldn’t use it even if it wanted to.
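That last point is worth unpacking: “couldn’t use it even if it wanted to” follows from the server only ever holding ciphertext. A toy demonstration of the principle (a one-time pad for illustration only, NOT Proton’s real cryptography; names here are mine):

```python
# Toy demonstration of the end-to-end principle: the key never leaves
# the client, so the server stores only ciphertext it cannot read.

import secrets

def encrypt(plaintext: bytes, key: bytes) -> bytes:
    """XOR one-time pad: key must be as long as the plaintext."""
    assert len(key) == len(plaintext)
    return bytes(p ^ k for p, k in zip(plaintext, key))

decrypt = encrypt  # XOR is its own inverse

doc = b"quarterly numbers"
key = secrets.token_bytes(len(doc))   # generated and kept on the client
stored_on_server = encrypt(doc, key)  # the server only ever sees this

assert decrypt(stored_on_server, key) == doc  # only the key holder can read it
```

Real systems use authenticated, key-managed schemes rather than one-time pads, but the trust argument is the same: no plaintext on the server means nothing to mine, train on, or hand over.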

[…]

Source: Why You Should Consider Proton Docs Over Google | Lifehacker

Proton Docs is a privacy-focused answer to Google Docs and Microsoft Word

Proton Docs looks a lot like Google Docs: white pages, formatting toolbar at the top, live indicators showing who’s in the doc with their name attached to a cursor, the whole deal. That’s not especially surprising, for a couple of reasons. First, Google Docs is hugely popular, and there are only so many ways to style a document editor anyway. Second, Proton Docs exists in large part to be all the things that are great about Google Docs — just without Google in the mix.

Docs is launching today inside of Proton Drive, as the latest app in Proton’s privacy-focused suite of work tools. The company that started as an email provider now also offers a calendar, a file storage system, a password manager, and more. Adding Docs to the ecosystem makes sense as Proton tries to compete with Microsoft Office and Google Workspace, and it seemed clearly on the way after Proton acquired Standard Notes in April. Standard Notes isn’t going away, though, Proton PR manager Will Moore tells me—it’s just that Docs is borrowing some features.

The first version of Proton Docs seems to have most of what you’d expect in a document editor: rich text options, real-time collaborative editing, and multimedia support. (If Proton can handle image embeds better than Google, it might have a hit on its hands just for that.) It’s web-only and desktop-optimized for now, though Moore tells me it’ll eventually come to other platforms. “Everything that Google’s got is on our roadmap,” he says.

[Screenshot: multiple editors in Proton Docs. “Imagine Google Docs… there, that’s it. You know what Proton Docs looks like.” Image: Proton]

Since this is a Proton product, security is everything: the company says every document, keystroke, and even cursor movement is end-to-end encrypted in real time. Proton has long promised to never sell or otherwise use your data.

[…]

Source: Proton Docs is a privacy-focused answer to Google Docs and Microsoft Word – The Verge

Spain introduces porn passport – really wants to know what you are watching and especially how often erm… no… *cough* to stop kids from watching smut

The Spanish government has a plan to prevent kids from watching porn online: Meet the porn passport.

Officially (and drily) called the Digital Wallet Beta (Cartera Digital Beta), the app Madrid unveiled on Monday would allow internet platforms to check whether a prospective smut-watcher is over 18. Porn-viewers will be asked to use the app to verify their age. Once verified, they’ll receive 30 generated “porn credits” with a one-month validity granting them access to adult content. Enthusiasts will be able to request extra credits.

While the tool has been criticized for its complexity, the government says the credit-based model is more privacy-friendly, ensuring that users’ online activities are not easily traceable.
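As a rough illustration of how a credit scheme can avoid linking viewing activity to identity, here is a toy sketch. It is purely hypothetical, not Madrid’s actual design (which isn’t detailed in the excerpt); a real system would need blind signatures or similar so that even the issuer cannot link spent tokens back to users:

```python
import secrets

class AgeVerifier:
    """Issues anonymous, single-use tokens after an (out-of-band) age check."""

    def __init__(self):
        self.valid_tokens = set()

    def issue_credits(self, n: int) -> list[str]:
        tokens = [secrets.token_hex(16) for _ in range(n)]
        # Only the tokens themselves are stored; no link to the user's
        # identity is kept alongside them.
        self.valid_tokens.update(tokens)
        return tokens

    def redeem(self, token: str) -> bool:
        # A platform learns only "this token is valid", not who holds it.
        if token in self.valid_tokens:
            self.valid_tokens.remove(token)  # single use
            return True
        return False

issuer = AgeVerifier()
credits = issuer.issue_credits(30)  # the reported monthly allotment
assert issuer.redeem(credits[0]) is True
assert issuer.redeem(credits[0]) is False  # already spent
```

Even in this idealized version, the issuer sees how many credits each person requests and how often—which is exactly the metadata the headline is worried about.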

The system will be available by the end of the summer. It will be voluntary, as online platforms can rely on other age-verification methods to screen out inappropriate viewers. It heralds an EU law going into force in October 2027, which will require websites to stop minors from accessing porn.

Eventually, Madrid’s porn passport is likely to be replaced by the EU’s very own digital identity system (eIDAS2) — a so-called wallet app allowing people to access a smorgasbord of public and private services across the whole bloc.

“We are acting in advance and we are asking platforms to do so too, as what is at stake requires it,” José Luis Escrivá, Spain’s digital secretary, told Spanish newspaper El País.

Source: Spain introduces porn passport to stop kids from watching smut – POLITICO

Every time they mention kids, have a really good look at how much more they are spying on you and controlling your actions.

EU’s ‘Going Dark’ Expert Group Publishes 42-Point Surveillance Plan For Access To All Devices And Data At All Times

Techdirt has been covering the disgraceful attempts by the EU to break end-to-end encryption — supposedly in order to “protect the children” — for two years now. An important vote that could have seen EU nations back the proposal was due to take place recently. The vote was cancelled — not because politicians finally came to their senses, but the opposite. Those backing the new law were worried the latest draft might not be approved, and so removed it from the agenda, to allow a little more backroom persuasion to be applied to holdouts.

Although this “chat control” law has been the main focus of the EU’s push for more surveillance of innocent citizens, it is by no means the end of it. As the German digital rights site Netzpolitik reports, work is already underway on further measures, this time to address the non-existent “going dark” threat to law enforcement:

The group of high-level experts had been meeting since last year to tackle the so-called „going dark“ problem. The High-Level Group set up by the EU was characterized by a bias right from the start: The committee is primarily made up of representatives of security authorities and therefore represents their perspective on the issue.

Given the background and bias of the expert group, it’s no surprise that its report, “Recommendations from the High-Level Group on Access to Data for Effective Law Enforcement”, is a wish-list of just about every surveillance method. The Pirate Party Member of the European Parliament Patrick Breyer has a good summary of what the “going dark” group wants:

according to the 42-point surveillance plan, manufacturers are to be legally obliged to make digital devices such as smartphones, smart homes, IoT devices, and cars monitorable at all times (“access by design”). Messenger services that were previously securely encrypted are to be forced to allow for interception. Data retention, which was overturned by the EU Court of Justice, is to be reenacted and extended to OTT internet communications services such as messenger services. “At the very least”, IP connection data retention is to be required to be able to track all internet activities. The secure encryption of metadata and subscriber data is to be prohibited. Where requested by the police, GPS location tracking should be activated by service providers (“tracking switch”). Uncooperative providers are to be threatened with prison sentences.

It’s an astonishing list, not least for the re-appearance of data retention, which was thrown out by the EU’s highest court in 2014. It’s a useful reminder that even when bad laws are overturned, constant vigilance is required to ensure that they don’t come back at a later date.

Source: EU’s ‘Going Dark’ Expert Group Publishes 42-Point Surveillance Plan For Access To All Devices And Data At All Times | Techdirt

These people don’t seem to realise that opening this stuff up for law enforcement (who do misuse their powers), also opens it up to criminals.

Windows 11 is now automatically enabling OneDrive folder backup without asking permission

Microsoft has made OneDrive slightly more annoying for Windows 11 users. Quietly and without any announcement, the company changed Windows 11’s initial setup so that it turns on automatic folder backup without asking permission.

Now, those setting up a new Windows computer the way Microsoft wants them to (in other words, connected to the internet and signed into a Microsoft account) will get to their desktops with OneDrive already syncing content from folders like Desktop, Pictures, Documents, Music, and Videos. Depending on how much is stored there, you might end up with a desktop and other folders filled to the brim with shortcuts right after finishing a clean Windows installation.

Automatic folder backup in OneDrive is a very useful feature when used properly and enabled deliberately. However, Microsoft decided that sending a few notification prompts to enable folder backup was not enough, so it just turned the feature on without asking anybody or even letting users know, resulting in a flood of Reddit posts from users wondering what the green checkmarks next to the files and shortcuts on their desktops were.

If you do not want your computer to back up everything on your desktop or other folders, here is how to turn the feature off (you can also set up Windows 11 in offline mode):

  1. Right-click the OneDrive icon in the tray area, click the settings icon and then press Settings.
  2. Go to the “Sync and Backup” tab and click “Manage backup.”
  3. Turn off all the folders you do not want to back up in OneDrive and confirm the changes.
  4. If you have an older OneDrive version with the classic tabbed interface, go to the Backup tab and click Manage Backup > Stop backup > Stop backup.
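For a machine-wide block, Microsoft also documents a group policy, “Prevent users from moving their Windows known folders to OneDrive”, which maps to a registry value. A sketch of applying it from an elevated PowerShell prompt—verify the key name against Microsoft’s current OneDrive policy documentation before relying on it:

```powershell
# Block OneDrive "Known Folder Move" (folder backup) for all users on this PC.
# Corresponds to the "Prevent users from moving their Windows known folders
# to OneDrive" group policy. Requires an elevated (administrator) prompt.
New-Item -Path "HKLM:\SOFTWARE\Policies\Microsoft\OneDrive" -Force | Out-Null
Set-ItemProperty -Path "HKLM:\SOFTWARE\Policies\Microsoft\OneDrive" `
    -Name "KFMBlockOptIn" -Type DWord -Value 1
```

Unlike the per-folder toggles above, this stops OneDrive from ever offering to redirect the known folders in the first place.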

Microsoft is no stranger to shady tricks with its software and operating system. Several months ago, we noticed that OneDrive would not let you close it without you explaining the reason first (Microsoft later reverted that stupid change). A similar thing was also spotted in the Edge browser, with Microsoft asking users why they downloaded Chrome.

As a reminder, you can always just uninstall OneDrive and call it a day.

Source: Windows 11 is now automatically enabling OneDrive folder backup without asking permission – Neowin

Microsoft Account to local account conversion guide erased from official Windows 11 guide

Microsoft has been pushing hard for its users to sign into Windows with a Microsoft Account. The newest Windows 11 installer removed the easy bypass for the requirement that you create an account or log in with an existing one. If you installed Windows 11 with a Microsoft Account and now want to stop sending the company your data, you can still switch to a local account after the fact. Microsoft even had instructions on how to do this on its official support website – or at least it used to…

Microsoft’s ‘Change from a local account to a Microsoft Account’ guide shows users how they can change their Windows 11 PC login credentials to use their Microsoft Account. The company also supplied instructions on how to ‘Change from a Microsoft account to a local account’ on the same page. However, when we checked the page using the Wayback Machine, the instructions for the latter were still present on June 12, 2024, then gone by June 17, 2024. They still haven’t returned.

Converting your Windows 11 PC’s login from a Microsoft Account to a local account is a pretty simple process. All you have to do is go to the Settings app, proceed to Accounts > Your info, and select “Sign in with a local account instead.” Follow the instructions on the screen, and you should be good to go.

[…]

It’s apparent that Microsoft really wants users to sign up and use their services, much like how Google and Apple make you create an account so you can make full use of your Android or iDevice. While Windows 11 still lets you use the OS with a local account, these developments show that Microsoft wants this option to be inaccessible, at least for the average consumer.

Source: Microsoft Account to local account conversion guide erased from official Windows 11 guide — instructions redacted earlier this week | Tom’s Hardware

EFF: New License Plate Reader Vulnerabilities Prove The Tech Itself is a Public Safety Threat

Automated license plate readers “pose risks to public safety,” argues the EFF, “that may outweigh the crimes they are attempting to address in the first place.” When law enforcement uses automated license plate readers (ALPRs) to document the comings and goings of every driver on the road, regardless of a nexus to a crime, it results in gargantuan databases of sensitive information, and few agencies are equipped, staffed, or trained to harden their systems against quickly evolving cybersecurity threats. The Cybersecurity and Infrastructure Security Agency (CISA), a component of the U.S. Department of Homeland Security, released an advisory last week that should be a wake up call to the thousands of local government agencies around the country that use ALPRs to surveil the travel patterns of their residents by scanning their license plates and “fingerprinting” their vehicles. The bulletin outlines seven vulnerabilities in Motorola Solutions’ Vigilant ALPRs, including missing encryption and insufficiently protected credentials…

Unlike location data a person shares with, say, GPS-based navigation app Waze, ALPRs collect and store this information without consent and there is very little a person can do to have this information purged from these systems… Because drivers don’t have control over ALPR data, the onus for protecting the data lies with the police and sheriffs who operate the surveillance and the vendors that provide the technology. It’s a general tenet of cybersecurity that you should not collect and retain more personal data than you are capable of protecting. Perhaps ironically, a Motorola Solutions cybersecurity specialist wrote an article in Police Chief magazine this month that public safety agencies “are often challenged when it comes to recruiting and retaining experienced cybersecurity personnel,” even though “the potential for harm from external factors is substantial.” That partially explains why more than 125 law enforcement agencies reported a data breach or cyberattacks between 2012 and 2020, according to research by former EFF intern Madison Vialpando. The Motorola Solutions article claims that ransomware attacks “targeting U.S. public safety organizations increased by 142 percent” in 2023.

Yet, the temptation to “collect it all” continues to overshadow the responsibility to “protect it all.” What makes the latest CISA disclosure even more outrageous is that it is at least the third time in the last decade that major security vulnerabilities have been found in ALPRs… If there’s one positive thing we can say about the latest Vigilant vulnerability disclosures, it’s that for once a government agency identified and reported the vulnerabilities before they could do damage… The Michigan Cyber Command center found a total of seven vulnerabilities in Vigilant devices, two of which were medium severity and five of which were high severity…

But a data breach isn’t the only way that ALPR data can be leaked or abused. In 2022, an officer in the Kechi (Kansas) Police Department accessed ALPR data shared with his department by the Wichita Police Department to stalk his wife.

The article concludes that public safety agencies should “collect only the data they need for actual criminal investigations.”

“They must never store more data than they adequately protect within their limited resources – or they must keep the public safe from data breaches by not collecting the data at all.”

Source: EFF: New License Plate Reader Vulnerabilities Prove The Tech Itself is a Public Safety Threat

EU delays decision over continuous spying on all your devices *cough* scanning encrypted messages for kiddie porn

European Union officials have delayed talks over proposed legislation that could lead to messaging services having to scan photos and links to detect possible child sexual abuse material (CSAM). Were the proposal to become law, it may require the likes of WhatsApp, Messenger and Signal to scan all images that users upload — which would essentially force them to break encryption.

For the measure to pass, it would need to have the backing of at least 15 of the member states representing at least 65 percent of the bloc’s entire population. However, countries including Germany, Austria, Poland, the Netherlands and the Czech Republic were expected to abstain from the vote or oppose the plan due to cybersecurity and privacy concerns, Politico reports. If EU members come to an agreement on a joint position, they’ll have to hash out a final version of the law with the European Commission and European Parliament.

The legislation was first proposed in 2022 and it could result in messaging services having to scan all images and links with the aim of detecting CSAM and communications between minors and potential offenders. Under the proposal, users would be informed about the link and image scans in services’ terms and conditions. If they refused, they would be blocked from sharing links and images on those platforms. However, as Politico notes, the draft proposal includes an exemption for “accounts used by the State for national security purposes.”

[…]

Patrick Breyer, a digital rights activist who was a member of the previous European Parliament before this month’s elections, has argued that proponents of the so-called “chat control” plan aimed to take advantage of a power vacuum before the next parliament is constituted. Breyer says that the delay of the vote, prompted in part by campaigners, “should be celebrated,” but warned that “surveillance extremists among the EU governments” could again attempt to advance chat control in the coming days.

Other critics and privacy advocates have slammed the proposal. Signal president Meredith Whittaker said in a statement that “mass scanning of private communications fundamentally undermines encryption,” while Edward Snowden described it as a “terrifying mass surveillance measure.”

[…]

The EU is not the only entity to attempt such a move. In 2021, Apple revealed a plan to scan iCloud Photos for known CSAM. However, it scrapped that controversial effort following criticism from the likes of customers, advocacy groups and researchers.

Source: EU delays decision over scanning encrypted messages for CSAM

Watch out very, very carefully as soon as people start taking your freedoms in the name of “protecting children”.

FedEx’s Secretive Police Force Is Helping Cops Build An AI Car Surveillance Network

[…] Forbes has learned the shipping and business services company is using AI tools made by Flock Safety, a $4 billion car surveillance startup, to monitor its distribution and cargo facilities across the United States. As part of the deal, FedEx is providing its Flock surveillance feeds to law enforcement, an arrangement that Flock has with at least four multi-billion dollar private companies. But publicly available documents reveal that some local police departments are also sharing their Flock feeds with FedEx — a rare instance of a private company availing itself of a police surveillance apparatus.

To civil rights activists, such close collaboration has the potential to dramatically expand Flock’s car surveillance network, which already spans 4,000 cities across over 40 states and some 40,000 cameras that track vehicles by license plate, make, model, color and other identifying characteristics, like dents or bumper stickers. Lisa Femia, staff attorney at the Electronic Frontier Foundation, said because private entities aren’t subject to the same transparency laws as police, this sort of arrangement could “[leave] the public in the dark, while at the same time expanding a sort of mass surveillance network.”

[…]

It’s unclear just how widely law enforcement is sharing Flock data with FedEx. According to publicly available lists of data sharing partners, two police departments have granted the FedEx Air Carrier Police Department access to their Flock cameras: Shelby County Sheriff’s Office in Tennessee and Pittsboro Police Department in Indiana.

Shelby County Sheriff’s Office public information officer John Morris confirmed the collaboration. “We share reads from our Flock license plate readers with FedEx in the same manner we share the data with other law enforcement agencies, locally, regionally, and nationally,” he told Forbes via email.

[…]

FedEx is also sharing its Flock camera feeds with other police departments, including the Greenwood Police Department in Indiana, according to Matthew Fillenwarth, assistant chief at the agency. Morris at Shelby County Sheriff’s Office confirmed his department had access to FedEx’s Flock feeds too. Memphis Police Department said it received surveillance camera feeds from FedEx through its Connect Memphis system

[…]

Flock, which was founded in 2017, has raised more than $482 million in venture capital investment from the likes of Andreessen Horowitz, helping it expand its vast network of cameras across America through both public police department contracts and through more secretive agreements with private businesses.

Forbes has now uncovered at least four corporate giants using Flock, none of which had publicly disclosed contracts with the surveillance startup. As Forbes previously reported, $50 billion-valued Simon Property, the country’s biggest mall owner, and home improvement giant Lowe’s, are two of the biggest clients. Like FedEx, Simon Property also has provided its mall feeds to local cops.

[…]

Kaiser Permanente, the largest health insurance company in America, has shared Flock data with the Northern California Regional Intelligence Center, an intelligence hub that provides support to local and federal police investigating major crimes across California’s west coast

[…]

Flock’s senior vice president of policy and communications Joshua Thomas declined to comment on private customers. “Flock’s technology and tools help our customers bolster their public safety efforts by helping to deter and solve crime efficiently and objectively,” Thomas said. “Objective video evidence is crucial to solving crime and we support our customers sharing that evidence with those that they are legally allowed to do so with.”

He said Flock was helping to solve “thousands of crimes nationwide” and is working toward its “goal of leveraging technology to eliminate crime.” Forbes previously found that Flock’s marketing data had exaggerated its impact on crime rates and that the company had itself likely broken the law across various states by installing cameras without the right permits.

Source: FedEx’s Secretive Police Force Is Helping Cops Build An AI Car Surveillance Network

Signal, MEPs urge EU Council to drop law that puts a spy on everyone’s devices

On Thursday, the EU Council is scheduled to vote on a legislative proposal that would attempt to protect children online by disallowing confidential communication.

The vote had been set for Wednesday but got pushed back [PDF].

Known to detractors as Chat Control, the proposal seeks to prevent the online dissemination of child sexual abuse material (CSAM) by requiring internet service providers to scan digital communication – private chats, emails, social media messages, and photos – for unlawful content.

The proposal [PDF], recognizing the difficulty of explicitly outlawing encryption, calls for “client-side scanning” or “upload moderation” – analyzing content on people’s mobile devices and computers for certain wrongdoing before it gets encrypted and transmitted.

The idea is that algorithms running locally on people’s devices will reliably recognize CSAM (and whatever else is deemed sufficiently awful), block it, and/or report it to authorities. This act of automatically policing and reporting people’s stuff before it’s even had a chance to be securely transferred rather undermines the point of encryption in the first place.
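A stripped-down sketch of what “scanning before encryption” means in practice. Real proposals contemplate perceptual hashes or machine-learning classifiers rather than exact hashes, and the blocklist contents here are invented; the point is architectural: the check runs on your own device, against a list you don’t control, before any encryption happens:

```python
import hashlib

# A hypothetical on-device blocklist of hashes of known unlawful images.
# In a real deployment this list is supplied by an authority, not the user.
BLOCKLIST = {hashlib.sha256(b"known-bad-image-bytes").hexdigest()}

def scan_before_encrypt(attachment: bytes) -> bool:
    """Return True if the attachment may be encrypted and sent
    (i.e. it does not match the blocklist)."""
    return hashlib.sha256(attachment).hexdigest() not in BLOCKLIST

assert scan_before_encrypt(b"holiday photo") is True
assert scan_before_encrypt(b"known-bad-image-bytes") is False
```

Note that nothing in this architecture is specific to CSAM: whoever controls `BLOCKLIST` decides what gets flagged or blocked, which is precisely the critics’ objection.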

We’ve been here before. Apple announced plans to implement a client-side scanning scheme back in August 2021, only to face withering criticism from the security community and civil society groups. In late 2021, the iGiant essentially abandoned the idea.

Europe’s planned “regulation laying down rules to prevent and combat child sexual abuse” is not the only legislative proposal that contemplates client-side scanning as a way to front-run the application of encryption. The US Earn-It Act imagines something similar.

In the UK, the Online Safety Act of 2023 includes a content scanning requirement, though with the government’s acknowledgement that enforcement isn’t presently feasible. While it does allow telecoms regulator Ofcom to require online platforms to adopt an “accredited technology” to identify unlawful content, there is currently no such technology and it’s unclear how accreditation would work.

With the EU proposal vote approaching, opponents of the plan have renewed their calls to shelve the pre-crime surveillance regime.

In an open letter [PDF] on Monday, Meredith Whittaker, CEO of Signal, which threatened to withdraw its app from the UK if the Online Safety Act disallowed encryption, reiterated why the EU client-side scanning plan is unworkable and dangerous.

“There is no way to implement such proposals in the context of end-to-end encrypted communications without fundamentally undermining encryption and creating a dangerous vulnerability in core infrastructure that would have global implications well beyond Europe,” wrote Whittaker.

“Instead of accepting this fundamental mathematical reality, some European countries continue to play rhetorical games.

“They’ve come back to the table with the same idea under a new label. Instead of using the previous term ‘client-side scanning,’ they’ve rebranded and are now calling it ‘upload moderation.’

“Some are claiming that ‘upload moderation’ does not undermine encryption because it happens before your message or video is encrypted. This is untrue.”

The Internet Architecture Board, part of the Internet Engineering Task Force, offered a similar assessment of client-side scanning in December.

Encrypted comms service Threema published its open variation on this theme on Monday, arguing that mass surveillance is incompatible with democracy, is ineffective, and undermines data security.

“Should it pass, the consequences would be devastating: Under the pretext of child protection, EU citizens would no longer be able to communicate in a safe and private manner on the internet,” the biz wrote.

“The European market’s location advantage would suffer a massive hit due to a substantial decrease in data security. And EU professionals like lawyers, journalists, and physicians could no longer uphold their duty to confidentiality online. All while children wouldn’t be better protected in the least bit.”

Threema said if it isn’t allowed to offer encryption, it will leave the EU.

And on Tuesday, 37 Members of the European Parliament signed an open letter to the EU Council urging legislators to reject Chat Control.

“We explicitly warn that the obligation to systematically scan encrypted communication, whether called ‘upload-moderation’ or ‘client-side scanning,’ would not only break secure end-to-end encryption, but will to a high probability also not withstand the case law of the European Court of Justice,” the MEPs said. “Rather, such an attack would be in complete contrast to the European commitment to secure communication and digital privacy, as well as human rights in the digital space.”

Source: Signal, MEPs urge EU Council to drop encryption-eroding law • The Register

Hey, EU, stop spying on us! We are supposed to be the free ones here.

Sonos draws more customer anger — this time for its privacy policy. Now they will sell your customer data, apparently

It’s been a rocky couple of months for Sonos — so much so that CEO Patrick Spence now has a canned autoreply for customers emailing him to vent about the redesigned app. But as the company works to right the ship, restore trust, and get the new Sonos Ace headphones off to a strong start, it finds itself in the middle of yet another controversy.

As highlighted by repair technician and consumer privacy advocate Louis Rossmann, Sonos has made a significant change to its privacy policy, at least in the United States, with the removal of one key line. The updated policy no longer contains a sentence that previously said, “Sonos does not and will not sell personal information about our customers.” That pledge is still present in other countries, but it’s nowhere to be found in the updated US policy, which went into effect earlier this month.

Now, some customers, already feeling burned by the new Sonos app’s unsteady performance, are sounding off about what they view as another poor decision from the company’s leadership. For them, it’s been one unforced error after another from a brand they once recommended without hesitation.

[…]

As part of its reworked app platform, Sonos rolled out web-based access for all customer systems — giving the cloud an even bigger role in the company’s architecture. Unfortunately, the web app currently lacks any kind of two-factor authentication, which has also irked users; all it takes is an email address and password to remotely control Sonos devices.

[…]

Source: Sonos draws more customer anger — this time for its privacy policy – The Verge

If I had an “idiocy” tag, I would have used it for these bozos.

Google Leak Reveals Thousands of Privacy Incidents

Google has accidentally collected children’s voice data, leaked the trips and home addresses of carpool users, and made YouTube recommendations based on users’ deleted watch history, among thousands of other employee-reported privacy incidents, according to a copy of an internal Google database tracking six years’ worth of potential privacy and security issues obtained by 404 Media. From the report: Individually the incidents, most of which have not been previously publicly reported, may each impact only a relatively small number of people, or were fixed quickly. Taken as a whole, though, the internal database shows how one of the most powerful and important companies in the world manages, and often mismanages, a staggering amount of personal, sensitive data on people’s lives.

The data obtained by 404 Media includes privacy and security issues that Google’s own employees reported internally. These include issues with Google’s own products or data collection practices; vulnerabilities in third party vendors that Google uses; or mistakes made by Google staff, contractors, or other people that have impacted Google systems or data. The incidents include everything from a single errant email containing some PII, through to substantial leaks of data, right up to impending raids on Google offices. When reporting an incident, employees give the incident a priority rating, P0 being the highest, P1 being a step below that. The database contains thousands of reports over the course of six years, from 2013 to 2018. In one 2016 case, a Google employee reported that Google Street View’s systems were transcribing and storing license plate numbers from photos. They explained that Google uses an algorithm to detect text in Street View imagery.

Source: https://tech.slashdot.org/story/24/06/03/1655212/google-leak-reveals-thousands-of-privacy-incidents?utm_source=rss1.0mainlinkanon&utm_medium=feed

Top EU court says there is no right to online anonymity, because copyright is more important

A year ago, Walled Culture wrote about an extremely important case that was being considered by the Court of Justice of the European Union (CJEU), the EU’s top court. The central question was whether the judges considered that copyright was more important than privacy. The bad news is that the CJEU has just decided that it is:

The Court, sitting as the Full Court, holds that the general and indiscriminate retention of IP addresses does not necessarily constitute a serious interference with fundamental rights.

IP addresses refer to the identifying Internet number assigned to a user’s system when it is online. That may change each time someone uses the Internet, but if Internet Service Providers are required by law to retain information about who was assigned a particular address at a given time, then it is possible to carry out routine surveillance of people’s online activities. The CJEU has decided this is acceptable:

EU law does not preclude national legislation authorising the competent public authority, for the sole purpose of identifying the person suspected of having committed a criminal offence, to access the civil identity data associated with an IP address

The key problem is that the court regards copyright infringement by a private individual as something so serious that it negates the right to privacy. It's a sign of the twisted values that copyright has succeeded in imposing on many legal systems: it equates the mere copying of a digital file with serious crimes that merit a prison sentence, an evident absurdity.

As one of the groups that brought the original case, La Quadrature du Net, writes, this latest decision also has serious negative consequences for human rights in the EU:

Whereas in 2020, the CJEU considered that the retention of IP addresses constituted a serious interference with fundamental rights and that they could only be accessed, together with the civil identity of the Internet user, for the purpose of fighting serious crime or safeguarding national security, this is no longer true. The CJEU has reversed its reasoning: it now considers that the retention of IP addresses is, by default, no longer a serious interference with fundamental rights, and that it is only in certain cases that such access constitutes a serious interference that must be safeguarded with appropriate protection measures.

As a result, La Quadrature du Net says:

While in 2020 [the CJEU] stated that there was a right to online anonymity enshrined in the ePrivacy Directive, it is now abandoning it. Unfortunately, by giving the police broad access to the civil identity associated with an IP address and to the content of a communication, it puts a de facto end to online anonymity.

This is a good example of how copyright’s continuing obsession with ownership and control of digital material is warping the entire legal system in the EU. What was supposed to be simply a fair way of rewarding creators has resulted in a monstrous system of routine government surveillance carried out on hundreds of millions of innocent people just in case they copy a digital file.

Source: Top EU court says there is no right to online anonymity, because copyright is more important – Walled Culture

FCC fines America’s largest wireless carriers $200 million for selling customer location data without permission

The Federal Communications Commission has slapped the largest mobile carriers in the US with fines collectively worth nearly $200 million for selling access to their customers' location information without consent. AT&T was ordered to pay $57 million, while Verizon has to pay $47 million. Sprint and T-Mobile face a combined penalty of $92 million, since the two companies merged two years ago. The FCC conducted an in-depth investigation into the carriers' unauthorized disclosure and sale of subscribers' real-time location data after their activities came to light in 2018.

To sum up the practice in the words of FCC Commissioner Jessica Rosenworcel: The carriers sold "real-time location information to data aggregators, allowing this highly sensitive data to wind up in the hands of bail-bond companies, bounty hunters, and other shady actors." According to the agency, the scheme started to unravel following public reports that a sheriff in Missouri was tracking numerous individuals using location information that a company called Securus obtained from wireless carriers. Securus provides communications services to correctional facilities in the country.

While the carriers eventually ceased their activities, the agency said they continued operating their programs for a year after the practice was revealed, and after they promised the FCC that they would stop selling customer location data. Further, they carried on without reasonable safeguards in place to ensure that the legitimate services using their customers' information, such as roadside assistance and medical emergency services, were actually obtaining users' consent to track their locations.

Source: FCC fines America’s largest wireless carriers $200 million for selling customer location data

Nintendo hits over 8,500 GitHub repos with DMCA takedowns over Yuzu emulator code

Nintendo sent a Digital Millennium Copyright Act (DMCA) notice for over 8,000 GitHub repositories hosting code from the Yuzu Switch emulator, which the Zelda maker previously described as enabling "piracy at a colossal scale." The sweeping takedown comes two months after Yuzu's creators quickly settled a lawsuit with Nintendo and its notoriously trigger-happy legal team for $2.4 million.

GamesIndustry.biz first reported on the DMCA notice, which affects 8,535 GitHub repos. Redacted entities representing Nintendo assert that the Yuzu source code contained in the repos "illegally circumvents Nintendo's technological protection measures and runs illegal copies of Switch games."

GitHub wrote on the notice that developers will have time to change their content before it's disabled. In keeping with its developer-friendly approach and branding, the Microsoft-owned platform also offered legal resources and guidance on submitting DMCA counter-notices.

Nintendo's legal blitz, perhaps not coincidentally, comes as game emulators enjoy a resurgence. Last month, Apple loosened its App Store restrictions on retro game emulators (likely in response to regulatory threats), leading to the Delta emulator establishing itself as the de facto choice and reaching the App Store's top spot. Nintendo may have calculated that emulators' moment in the sun threatened its bottom line, and began by squashing those that most immediately imperiled its income stream.

Sadly, Nintendo's largely unchallenged legal assault on emulators ignores a crucial use for them that has nothing to do with piracy. Game historians see the software as a linchpin of game preservation. Without emulators, Nintendo and other copyright holders could make a part of history inaccessible to future generations, as the corresponding hardware will eventually be harder to come by.

Helldivers 2 PC players suddenly have to link to a PSN account and they're not being chill about it

[…]

This has royally pissed off PC players, though it's worth noting that it's free to make a PSN account. The requirement has led to review bombing on Steam and many promises to abandon the game when the linking becomes mandatory, according to a report by Kotaku. The complaints range from frustration over adding yet another barrier to entry after downloading an 80GB game to fears that a PSN account could be hacked. While it is true that Sony was the target of a huge hack that impacted 77 million PSN accounts, that was back in 2011. Obama was still in his first term. Also worth noting? Steam was hacked in 2011 too, impacting 35 million accounts.

[…]

Source: Helldivers 2 PC players suddenly have to link to a PSN account and they're not being chill about it

People Are Slowly Realizing Their Auto Insurance Rates Are Skyrocketing Because Their Car Is Covertly Spying On Them

Last month the New York Times' Kashmir Hill published a major story on how GM collects driver behavior data, then sells access (through LexisNexis) to insurance companies, which then jack up your rates.

The absolute bare minimum you could expect from the auto industry here is that they'd do this in a way that's clear to car owners. But of course they aren't: they're burying "consent" deep in the mire of some hundred-page end-user agreement nobody reads, often attached not to the car purchase itself but to the apps consumers use to manage roadside assistance and other features.

Since Kashmir's story was published, she says she's been inundated with complaints from consumers about similar behavior. She's even discovered that she's one of the people GM spied on and tattled about to insurers. In a follow-up story, she recounts how she and her husband bought a Chevy Bolt, were auto-enrolled in a driver assistance program, then had their data (which they couldn't access) sold to insurers.

GM’s now facing 10 different federal lawsuits from customers pissed off that they were surreptitiously tracked and then forced to pay significantly more for insurance:

“In 10 federal lawsuits filed in the last month, drivers from across the country say they did not knowingly sign up for Smart Driver but recently learned that G.M. had provided their driving data to LexisNexis. According to one of the complaints, a Florida owner of a 2019 Cadillac CTS-V who drove it around a racetrack for events saw his insurance premium nearly double, an increase of more than $5,000 per year.”

GM (and some apologists) will of course proclaim that it's only fair for reckless drivers to pay more, but that's generally not how it works. Pressured for unlimited quarterly returns, insurance companies will use absolutely anything they can find in the data to justify raising rates.

[…]

Automakers — which have long had some of the worst privacy reputations in all of tech — are one of countless industries that lobbied relentlessly for decades to ensure Congress never passed a federal privacy law or regulated dodgy data brokers. And that the FTC — the over-burdened regulator tasked with privacy oversight — lacks the staff, resources, or legal authority to police the problem at any real scale.

The end result is just a parade of scandals. And if Hill were so inclined, she could write a similar story about every tech sector in America, given everything from your smart TV and electricity meter to refrigerator and kids’ toys now monitor your behavior and sell access to those insights to a wide range of dodgy data broker middlemen, all with nothing remotely close to ethics or competent oversight.

And despite the fact that this free-for-all environment is resulting in no end of dangerous real-world harms, our Congress has been lobbied into gridlock by a cross-industry coalition of companies with near-unlimited budgets, all desperately hoping that their performative concerns about TikTok will distract everyone from the fact that we live in a country too corrupt to pass a real privacy law.

Source: People Are Slowly Realizing Their Auto Insurance Rates Are Skyrocketing Because Their Car Is Covertly Spying On Them | Techdirt