Google deletes net-zero pledge from sustainability website

Google’s CEO Sundar Pichai stood smiling in a leafy-green California garden in September 2020 and declared that the IT behemoth was entering the “most ambitious decade yet” in its climate action.

“Today, I’m proud to announce that we intend to be the first major company to operate carbon free — 24 hours a day, seven days a week, 365 days a year,” he said, in a video announcement at the time.

Pichai added that he knew the “road ahead would not be easy,” but Google “aimed to prove that a carbon-free future is both possible and achievable fast enough to prevent the most dangerous impacts of climate change.”

Five years on, just how hard Google’s “energy journey” would become is clear. In June, Google’s Sustainability website proudly boasted a headline pledge to achieve net-zero emissions by 2030. By July, that had all changed.

An investigation by Canada’s National Observer has found that Google’s net-zero pledge has quietly been scrubbed, demoted from having its own section on the site to an entry in the appendices of the company’s sustainability report.

Genna Schnurbach, an external spokesperson for Google, referring to its Environment 2025 report, told us: “As you can see from the document, Google is still committed to their ambition of net-zero by 2030.”

By tracing back through the history of Google’s Sustainability website, however, we found that the company edited it in late June, removing almost all mention of its lauded net-zero goals. (A separate website referring to data centres specifically has maintained its existing language around net-zero commitments.)

Five years ago, Google’s climate action ambitions were the gold standard for Big Tech. Then came the power demand spikes from AI data centres, and in July the company scrubbed its sustainability website of its 2030 net-zero pledge.

The page on Operating Sustainably has been rebranded to Operations, and the section on net-zero carbon was deleted. In its place is a new priority area: Energy.

[…]

Source: Google deletes net-zero pledge from sustainability website | Canada’s National Observer: Climate News

Google hit with $3.45 billion EU antitrust fine over adtech practices, where a US judge also found guilt but refused to punish

Alphabet’s Google was hit with a 2.95-billion-euro ($3.45 billion) European Union antitrust fine on Friday for anti-competitive practices in its lucrative adtech business, a sharp sanction that riled up U.S. President Donald Trump.

The fine, the fourth penalty Google has faced in its decade-long fight with EU competition regulators, follows bubbling trade tensions between major global powers and U.S. threats of retaliation over EU scrutiny of American tech firms.

Trump said in a post on Truth Social that the action was “unfair” and “discriminatory” and later told reporters he will take the matter up with the EU directly.

“We cannot let this happen to brilliant and unprecedented American Ingenuity and, if it does, I will be forced to start a Section 301 proceeding to nullify the unfair penalties being charged to these Taxpaying American Companies,” Trump said.

Section 301 of the Trade Act of 1974 allows the United States to penalize foreign countries that engage in acts that are “unjustifiable” or “unreasonable,” or burden U.S. commerce.

The European Commission’s action was triggered by a complaint from the European Publishers Council. Trump, who has hit Europe with trade tariffs, has threatened to retaliate against the EU for any pushback against Big Tech.

“I will be speaking to the European Union,” Trump told reporters at the White House on Friday.

While Google plans to appeal, the Commission has warned of stronger remedies – including potential divestitures – if the company fails to address its conflicts of interest. The case underscores growing transatlantic friction over digital market regulation and the EU’s push to rein in dominant platforms.

The EU competition enforcer had originally planned to hand out the fine on Monday, but opposition from EU trade chief Maros Sefcovic, who was concerned about the impact on U.S. tariffs on European cars, derailed EU antitrust chief Teresa Ribera’s plan.

The Commission said Google favoured its own online display technology services that reinforced its own ad exchange AdX’s central role in the adtech supply chain and allowed Google to charge high fees for its service, to the detriment of rivals and online publishers.

Google has abused its market power from 2014 until today, the EU watchdog said.

It ordered Google to stop the self-preferencing practices and take measures to cease its inherent conflicts of interest. The company has 60 days to inform the Commission how it plans to comply with this order, and another 30 days to do so.

The Commission reiterated its preliminary view that Google should divest part of its services but said it wants to first hear and assess Google’s compliance efforts, confirming a Reuters story last year.

[…]

Source: Google hit with $3.45 billion EU antitrust fine over adtech practices

It is good to see that at least the EU has the guts to do something about these monopolistic practices.

See also: EU Google antitrust penalty halted by low level commissioner amid Trump’s tariff threats

Judge who ruled Google is a monopoly says no need for punishment.

The worst possible antitrust outcome – unless you are Google

Scientists tap fresh water under the sea, raising hopes for a thirsty world

Deep in Earth’s past, an icy landscape became a seascape as the ice melted and the oceans rose off what is now the northeastern United States. Nearly 50 years ago, a U.S. government ship searching for minerals and hydrocarbons in the area drilled into the seafloor to see what it could find.

It found, of all things, drops to drink under the briny deeps — fresh water.

This summer, a first-of-its-kind global research expedition followed up on that surprise. Drilling for fresh water under the salt water off Cape Cod, Expedition 501 extracted thousands of samples from what is now thought to be a massive, hidden aquifer stretching from New Jersey as far north as Maine.

The sun sets behind the Liftboat Robert platform, home of Expedition 501, a global research expedition drilling for fresh water, in the North Atlantic, Saturday, July 19, 2025. (AP Photo/Carolyn Kaster)

It’s just one of many depositories of “secret fresh water” known to exist in shallow salt waters around the world that might some day be tapped to slake the planet’s intensifying thirst, said Brandon Dugan, the expedition’s co-chief scientist.

[…]

They’re out to solve the mystery of its origins — whether the water is from glaciers, connected groundwater systems on land or some combination.

The potential is enormous. So are the hurdles of getting the water out and puzzling over who owns it, who uses it and how to extract it without undue harm to nature.

[…]

Why try? In just five years, the U.N. says, the global demand for fresh water will exceed supplies by 40%. Rising sea levels from the warming climate are souring coastal freshwater sources while data centers that power AI and cloud computing are consuming water at an insatiable rate.

[…]

Source: Scientists tap fresh water under the sea, raising hopes for a thirsty world | AP News

Anthropic Agrees to $1.5 Billion Settlement for Downloading Pirated Books to Train AI

Anthropic has agreed to pay $1.5 billion to settle a lawsuit brought by authors and publishers over its use of millions of copyrighted books to train the models for its AI chatbot Claude, according to a legal filing posted online.

A federal judge found in June that Anthropic’s use of the books to train its models was protected under fair use, but that holding 7 million pirated works in a “central library” violated copyright law. The judge ruled that executives at the company knew they were downloading pirated works, and a trial was scheduled for December.

The settlement, which was presented to a federal judge on Friday, still needs final approval but would pay $3,000 per book to hundreds of thousands of authors, according to the New York Times. The $1.5 billion settlement would be the largest payout in the history of U.S. copyright law, though the amount paid per work has often been higher. For example, in 2012, a woman in Minnesota paid about $9,000 per song downloaded, a figure brought down after she was initially ordered to pay over $60,000 per song.

In a statement to Gizmodo on Friday, Anthropic touted the earlier ruling from June that it was engaging in fair use by training models with millions of books.

“In June, the District Court issued a landmark ruling on AI development and copyright law, finding that Anthropic’s approach to training AI models constitutes fair use,” Aparna Sridhar, deputy general counsel at Anthropic, said in a statement by email.

[…]

Source: Anthropic Agrees to $1.5 Billion Settlement for Downloading Pirated Books to Train AI

Just to be clear: using books to train AI was fine. Pirating the books, however, was not. Completely incredible that these guys pirated the books. With mistakes this idiotic, I would not invest in Anthropic ever, at all.

BMW kills home assistant integration access to paid ConnectedDrive API to “protect security”

So you pay hundreds yearly for access to the ConnectedDrive API. You use Home Assistant to set the charging times of your BMW depending on when the price of electricity is low. BMW shuts you down (no reimbursement, of course) and forces you to use one of their Charge Point providers. BMW then says it’s because of security.
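
As an illustration of what is being lost here, the sketch below shows the kind of logic such a Home Assistant automation performs: pick the cheapest contiguous hours from a day-ahead electricity price list and charge the car then. This is a minimal, hypothetical example in plain Python – the prices and the final scheduling call are made up, and it is not BMW’s or Home Assistant’s actual API.

```python
# Toy sketch: choose the cheapest contiguous charging window from
# hypothetical day-ahead hourly prices (EUR/kWh). This stands in for the
# automation people built on top of the paid ConnectedDrive API.
from datetime import time

hourly_prices = [0.31, 0.29, 0.26, 0.22, 0.19, 0.18, 0.21, 0.27,  # 00:00-07:00
                 0.33, 0.35, 0.32, 0.30, 0.28, 0.27, 0.29, 0.31,  # 08:00-15:00
                 0.36, 0.39, 0.41, 0.38, 0.34, 0.30, 0.27, 0.24]  # 16:00-23:00

def cheapest_window(prices, hours_needed=4):
    """Return (start_hour, average_price) of the cheapest contiguous block."""
    best_start, best_avg = 0, float("inf")
    for start in range(len(prices) - hours_needed + 1):
        avg = sum(prices[start:start + hours_needed]) / hours_needed
        if avg < best_avg:
            best_start, best_avg = start, avg
    return best_start, best_avg

start, avg = cheapest_window(hourly_prices, hours_needed=4)
print(f"Charge from {time(start)} for 4 h (avg {avg:.2f} EUR/kWh)")
# A real automation would now call the car's charging-schedule endpoint --
# exactly the access that has just been cut off.
```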

I guess things are going very badly for BMW when they are making deals like this one as well as charging you subscriptions to use stuff in your car that you already paid for.

18 popular VPNs turn out to belong to 3 different owners – and contain insecurities as well

A new peer-reviewed study alleges that 18 of the 100 most-downloaded virtual private network (VPN) apps on the Google Play Store are secretly connected in three large families, despite claiming to be independent providers. The paper doesn’t indict any of our picks for the best VPN, but the services it investigates are popular, with 700 million collective downloads on Android alone.

The study, published in the journal of the Privacy Enhancing Technologies Symposium (PETS), doesn’t just find that the VPNs in question failed to disclose behind-the-scenes relationships, but also that their shared infrastructures contain serious security flaws. Well-known services like Turbo VPN, VPN Proxy Master and X-VPN were found to be vulnerable to attacks capable of exposing a user’s browsing activity and injecting corrupted data.

Titled “Hidden Links: Analyzing Secret Families of VPN apps,” the paper was inspired by an investigation by VPN Pro, which found that several VPN companies were each selling multiple apps without identifying the connections between them. This spurred the “Hidden Links” researchers to ask whether the relationships between secretly co-owned VPNs could be documented systematically.

[…]

Family A consists of Turbo VPN, Turbo VPN Lite, VPN Monster, VPN Proxy Master, VPN Proxy Master Lite, Snap VPN, Robot VPN and SuperNet VPN. These were found to be shared between three providers — Innovative Connecting, Lemon Clove and Autumn Breeze. All three have been linked to Qihoo 360, a firm based in mainland China and identified as a “Chinese military company” by the US Department of Defense.

Family B consists of Global VPN, XY VPN, Super Z VPN, Touch VPN, VPN ProMaster, 3X VPN, VPN Inf and Melon VPN. These eight services, which are shared between five providers, all use the same IP addresses from the same hosting company.

Family C consists of X-VPN and Fast Potato VPN. Although these two apps each come from a different provider, the researchers found that both used very similar code and included the same custom VPN protocol.
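
As a concrete illustration of the shared-infrastructure signal behind the Family B finding (eight services resolving to the same IP addresses at the same hosting company), here is a toy sketch – not the paper’s actual methodology – that groups apps by the IPs their backend hostnames resolve to. The app names and hostnames are hypothetical placeholders.

```python
# Toy sketch: flag VPN apps whose backend hostnames resolve to the same IPs,
# a hint of shared infrastructure (and possibly shared ownership).
import socket
from collections import defaultdict

apps = {
    "ExampleVPN-A": "api.example-vpn-a.invalid",  # hypothetical endpoints
    "ExampleVPN-B": "api.example-vpn-b.invalid",
    "ExampleVPN-C": "api.example-vpn-c.invalid",
}

by_ip = defaultdict(list)
for app, host in apps.items():
    try:
        for ip in {info[4][0] for info in socket.getaddrinfo(host, 443)}:
            by_ip[ip].append(app)
    except socket.gaierror:
        pass  # the placeholder domains above will not resolve

# Apps sharing an IP (or, more robustly, an AS / hosting provider) are
# candidates for common infrastructure.
for ip, names in by_ip.items():
    if len(names) > 1:
        print(ip, "is shared by", names)
```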

If you’re a VPN user, this study should concern you for two reasons. The first problem is that companies entrusted with your private activities and personal data are not being honest about where they’re based, who owns them or who they might be sharing your sensitive information with. Even if their apps were all perfect, this would be a severe breach of trust.

But their apps are far from perfect, which is the second problem. All 18 VPNs across all three families use the Shadowsocks protocol with a hard-coded password, which makes them susceptible to takeover from both the server side (which can be used for malware attacks) and the client side (which can be used to eavesdrop on web activity).
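
To make the hard-coded password point concrete: Shadowsocks derives its master key from the password alone, using an OpenSSL-style EVP_BytesToKey construction, so anyone who reads that password out of the app can derive the same key. Below is a minimal sketch of that derivation (the password is a made-up example, and real Shadowsocks additionally mixes in a per-session salt – but the only secret is still the password).

```python
import hashlib

def shadowsocks_master_key(password: bytes, key_len: int = 32) -> bytes:
    # OpenSSL EVP_BytesToKey-style derivation used by Shadowsocks:
    # keep appending MD5(previous_digest + password) until key_len bytes.
    key, prev = b"", b""
    while len(key) < key_len:
        prev = hashlib.md5(prev + password).digest()
        key += prev
    return key[:key_len]

# With a password hard-coded into 18 published apps, anyone can compute the
# same master key, so the tunnel offers no confidentiality against them.
print(shadowsocks_master_key(b"example-hardcoded-password").hex())
```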

[…]

 

Source: Researchers find alarming overlaps among 18 popular VPNs

Batshit crazy UK judge rules you can’t be fired for calling your bosses dickheads

Managers and supervisors brace yourselves: calling the boss a dickhead is not necessarily a sackable offence, a tribunal has ruled.

The ruling came in the case of an office manager who was sacked on the spot when – during a row – she called her manager and another director dickheads.

Kerrie Herbert has been awarded almost £30,000 in compensation and legal costs after an employment tribunal found she had been unfairly dismissed.

The employment judge Sonia Boyes ruled that the scaffolding and brickwork company she worked for had not “acted reasonably in all the circumstances in treating [her] conduct as a sufficient reason to dismiss her”.

“She made a one-off comment to her line manager about him and a director of the business,” Boyes said. “The comment was made during a heated meeting.

“Whilst her comment was not acceptable, there is no suggestion that she had made such comments previously. Further … this one-off comment did not amount to gross misconduct or misconduct so serious to justify summary dismissal.”

The hearing in Cambridge was told Herbert started her £40,000-a-year role at the Northampton firm Main Group Services in October 2018. The business was run by Thomas Swannell and his wife, Anna.

The tribunal heard that in May 2022 the office manager had found documents in her boss’s desk about the costs of employing her, and became upset as she believed he was going to let her go.

When Swannell then raised issues about her performance, she began crying, the hearing was told.

She told the tribunal that she said: “If it was anyone else in this position they would have walked years ago due to the goings-on in the office, but it is only because of you two dickheads that I stayed.”

She said Swannell retorted: “Don’t call me a fucking dickhead or my wife. That’s it, you’re sacked. Pack your kit and fuck off.”

[…]

Boyes found that Herbert was summarily fired because of her use of the word “dickheads” and ruled that the company had failed to follow proper disciplinary procedures.

She concluded that calling her bosses dickheads was not sufficient to fire Herbert and ordered the firm to pay £15,042.81 in compensation.

In her latest judgment she also ruled it had to pay £14,087 towards her legal fees.

Source: Calling boss a dickhead was not a sackable offence, tribunal rules | Employment tribunals | The Guardian

AI Slop Is Great For Internet (Re-)Decentralisation

In this article I take a look at AI Slop and how it is affecting the current internet. I also look at what exactly the internet of today looks like – it is hugely centralised. This centralisation creates a focused trashcan for the AI-generated slop. This is exactly the opportunity that curated content creators need to shine and show relevant, researched, innovative and original content on smaller, decentralised content platforms.

What is AI Slop?

As GPTs swallow more and more data, they are increasingly used to make more “AI slop”. This is “low- to mid-quality content – video, images, audio, text or a mix – created with AI tools, often with little regard for accuracy. It’s fast, easy and inexpensive to make this content. AI slop producers typically place it on social media to exploit the economics of attention on the internet, displacing higher-quality material that could be more helpful.” (Source: What is AI slop? A technologist explains this new and largely unwelcome form of online content).

Recent examples include Facebook content, careless speech (especially on bought-up abandoned news sites), Reddit posts, fake leaked merchandise, and inaccurate Boring History videos, alongside the more damaging fake political images – well, you get the point, I think.

A lot has been written about the damaging effects of AI slop, leading to reduced attention and cognitive fatigue, feelings of emptiness and detachment, commoditised homogeneous experiences, etc.

However, there may be a bright spot on the horizon. Bear with me for some background, though.

Centralisation of Content

It turns out that Netflix alone is responsible for 14.9% of global internet traffic. YouTube for 11.6%.

Infographic: Netflix is Responsible for 15% of Global Internet Traffic | Statista

Sandvine’s 2024 Global Internet Phenomena Report shows that 65% of all fixed internet traffic and 68% of all mobile traffic is driven through eight internet giants.


This concentration of the internet is not something new and has been studied for some time:

A decade ago, there was a much greater variety of domains within links posted by users of Reddit, with more than 20 different domains for every 100 random links users posted. Now there are only about five different domains for every 100 links posted.

In fact, between 60-70 percent of all attention on key social media platforms is focused towards just ten popular domains.

Beyond social media platforms, we also studied linkage patterns across the web, looking at almost 20 billion links over three years. These results reinforced the “rich are getting richer” online.

The authority, influence, and visibility of the top 1,000 global websites (as measured by network centrality or PageRank) is growing every month, at the expense of all other sites.

Source: The Same Handful of Websites Are Dominating The Web And That Could Be a Problem / Evolution of diversity and dominance of companies in online activity (2021)

The online economy’s lack of diversity can be seen most clearly in technology itself, where a power disparity has grown in the last decade, leaving the web in the control of fewer and fewer. Google Search makes up 92% of all web searches worldwide. Its browser, Chrome, which defaults to Google Search, is used by nearly two thirds of users worldwide.

Source: StatCounter Global Stats – Search Engine Market Share

Media investment analysis firm Ebiquity found that nearly half of all advertising spend is now digital, with Google, Meta (formerly Facebook) and Amazon single-handedly collecting nearly three quarters of digital advertising money globally in 2021.

Source: Grandstand platforms (2022)

And of course we know that news sites have been closing as advertisers flock to social media sites, leading to a dearth of trustworthy, ethical, rules-bound journalism.

Centralisation of Underlying Technologies

And it’s not just the content we consume that has been centralised: The underlying technologies of the internet have been centralised as well. The Internet Society shows that data centres, DNS, top-level domains, SSL certificates, content delivery networks and web hosting have been significantly centralised too.

In some of these protocols there is more variation within regions:

We highlight regional patterns that paint a richer picture of provider dependence and insularity than we can through centralization alone. For instance, the Commonwealth of Independent States (CIS) countries (formed following the dissolution of Soviet Union) exhibit comparatively low centralization, but depend highly on Russian providers. These patterns suggest possible political, historical, and linguistic undercurrents of provider dependence. In addition, the regional patterns we observe between layers of website infrastructure enable us to hypothesize about forces of influence driving centralization across multiple layers. For example, many countries are more insular in their choice of TLD given the limited technical implications of TLD choice. On the other extreme, certificate authority (CA) centralization is far more extreme than other layers due to popular web browsers trusting only a handful of CAs, nearly all of which are located in the United States.

Source: On the Centralization and Regionalization of the Web (2024)

Why is this? A lot of it has to do with the content providers wanting to gather as much data as possible on their users, as well as being able to offer them a fast, seamless experience (so that they stay engaged on their platforms):

The more information you have about people, the more information you can feed your machine-learning process to build detailed profiles about your users. Understanding your users means you can predict what they will like, what they will emotionally engage with, and what will make them act. The more you can engage users, the longer they will use your service, enabling you to gather more information about them. Knowing what makes your users act allows you to convert views into purchases, increasing the provider’s economic power.

The virtuous cycle is related to the network effect. The value of a network is exponentially related to the number of people connected to the network. The value of the network increases as more people connect, because the information held within the network increases as more people connect.

Who will extract the value of those data? Those located in the center of the network can gather the most information as the network increases in size. They are able to take the most advantage of the virtuous cycle. In other words, the virtuous cycle and the network effect favor a smaller number of complex services. The virtuous cycle and network effect drive centralization.

[…]

How do content providers, such as social media services, increase user engagement when impatience increases and attention spans decrease? One way is to make their service faster. While there are many ways to make a service faster, two are of particular interest here.

First, move content closer to the user. […] Second, optimize the network path.

[…]

Moving content to the edge and optimizing the network path requires lots of resources and expertise. Like most other things, the devices, physical cabling, buildings, and talent required to build large computer networks are less expensive at scale

[…]

Over time, as the Internet has grown, new regulations and ways of doing business have been added, and new applications have been added “over the top,” the complexity of Internet systems and protocols has increased. As with any other complex ecosystem, specialization has set in. Almost no one knows how “the whole thing works” any longer.

How does this drive centralization?

Each feature—or change at large—increases complexity. The more complex a protocol is, the more “care and feeding” it requires. As a matter of course, larger organizations are more capable of hiring, training, and keeping the specialized engineering talent required to build and maintain these kinds of complex systems.

Source: The Centralization of the Internet (2021)

So what does this have to do with AI Slop?

As more and more AI Slop is generated, debates are raging in many communities. Especially in the gaming and art communities, there is a lot of militant railing against AI art. In 2023 a study showed that people were worried about AI generated content, but unable to detect it:

research employed an online survey with 100 participants to collect quantitative data on their experiences and perceptions of AI-generated content. The findings indicate a range of trust levels in AI-generated content, with a general trend towards cautious acceptance. The results also reveal a gap between the participants’ perceived and actual abilities to distinguish between AI-generated content, underlining the need for improved media literacy and awareness initiatives. The thematic analysis of the respondent’s opinions on the ethical implications of AI-generated content underscored concerns about misinformation, bias, and a perceived lack of human essence.

Source: The state of AI: Exploring the perceptions, credibility, and trustworthiness of the users towards AI-Generated Content

However, politics has caught up, and in the EU and US policies have emerged that force AI content generators to also support the creation of reliable detectors for the content they generate:

In this paper, we begin by highlighting an important new development: providers of AI content generators have new obligations to support the creation of reliable detectors for the content they generate. These new obligations arise mainly from the EU’s newly finalised AI Act, but they are enhanced by the US President’s recent Executive Order on AI, and by several considerations of self-interest. These new steps towards reliable detection mechanisms are by no means a panacea—but we argue they will usher in a new adversarial landscape, in which reliable methods for identifying AI-generated content are commonly available. In this landscape, many new questions arise for policymakers. Firstly, if reliable AI-content detection mechanisms are available, who should be required to use them? And how should they be used? We argue that new duties arise for media companies, and for Web search companies, in the deployment of AI-content detectors. Secondly, what broader regulation of the tech ecosystem will maximise the likelihood of reliable AI-content detectors? We argue for a range of new duties, relating to provenance-authentication protocols, open-source AI generators, and support for research and enforcement. Along the way, we consider how the production of AI-generated content relates to ‘free expression’, and discuss the important case of content that is generated jointly by humans and AIs.

Source: AI content detection in the emerging information ecosystem: new obligations for media and tech companies (2024)

This means that although people may or may not get better at spotting AI-generated slop for what it is, work is being done on showing it up for us.
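
The excerpt above does not specify how such detectors work, but to give a flavour of the naive, purely statistical end of the spectrum (and why provenance- and watermark-based approaches are being mandated instead), here is a small sketch that scores text by its perplexity under a public language model; unusually low perplexity is one weak hint of machine-generated text. The model choice is purely illustrative and this is nothing like a production detector.

```python
# Naive "AI-text" signal: perplexity under a small public language model.
# Requires the `transformers` and `torch` packages.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "gpt2"  # small public model, purely for illustration
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
model.eval()

def perplexity(text: str) -> float:
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        out = model(**enc, labels=enc["input_ids"])  # loss = mean cross-entropy
    return float(torch.exp(out.loss))

# Lower perplexity = more "predictable" text. Real detectors combine many
# signals (and increasingly provenance metadata), because this one alone
# is easy to fool.
print(perplexity("The quick brown fox jumps over the lazy dog."))
```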

With the main content providers being inundated with AI trash, and that trash being shown up for what it is, people will get bored of it. This gives other parties – those able to curate their content – room for growth: offering high-quality content that differentiates itself from other high-quality content sites, and especially from the central repositories of AI-filled garbage. Existing parties and smaller new parties have an incentive to create and innovate. Of course that content will be used to fill the GPTs, but that should increase the accuracy of the GPTs that are paying attention (and which should be able to filter out AI slop better than any human could), which will hopefully redirect their answers to their sources – as explainability is becoming more and more relevant legally.

So, together with the rise of anti-Google sentiment and opportunities to DeGoogle – leading to new (and de-shittified, working, and non-US!) search engines such as Qwant and SearXNG – I see this as an excellent opportunity for the (relatively) little man to rise up again and diversify and decentralise the internet.

The worst possible antitrust outcome – unless you are Google

Last year, Google lost an antitrust case to Biden’s DoJ. The DoJ lawyers beat Google like a drum, proving beyond a shadow of a doubt that Google had deliberately sought to create and maintain a monopoly over search, and that they’d used that monopoly to make search materially worse, while locking competitors out of the market.

In other words, the company that controls 90% of search attained that control by illegal means, and, having thus illegitimately become the first port of call for the information-seeking world, had deliberately worsened its product to make more money:

https://pluralistic.net/2024/04/24/naming-names/#prabhakar-raghavan

That Google lost that case was a minor miracle. First, because for 40 years, the richest, most terrible people in the world have been running a literal re-education camp for judges where they get luxe rooms and fancy meals and lectures about how monopolies are good, actually:

https://pluralistic.net/2021/08/13/post-bork-era/#manne-down

But second, because Judge Amit Mehta decided that the Google case should be shrouded in mystery, suppressing the publication of key exhibits and banning phones, cameras and laptops from the courtroom, with the effect that virtually no one even noticed that the most important antitrust case in tech history, a genuine trial of the century, was underway:

https://www.promarket.org/2023/10/27/google-monopolizes-judicial-system-information-with-trial-secrecy/

This is really important. The government doesn’t have to win an antitrust trial in order to create competition. As the saying goes, “the process is the punishment.” Bill Gates was so personally humiliated by his catastrophic performance at his deposition for the Microsoft antitrust trial that he elected not to force-choke the nascent Google, lest he be put back in the deposition chair:

https://pluralistic.net/2020/09/12/whats-a-murder/#miros-tilde-1
But Judge Mehta turned his courtroom into a Star Chamber, a black hole whence no embarrassing information about Google’s wicked deeds could emerge. That meant that the only punishment Google would have to bear from this trial would come after the government won its case, when the judge decided on a punishment (the term of art is “remedy”) for Google.

Yesterday, he handed down that remedy and it is as bad as it could be. In fact, it is likely the worst possible remedy for this case:

https://gizmodo.com/google-wont-have-to-sell-chrome-browser-after-all-but-theres-a-catch-2000652304

Let’s start with what’s not in this remedy. Google will not be forced to sell off any of its divisions – not Chrome, not Android. Despite the fact that the judge found that Google’s vertical integration with the world’s dominant mobile operating system and browser were a key factor in its monopolization, Mehta decided to leave the Google octopus with all its limbs intact:

https://pluralistic.net/2024/11/19/breaking-up-is-hard-to-do/#shiny-and-chrome

Google won’t be forced to offer users a “choice screen” when they set up their Android accounts, to give browsers other than Chrome a fair shake:

https://pluralistic.net/2024/08/12/defaults-matter/#make-up-your-mind-already

Nor will Google be prevented from bribing competitors to stay out of the search market. One of the facts established in the verdict was that Google had been slipping Apple more than $20b/year in exchange for which, Apple forbore from making a competing search engine. This exposed every Safari and iOS user to Google surveillance, while insulating Google from the threat of an Apple competitor.

And then there’s Google’s data. Google is the world’s most prolific surveiller, and the company boasts to investors about the advantage that its 24/7 spying confers on it in the search market, because Google knows so much about us and can therefore tailor our results. Even if this is true – a big if – it’s nevertheless a fucking nightmare. Google has stolen every fact about our lives, in service to propping up a monopoly that lets it steal our money, too. Any remedy worth the name would have required Google to delete (“disgorge,” in law-speak) all that data:

https://pluralistic.net/2024/08/07/revealed-preferences/#extinguish-v-improve

Some people in the antitrust world didn’t see it that way. Out of a misguided kind of privacy nihilism, they called for Google to be forced to share the data it stole from us, so that potential competitors could tune their search tools on the monopolist’s population-scale privacy violations.

And that is what the court has ordered.

As punishment for being convicted of obtaining and maintaining a monopoly, Google will be forced to share sensitive data with lots of other search engines. This will not secure competition for search, but it will certainly democratize human rights violations at scale.

Doubtless there will be loopholes in this data-sharing order. Google will have the right to hold back some of its data (that is, our data) if it is deemed “sensitive.” This isn’t so much a loophole as it is a loopchasm. I’ll bet you a testicle⹋ that Google will slap a “sensitive” label on any data that might be the least bit useful to its competitors.

⹋not one of mine

This means that even if you like data-sharing as a remedy, you won’t actually get the benefit you were hoping for. Instead, Google competitors will spend the next decade in court, fighting to get Google to comply with this order.

That’s the main reason that we force monopolists to break up after they lose antitrust cases. We could put a bunch of conditions on how they operate, but figuring out whether they’re adhering to those conditions and punishing them when they don’t is expensive, labor-intensive and time consuming. This data-sharing wheeze is easy to do malicious compliance for, and hard to enforce. It is not an “administrable” policy:

https://locusmag.com/2022/03/cory-doctorow-vertically-challenged/

This is all downside. If Google complies with the order, it will constitute a privacy breach on a scale never before seen. If they don’t comply with the order, it will starve competitors of the one tiny drop of hope that Judge Mehta squeezed out of his pen. It’s a catastrophe. An utter, total catastrophe. It has zero redeeming qualities. Hope you like enshittification, folks, because Judge Mehta just handed Google an eternal licence to enshittify the entire fucking internet.

It’s impossible to overstate how fucking terrible Mehta’s reasoning in this decision is. The Economic Liberties project calls it “judicial cowardice” and compared the ruling to “finding someone guilty for bank robbery and then sentencing him to write a thank you note”:

https://www.economicliberties.us/press-release/doj-states-must-appeal-judge-mehtas-act-of-judicial-cowardice-letting-google-keep-its-monopoly-power/

Matt Stoller says it’s typical of today’s “lawlessness, incoherence and deference to big business”:

https://www.thebignewsletter.com/p/a-judge-lets-google-get-away-with

David Dayen’s scorching analysis in The American Prospect calls it “embarrassing”:

https://prospect.org/justice/2025-09-03-embarrassing-ruling-allows-google-search-monopoly/

Dayen points out the many ways in which Mehta ignored his own findings, ignored the Supreme Court. Mehta wrote:

This court, however, need not decide this issue, because there are independent reasons that remedies designed to eliminate the defendant’s monopoly—i.e., structural remedies—are inappropriate in this case.

Which, as Dayen points out, is literally a federal judge deciding to ignore the law “because reasons.”

Dayen says that he doesn’t see why Google would even bother appealing this ruling, “since it won on almost every point.” But the DoJ could appeal. If MAGA’s promises about holding Big Tech to account mean anything at all, the DoJ would appeal.

I’ll bet you a testicle⹋ that the DoJ will not appeal. After all, Trump’s DoJ now has a cash register at the reception desk, and if you write a check for a million bucks to some random MAGA influencer, they can make all charges disappear:

https://pluralistic.net/2025/09/02/act-locally/#local-hero

⹋again, not one of mine

And if you’re waiting for Europe to jump in and act where America won’t, don’t hold your breath. EU Commission sources leaked to Reuters that the EU is going to drop its multi-billion euro fine against Google because they don’t want to make Trump angry:

https://www.reuters.com/legal/litigation/google-adtech-fine-hold-eu-awaits-lower-us-car-duties-sources-say-2025-09-02/

Sundar Pichai gave $1m to Donald Trump and got a seat on the dais at the inauguration. Trump just paid him back, 40,000 times over. Trump is a sadist, a fascist, and a rapist – and he’s also a remarkably cheap date.

Source: Pluralistic: The worst possible antitrust outcome (03 Sep 2025) – Pluralistic: Daily links from Cory Doctorow

Top German court says maybe the Web should be more like television in order to protect copyright and intrusive business models

Back in 2022, Walled Culture wrote about a legal case involving ad blockers. These are hugely popular programs: according to recent statistics, around one billion people use ad blockers when they are online. That’s a testament to the importance many people attach to being in control of their browser experience, and to a wide dislike of the ads they are forced to view. The 2022 case concerned a long-running attempt by the German media publishing giant Axel Springer to sue Eyeo, the makers of the widely-used AdBlock Plus program. Springer was trying to force people to view the ads on its sites, whether they are wanted or not, and argued that ad blocking programs were illegal. Springer lost every one of its many court cases trying to establish this, but refused to give up on its quixotic quest. It appealed to the German Federal Supreme Court, which has unfortunately sent the case back to the lower court. As a post on the Mozilla blog explains:

The BGH (as the Federal Supreme Court is known) called for a new hearing so that the Hamburg court can provide more detail regarding which part of the website (such as bytecode or object code) is altered by ad blockers, whether this code is protected by copyright, and under what conditions the interference might be justified.

The full impact of this latest development is still unclear. The BGH will issue a more detailed written ruling explaining its decision. Meanwhile, the case has now returned to the lower court for additional fact-finding. It could be a couple more years until we have a clear answer.

Springer’s argument was that a Web page is actually a kind of program, and as such was protected by copyright. An ad blocker installed in a browser, Springer maintained, infringed on its copyright by modifying that Web page program without permission. This is a novel way of looking at browsers and the Web pages they display. For the last 35 years, Web pages have been regarded as an arrangement of raw data in the form of text, images, sounds etc. The Web browser is a specialised program for displaying that data in various formats, controlled by the user. Springer is asserting something far reaching: that a Web page is itself a program that must be run “as is”, and not modified by a Web browser and its add-ons without the explicit permission of the page’s copyright holder.
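
To ground that description: the page arrives as markup data, and the client decides how (or whether) to render each part of it. The toy sketch below – in plain Python rather than a real browser extension, and far simpler than any actual ad blocker – filters received HTML before “display” by dropping any element whose class list contains “ad”, which is conceptually what the contested extensions do inside the browser. It assumes well-formed, explicitly closed tags.

```python
from html.parser import HTMLParser

class AdStrippingParser(HTMLParser):
    """Re-emit HTML, skipping any element whose class list contains 'ad'."""
    def __init__(self):
        super().__init__()
        self.out = []
        self.skip_depth = 0  # > 0 while inside a filtered element

    def handle_starttag(self, tag, attrs):
        classes = (dict(attrs).get("class") or "").split()
        if self.skip_depth or "ad" in classes:
            self.skip_depth += 1  # swallow this element and its children
            return
        self.out.append(self.get_starttag_text())

    def handle_endtag(self, tag):
        if self.skip_depth:
            self.skip_depth -= 1
            return
        self.out.append(f"</{tag}>")

    def handle_data(self, data):
        if not self.skip_depth:
            self.out.append(data)

page = '<p>News</p><div class="ad banner">Buy stuff!</div><p>More news</p>'
parser = AdStrippingParser()
parser.feed(page)
print("".join(parser.out))  # -> <p>News</p><p>More news</p>
```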

As the Mozilla blog post points out, if the German courts ultimately adopt this position, the implications would be profound, because this would affect not just ad blockers. There are many other reasons why people use tools like browser extensions to modify Web pages before they are displayed:

These include changes to improve accessibility, to evaluate accessibility, or to protect privacy. Indeed, the risks of browsing range from phishing, to malicious code execution, to invasive tracking, to fingerprinting, to more mundane harms like inefficient website elements that waste processing resources. Users should be equipped with browsers and browser extensions that give them both protection and choice in the face of these risks. A browser that inflexibly ran any code served to the user would be an extraordinarily dangerous piece of software. [Emphasis in original]

Springer’s argument is an attack on the very concept of what a Web browser does. The German publisher wants the browser and extensions to be under the Web page author’s control, with the browser user reduced to a passive viewer. It effectively turns the Web into a form of television, with Web page “broadcasts” that can’t be modified in any significant ways. Mozilla rightly warns:

Such a precedent could embolden legal challenges against other extensions that protect privacy, enhance accessibility, or improve security. Over time, this could deter innovation in these areas, pressure browser vendors to limit extension functionality, and shift the internet away from its open, user-driven nature toward one with reduced flexibility, innovation, and control for users.

In the wider context of copyright, there are two aspects worth noting. One is that Springer is using copyright not to protect creativity, but to enforce its business model – online advertising – after losing multiple court cases that it had brought based on competition law. The other point is that Springer’s argument is only possible because copyright was extended to computer programs some years ago. That was not an inevitable decision, since it could be argued that computer code lacks the human, expressive nature of texts, images or music. It’s true that different coders have different styles that may be visible in their output, but those differences are hardly on the same level as a Shakespeare sonnet, a self-portrait by Rembrandt, or a Beethoven string quartet. To afford them the same protection was a mistake, and a product of the copyright industry’s successful campaign to expand this powerful intellectual monopoly protection to more fields, however inappropriately.

In the present case it can be seen how dangerous this mindless maximalist approach is. If the lower German court accepts Springer’s argument, after it has carried out its fact finding, it would chill real Internet innovation for the sake of protecting a deeply-flawed and failing business model that has nothing to do with life-enhancing creativity, but is all about eliminating choice and agency. Although such a result would only apply in Germany, and would in any case be hard to enforce, the EU legal system and the global nature of the Web means it could have wider knock-on effects. Let’s hope it doesn’t come to that.

Source: Top German court says maybe the Web should be more like television in order to protect copyright – Walled Culture

ReMarkable Paper Pro Move review: e-ink notepad gets nice and small (7.8″)

Since I fell in love with the ReMarkable 2 while reviewing it in 2020, I’ve had one wish for the Norwegian whizz kids behind this state-of-the-art e-ink tablet: Make one like this but smaller, please.

Why? Because while it’s nice to write on a super-slim, silver, LED-free “magic legal pad from the future,” as I still call it, there are times when the form factor of a legal pad feels like too much. Use one on a plane tray table, for example, and you might feel exposed to the prying eyes of seatmates. Then there’s the portability factor: An e-ink notebook/sketchbook you can just slip into your pocket like a smartphone, rather than tote it around in a laptop bag, seems like a no-brainer.

[…]

With the launch of the Paper Pro Move ($449 with regular Marker stylus, $499 with Marker Plus, available for order now on Remarkable.com), we have a 7.8-inch notebook screen that’s satisfyingly small and portable. Amazingly, ReMarkable has done this while retaining all the Paper Pro’s color e-ink functionality — and the aspect ratio of its pages.

I’ve been using the Move for two weeks, and I very much like what I’m seeing. Because here’s the ingenious part of the Move’s design: ReMarkable didn’t opt for the form factor of a regular old Kindle (or a medium Moleskine, to put it in paper notebook terms). Instead the company drew inspiration from something so obvious, this reporter has smacked his head that he didn’t think of it: the classic reporter’s notebook.

[…]

It’s not just that reporter’s notebooks are longer and thinner, all the better to take fast notes while on your feet at a press conference. It’s not just that a thinner device is easier to stuff in your pocket (some pockets, to be fair, are too small to fully contain the Move). It’s also what a longer, thinner design means in the context of ReMarkable world.

[…]

In portrait mode, the Paper Pro Move automatically fits the page to the screen. (It also pins the menu bar to the top of the page, which makes more sense than left or right.) If you go back and forth between portrait and landscape mode, you’ll probably be able to tell which mode any particular notes were written in; the words might look too small or too large in the other mode.

Using the ReMarkable Paper Pro Move in the wild. Credit: Chris Taylor / Mashable

Personally, I’ve really enjoyed writing in tight, tiny lines in portrait mode, as if I’m trying to save paper, and quite enjoy how that looks in regular (landscape orientation) size. But your writing mileage may vary. And if you’re using ReMarkable’s highly effective handwriting-to-text conversion feature, the size of your scrawl may not matter at all.

Your battery mileage will vary too. In my enthusiastic testing, the battery life came nowhere close to ReMarkable’s claim that it can last a full two weeks. To be fair, this is going to depend largely on how much you use the e-ink backlight (which also seems improved, and more evenly distributed around the screen, than in the Paper Pro). If you’re not going to use the backlight at all, two weeks of battery life seems a reasonable expectation.

ReMarkable Paper Pro Move comes with caveats

The form factor of a reporter’s notebook isn’t great for everything you can do on an e-ink screen. Many PDFs and EPUB files will look a tad too small in Portrait Mode, so you either have to flip the screen and scroll a lot, or mess around with pinch and zoom. That, unfortunately, is not helped by the one thing that still feels buggy about e-ink screens: if you’re moving through or around pages too fast, they can’t always keep up. A slow refresh rate can have you scrolling through pages faster than you intend.

If you’re used to LED-screen smartphones rather than Kindles, say, this may be an exercise in frustration. Also frustrating is the color refresh problem that carries over from the Paper Pro: Any color you use that isn’t black has to flash on and off. But if you’re new to ReMarkable world, and to writing with e-ink, you’re going to be pleasantly surprised at how fast and natural writing itself (in regular black on white) feels.

You’ll have to decide whether to go naked without the Folio covers, which cost extra, or spend up to $100 more to protect your screen from whatever scratch-creating objects might be in your pocket or bag.

[…]

There’s one final caveat on cost. If you want more than your 50 most recent documents to sync to other devices (including the ReMarkable desktop, iOS and web app readers), you’ll need the ReMarkable Connect service. This is free for the first 100 days, and costs $2.99 a month or $29 a year thereafter.

Conclusion: This notebook is magic

Ultimately, the proof is in the writing. And I have been writing, in more places than ever: On planes, on trains, in automobiles (I don’t recommend the latter if you get carsick easily, but the desire was there). I’ve written in bed while disturbing my partner less. I’ve pulled it out of my pocket in waiting rooms; I’ve jotted notes on it while friends I was having coffee with were busy typing “just one quick email” on their smartphones.

The best notebook or writing tablet, to paraphrase a common saying about cameras, is the one you have with you. And the ReMarkable Paper Pro Move is a notebook you’re going to want to have with you, for the sharpness of the result as well as the portability factor. If you’ve got room in your pockets for a second gadget to tote everywhere like you tote your smartphone, and if you’re prepared to leave your wallet a little roomier, then this may be the Move.

Source: ReMarkable Paper Pro Move review: e-ink gets nice and small | Mashable

Now let’s hope it’s a little more durable than the ReMarkable 2, which busted its USB port and power button and cracked its screen when I hauled it around for two 2-week holidays.

Judge who ruled Google is a monopoly says no need for punishment.

So the judge says that because things have changed in the search space (AI / GPT searching), that changes the advertising space (which the GPTs don’t really do much of – yet), which is what the case was about. The anti-competitive facts of the case predate the GPTs and are not relevant to them, but somehow all of that doesn’t matter, so Google doesn’t really have to change much.

Champagne will be flowing at Google HQ after US District Judge Amit Mehta decided to do very little to rein in the monopolistic web giant.

In his 230-page ruling, Mehta, who last August ruled that Google broke US competition law, decided the search behemoth will not have to divest its Chrome browser or Android operating system, and can continue to pay billions to the likes of Apple to secure a prominent place for its search engine.

“Google will not be required to divest Chrome; nor will the court include a contingent divestiture of the Android operating system in the final judgment,” he ruled. “Plaintiffs overreached in seeking forced divestiture of these key assets, which Google did not use to effect any illegal restraints.”

That decision will disappoint the US Department of Justice, because Mehta rejected the remedies it called for.

The only government proposal Mehta accepted was that Google must share access to user-side data, albeit only to “qualified competitors.” While this includes things like a search index and user-interaction data, it doesn’t have to hand over specific advertising data.

“If you think of ingredients as data, like users’ search index, recipes are what they do with that data and how they use that data to make search results more relevant,” Adam Kovacevich, CEO of technology non-profit Chamber of Progress and a former Googler, told The Register.

“What you had is Google’s rivals arguing that Google had to share its recipes’ secret sauce. And the judge rejected that. He said: ‘You only have to share their ingredient list, effectively their search and search index.'”

The ruling also includes a requirement for Google to stop entering into exclusive deals that make the search giant the default search engine on mobile devices. It also requires Google to submit to six years of regulatory oversight by a technical committee that will monitor it to ensure it’s not backsliding.


The DoJ is likely to appeal but had no comment at the time of publication. However, the ruling has infuriated antitrust groups.

“You don’t find someone guilty of robbing a bank and then sentence him to writing a thank you note for the loot,” said Nidhi Hegde, executive director of the non-profit American Economic Liberties Project.

“Similarly, you don’t find Google liable for monopolization and then write a remedy that lets it protect its monopoly. This feckless remedy to the most storied case of monopolization of the past quarter century is a complete failure of his duty and must be appealed.”

Yet another thing AI has ruined

So what was it that caused the judge – who said barely a year ago that the ad slinger was an “overbearing illegal monopoly” – to do so little to change the status quo?

Mehta found that AI has changed the competitive landscape Google faces since the DoJ first brought its case in October 2020.

“The emergence of GenAI changed the course of this case,” he wrote. “No witness at the liability trial testified that GenAI products posed a near-term threat to general search engines (GSE).

“The very first witness at the remedies hearing, by contrast, placed GenAI front and center as a nascent competitive threat. These remedies proceedings thus have been as much about promoting competition among GSEs as ensuring that Google’s dominance in search does not carry over into the GenAI space.”

Mehta argued that over the past year he has sought out multiple sources of testimony to discuss AI and the issues that surround it, and is therefore cognizant of the issues it creates. But the original case was about Google’s existing advertising practices. The judge claims he addressed that matter.

Google clearly agrees with Mehta when it comes to AI changing the antitrust situation. In a statement, it welcomed the ruling and said it will continue to dispute his initial finding that it is an illegal monopoly.

“Today’s decision recognizes how much the industry has changed through the advent of AI, which is giving people so many more ways to find information,” Google said in a canned statement. “This underlines what we’ve been saying since this case was filed in 2020: Competition is intense and people can easily choose the services they want.”

Google and Mehta do have a point. The Chamber of Progress’s Kovacevich – who attended many of the hearings – pointed out that when the case was heard generative AI was very new, and the AI search market was still in its infancy. In the nearly five years since, much has changed.

“Anybody who has been paying attention to technology in the last two years would say that generative AI does pose a competitive challenge to traditional search engines,” he opined.


“So I think what the judge was grappling with was this reality that it changes the game, and it changed the game since Google was found liable in the first phase of the trial. So I thought it was great that he was acknowledging that, and spent so many pages [of the ruling] just talking about how much that poses a competitive challenge to traditional search engines.”

And the billions will keep flowing

Google’s stock price shot up by eight percent in after-hours trading and Apple’s jumped 2.5 percent, suggesting investors like this ruling.

That sentiment may stem from the fact that during the trial it emerged that in 2021 Google paid more than $26 billion to other companies to make sure that it was the default search engine on their platforms. Apple raked in $18-20 billion in 2020 alone, around a quarter of its profit in that year [PDF]. Google wouldn’t spend that sort of money unless it paid off, so its shareholders may be pleased that a big source of revenue remains viable.

Mozilla is another beneficiary of Google’s largesse. While the amount it gets is trivial in comparison to Cook & Co, thought to be around $400 million, the foundation has very few other sources of revenue. Earlier this year Mozilla’s CFO warned that cutting the Google subsidy would “potentially start a downward spiral of usage as people defected from our browser, which … could at the end of the day put Firefox out of business,” the judge notes.

At the time of publication, Apple and Mozilla had no comment.

Mehta noted that the loss of such payments would be “crippling” and cause “downstream harms to distribution partners, related markets, and consumers, which counsels against a broad payment ban.”

So what will change for consumers? In effect, almost nothing. Google will carry on as before, and the case will drag on for years.

“Users will be in much the same position as before,” Mitch Stoltz, litigation director for the EFF told The Register.

“The lack of any restructuring of Google, or even a ban on the massive revenue sharing payments to Apple and others for default search placement that were at the heart of the government’s case, mean that Google’s incentives won’t change, and the data-sharing remedies may be undermined.” ®

Source: Judge who ruled Google is a monopoly orders modest remedies • The Register

 

Switzerland launches its own open-source AI model

There’s a new player in the AI race, and it’s a whole country. Switzerland has just released Apertus, its open-source national Large Language Model (LLM) that it hopes will be an alternative to models offered by companies like OpenAI. Apertus, Latin for the word “open,” was developed by the Swiss Federal Institute of Technology in Lausanne (EPFL), ETH Zurich and the Swiss National Supercomputing Centre (CSCS), all of which are public institutions.

“Currently, Apertus is the leading public AI model: a model built by public institutions, for the public interest. It is our best proof yet that AI can be a form of public infrastructure like highways, water, or electricity,” said Joshua Tan, a leading proponent in making AI a public infrastructure.

The Swiss institutions designed Apertus to be completely open, allowing users to inspect any part of its training process. In addition to the model itself, they released comprehensive documentation and source code of its training process, as well as the datasets they used. They built Apertus to comply with Swiss data protection and copyright laws, which makes it perhaps one of the better choices for companies that want to adhere to European regulations. The Swiss Bankers Association previously said that a homegrown LLM would have “great long-term potential,” since it will be able to better comply with Switzerland’s strict local data protection and bank secrecy rules. At the moment, Swiss banks are already using other AI models for their needs, so it remains to be seen whether they’ll switch to Apertus.

Anybody can use the new model: Researchers, hobbyists and even companies are welcome to build upon it and to tailor it for their needs. They can use it to create chatbots, translators and even educational or training tools, for instance. Apertus was trained on 15 trillion tokens across more than 1,000 languages, with 40 percent of the data in languages other than English, including Swiss German and Romansh. Switzerland’s announcement says the model was only trained on publicly available data, and its crawlers respected machine-readable opt-out requests when they came across them on websites. Notably, AI companies like Perplexity have previously been accused of scraping websites and bypassing protocols meant to block their crawlers. Some AI companies have also been sued by news organizations and creatives for using their content to train their models without permission.
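The announcement doesn’t spell out exactly which opt-out signals the crawlers honoured, but the most common machine-readable one is a site’s robots.txt file. As a rough illustration only (the user agent and URLs below are placeholders, not anything from the Apertus project), this is how a crawler can check that signal before fetching a page:

```python
# Minimal sketch of honouring robots.txt, a common machine-readable opt-out.
# The user agent and URLs are hypothetical placeholders.
from urllib import robotparser

USER_AGENT = "example-research-crawler"            # placeholder, not Apertus's crawler
page_url = "https://example.com/some/article.html" # placeholder page

rp = robotparser.RobotFileParser()
rp.set_url("https://example.com/robots.txt")
rp.read()  # fetch and parse the site's robots.txt

if rp.can_fetch(USER_AGENT, page_url):
    print("Allowed: this page could go into the training corpus")
else:
    print("Disallowed: the site has opted out, so the page is skipped")
```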

Apertus is available in two sizes, with 8 billion and 70 billion parameters. You can access it via Swisscom, a Swiss information and communication technology company, or via Hugging Face.
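For anyone who wants to experiment, the smaller variant should work with the standard Hugging Face transformers workflow. A minimal sketch, assuming a repository under the swiss-ai organisation (the exact model ID below is an assumption, so check Hugging Face before running):

```python
# Hedged sketch: loading an Apertus model with the Hugging Face transformers library.
# The repository name is assumed -- verify the real model ID on huggingface.co.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "swiss-ai/Apertus-8B"  # assumed repo name

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto")

prompt = "Wie heisst die Hauptstadt der Schweiz?"  # any prompt will do
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```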

 

https://www.swiss-ai.org/apertus

Source: Switzerland launches its own open-source AI model

 

 

Study finds cannabis improves sleep where other drugs fail

Insomnia patients taking cannabis-based medical products reported better quality sleep after up to 18 months of treatment, according to a study published August 27 in the open-access journal PLOS Mental Health by Arushika Aggarwal from Imperial College London, U.K., and colleagues.

About one out of every three people has some trouble getting a good night’s rest, and 10 percent of adults meet the criteria for an insomnia disorder. But current treatments can be difficult to obtain, and the drugs approved for insomnia run the risk of dependence. To understand how cannabis-based medical products might affect insomnia symptoms, the authors of this study analyzed a set of 124 insomnia patients taking medical cannabis products. They examined the patients’ reports of their sleep quality, anxiety/depression, and quality-of-life changes between one and 18 months of treatment.

The patients reported improved sleep quality that lasted over the 18 months of treatment. They also showed significant improvements in anxiety/depression as well as reporting less pain. About nine percent of the patients reported adverse effects such as fatigue, insomnia, or dry mouth, but none of the side effects were life-threatening. While randomized controlled trials will be needed to prove that the products are safe and effective, the authors suggest that cannabis-based medical products could improve sleep quality in insomnia patients.

[…]

He adds: “Conducting this long-term study provided valuable real-world evidence on patient outcomes that go beyond what we typically see in short-term trials. It was particularly interesting to observe signs of potential tolerance over time, which highlights the importance of continued monitoring and individualized treatment plans.”

Journal Reference:

  1. Arushika Aggarwal, Simon Erridge, Isaac Cowley, Lilia Evans, Madhur Varadpande, Evonne Clarke, Katy McLachlan, Ross Coomber, James J. Rucker, Mark W. Weatherall, Mikael H. Sodergren. UK Medical Cannabis Registry: A clinical outcomes analysis for insomnia. PLOS Mental Health, 2025; 2 (8): e0000390 DOI: 10.1371/journal.pmen.0000390

Cannabis-based medicinal products

Details of cannabis-based medicinal product treatment at baseline and the maximum titrated dose were available for all participants (n = 124) (Table 4). Administration routes were also available at baseline (n = 124), follow-up months 1, 3, 6, and 12 (n = 123) and 18-months (n = 124). The median daily CBD dose at baseline was 1.00 [0.00-20.00] mg/day and increased to 10.00 [0.00-25.00] mg/day by month 3, and this was sustained until 18-month follow-up (10.00 [5.00-35.75] mg/day). For THC, the median daily dose was 20.00 [2.00-20.00] mg/day at baseline, and by 18-month follow-up, increased to 120.00 [95.00-210.38] mg/day. The most prescribed regimen at baseline (n = 51; 41.13%) and throughout every follow-up month until month 18 (n = 54; 43.55%) was dried flower only.

Table 4. Data on prescribed cannabis-based medicinal products recorded for participants (n = 124).

https://doi.org/10.1371/journal.pmen.0000390.t004

Source: Study finds cannabis improves sleep where other drugs fail | ScienceDaily

Stolen Salesforce Drift OAuth tokens expose Palo Alto customer data

Palo Alto Networks is writing to customers that may have had commercially sensitive data exposed after criminals used stolen OAuth credentials lifted from the Salesloft Drift break-in to gain entry to its Salesforce instance.

Marc Benoit, chief information security officer at PAN, confirmed in a note to clients – seen by The Register – that it was informed on August 25 that the “compromise of a third-party application, Salesloft’s Drift, resulted in the access and exfiltration of data stored in our Salesforce environment.”

It immediately disconnected the third-party application from its Salesforce CRM, he said. “The investigation [by the Unit 42 team] confirms that the event was isolated to our Salesforce environment and did not affect any Palo Alto Networks products, systems or services.”

Benoit said it “further confirmed that the data involved includes primarily customer business contact information, such as names and contact info, company attributes, and basic customer support case information. It is important to note that no tech support files or attachments to any customer support cases were part of the exfiltration.”

[…]

The breach of the Drift application has led to supply chain attacks at “hundreds” of organizations, including PAN, said Benoit in a blog post. He said the “incident” was “isolated to our CRM platform.”

Google said last week that it didn’t have enough evidence to confirm that the recent spate of Salesforce data thefts claimed by ShinyHunters – against Google itself, Workday, Allianz, Qantas and LVMH brand Dior – was connected to the same group that masterminded the Salesloft attack.

The Unit 42 team at PAN advised organizations to monitor Salesforce and Salesloft updates, and take steps such as token revocation to secure platforms. It recommends conducting a review of all Drift integrations and all authentication activity with third-party systems for evidence of “suspicious connections, credential harvesting and data exfiltration.”

Unit 42 also recommends that you probe your Salesforce log-in history, audit trail, and API access logs from August 8 – when Salesloft says attackers first used “OAuth credentials to exfiltrate data from our customers’ Salesforce instances” – to the present day. It also advises combing over Identity Provider Logs and Network Logs. ®
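As a very rough sketch of the kind of review Unit 42 describes – not their tooling, and with placeholder credentials, date window, and app allow-list – Salesforce’s LoginHistory object can be queried for OAuth-style logins from unexpected applications:

```python
# Hedged sketch: pull Salesforce login history since 8 August 2025 and flag
# OAuth logins from unfamiliar connected apps for manual review.
# Credentials, the date, and KNOWN_APPS are placeholders, not real values.
from simple_salesforce import Salesforce

sf = Salesforce(
    username="admin@example.com",   # placeholder credentials
    password="CHANGE_ME",
    security_token="CHANGE_ME",
)

KNOWN_APPS = {"Salesforce for iOS", "Browser"}  # hypothetical allow-list

records = sf.query_all(
    "SELECT UserId, LoginTime, SourceIp, Application, LoginType, Status "
    "FROM LoginHistory "
    "WHERE LoginTime >= 2025-08-08T00:00:00Z "
    "ORDER BY LoginTime"
)["records"]

for rec in records:
    # "Remote Access 2.0" is the login type Salesforce generally reports for
    # OAuth connected apps (treat that mapping as an assumption to verify).
    if rec["LoginType"] == "Remote Access 2.0" and rec["Application"] not in KNOWN_APPS:
        print(rec["LoginTime"], rec["SourceIp"], rec["Application"], rec["Status"])
```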

Source: Stolen OAuth tokens expose Palo Alto customer data • The Register

So Spotify Public Links Now Show Your Personal Information. You Need to Disable Spotify DMs To Get Rid Of It.

Spotify wants to be yet another messaging platform, but its new DM system has a quirk that makes me hesitant to recommend it. Spotify used to be a non-identity based platform, but things changed once it added messaging. Now, the Spotify DM system is attaching account information to song links and putting it in front of users’ eyes. That means it can accidentally leak the name and profile picture of whoever shared a link, even if they didn’t intend to give out their account information, too. Thankfully there’s a way to make links more private, and to disable Spotify DMs altogether.

How Spotify is accidentally leaking users’ information

It all starts with tracking URLs. Many major companies on the web use these. They embed information at the end of a URL to track where clicks on it came from. Which website, which page, or in Spotify’s case, which user. If you’ve generated a Share link for a song or playlist in the past, it contained your user identity string at the end. And when someone accessed and acted on that link, by adding the song or playing it, your account information was saved in their account’s identity as a connection of sorts. Maybe a little invasive, but because users couldn’t do much with that information, it was mostly just a way for Spotify to track how often people were sharing music between each other.

Before, this happened in the background and no one really cared. But with the new Spotify DM feature, connections made via tracking links are suddenly being put front and center right before users’ eyes. As spotted by Reddit user u/sporoni122, these connections are now showing up in a “Suggested” section when using Spotify DMs, even if you just happened to click on a public link once and never heard of the person who shared it. Alternatively, you might have shared a link in the past, and could be shown account information for people who clicked on it.

Even if an account is public, I could see how this would be annoying. Imagine you share a song in a Discord server where you go by an anonymous name, but someone clicks on it and finds your Spotify account, where you might go by your real name. Bam, they suddenly know who you are.

Reddit user u/Reeceeboii added that Spotify is using this URL tracking behavior to populate a list of songs and playlists shared between two users even if they happened via third-party messaging services like WhatsApp.

So, if you don’t want others to find your Spotify account through your shared songs, what do you do? Well, before posting in anonymous communities like Discord or X, try cleaning up your links first.

My colleagues and I have previously written about how you can remove tracking information from a URL automatically on iPhone, how you can use a Mac app to clean links without any effort, or how you can use an all-in one extension to get the job done regardless of platform. You can also use a website like Link Cleaner to clean up your links.

Or you can take the manual approach. In your Spotify link, remove everything at the end starting with the question mark.


So this tracked link:

https://open.spotify.com/playlist/74BUi79BzFKW7IVJBShrFD?si=28575ba800324

Becomes this clean link:

https://open.spotify.com/playlist/74BUi79BzFKW7IVJBShrFD

Here, the part with “si=” is your identifier. Of course, if it’s a playlist you’re sharing, it will still show your name and your profile picture—that’s how the platform has always worked. So if you want to stay truly anonymous, you’ll want to keep your playlists private.
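If you share links often, that cleanup is easy to script. A minimal sketch that simply drops everything from the question mark onwards, exactly as described above:

```python
# Strip the tracking query string (e.g. the "si" identifier) from a Spotify
# share link before posting it somewhere public.
from urllib.parse import urlsplit, urlunsplit

def clean_spotify_link(url: str) -> str:
    """Return the URL with its query string and fragment removed."""
    parts = urlsplit(url)
    return urlunsplit((parts.scheme, parts.netloc, parts.path, "", ""))

print(clean_spotify_link(
    "https://open.spotify.com/playlist/74BUi79BzFKW7IVJBShrFD?si=28575ba800324"
))
# -> https://open.spotify.com/playlist/74BUi79BzFKW7IVJBShrFD
```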

How to disable Spotify DMs

If you don’t see yourself using Spotify DMs, it might also be a good idea to just get rid of them entirely. You’ll probably still want to remove tracking information from your URLs before sharing, just for due diligence. But if you don’t want to worry about getting DMs on Spotify or having your account show up as a Suggested contact to strangers, you should also go to Settings > Privacy and social > Social features and disable Messages. That’ll opt you out of the DM feature altogether.


Source: If You’ve Ever Shared a Spotify Link Publicly, You Need to Disable Spotify DMs

Phonenstien Flips Broken Samsung Into QWERTY Slider but won’t share how

The phone ecosystem these days is horribly boring compared to the innovation of a couple decades back. Your options include flat rectangles, and flat rectangles that fold in half and then break. [Marcin Plaza] wanted to think outside the slab, without reinventing the wheel. In an inspired bout of hacking, he flipped a broken Samsung zFlip 5 into a “new” phone.

There’s really nothing new in it; the guts all come from the donor phone. That screen? It’s the front screen that was on the top half of the zFlip, as you might have guessed from the cameras. Normally that screen is only used for notifications, but with the Samsung’s fancy folding OLED dead as Disco, that needed to change. Luckily for [Marcin], Samsung has an app called Good Lock that already takes care of that. A little digging about in the menus is all it takes to get a launcher and apps on the small screen.

Because this is a modern phone, the whole thing is glued together, but that’s not important since [Marcin] is only keeping the screen and internals from the Samsung. The new case with its chunky four-bar linkage is a custom design fabbed out in CNC’d aluminum. (After a number of 3D Printed prototypes, of course. Rapid prototyping FTW!)

The bottom half of the slider contains a Blackberry Q10 keyboard, along with a battery and MagSafe connector. The Q10 keyboard is connected to a custom flex PCB with an Arduino Pro Micro that is moonlighting as a Human Input Device. Sure, that means the phone’s USB port is used by the keyboard, but this unit has wireless charging, so that’s not a great sacrifice. We particularly like the use of magnets to create a satisfying “snap” when the slider opens and closes.

Unfortunately, as much as we might love this concept, [Marcin] doesn’t feel the design is solid enough to share the files. While that’s disappointing, we can certainly relate to his desire to change it up in an era of endless flat rectangles.  This project is a lot more work than just turning a broken phone into a server, but it also seems like a lot more fun.

 

Source: Phonenstien Flips Broken Samsung Into QWERTY Slider | Hackaday

EU Google antitrust penalty halted by low level commissioner amid Trump’s tariff threats

Source: EU Google antitrust penalty halted amid Trump’s tariff threats – POLITICO

Age verification legislation is tanking traffic to sites that comply, and rewarding those that don’t

A new report suggests that the UK’s age verification measures may be having unforeseen knock-on effects on web traffic, with the real winners being sites that flout the law entirely.

[…]

Sure, there are ways around this if you’d rather not feed your personal data to a platform’s third-party age verification vendor. However, sites are seeing more significant consequences beyond just locking you out of your DMs. For a start, The Washington Post reports that web traffic to pornography sites implementing age verification has taken a totally predictable hit—but those flouting the new age check requirements have seen traffic as much as triple compared to the same time last year.

The Washington Post looked at the 90 most visited porn sites based on UK visitor data from Similarweb. Of the 90 total sites, 14 hadn’t yet deployed ‘scan your face’ age checks. The publication found that while traffic from British IP addresses to sites requiring age verification had cratered, the 14 sites without age checks “have been rewarded with a flood of traffic” from UK-based users.

It’s worth noting that VPN usage might distort the location data of users. Still, such a surge of traffic likely brings with it a surge in income in the form of ad revenue. Ofcom, the UK’s government-approved regulatory communications office overseeing everything from TV to the internet, may have something to say about that, though. Meanwhile, sites that comply with the rules are not only losing out on ad revenue, but are also expected to pay for the legally required age verification services on top.

[…]

Alright, stop snickering about the mental image of someone perusing porn sites professionally, and let me tell you why this is important. You may have already read that while a lot of Brits support the age verification measures broadly speaking, a sizable portion feels they’ve been implemented poorly. Indeed, a lot of the aforementioned sites that complied with the law also criticised it by linking to a petition seeking its repeal. The UK government has responded to this petition by saying it has “no plans to repeal the Online Safety Act” despite, at time of writing, over 500,000 signatures urging it to do just that.

[…]

Source: Age verification legislation is tanking traffic to sites that comply, and rewarding those that don’t | PC Gamer

Of course age verification isn’t just hitting porn sites. It is also hitting LGBTQ+ sites, public health forums, conflict reporting, global journalism, and more.

And there is no way to do Age Verification privately.

Europol wants to keep all data forever for law enforcement, says unnamed(!) official. The European Court of Human Rights backed encryption as basic to privacy rights in 2024, and now Big Brother Chat Control is on the agenda again (EU consultation feedback link at end)

While some American officials continue to attack strong encryption as an enabler of child abuse and other crimes, a key European court has upheld it as fundamental to the basic right to privacy.

[…]


In the Russian case, the users relied on Telegram’s optional “secret chat” functions, which are also end-to-end encrypted. Telegram had refused to break into chats of a handful of users, telling a Moscow court that it would have to install a back door that would work against everyone. It lost in Russian courts but did not comply, leaving it subject to a ban that has yet to be enforced.

The European court backed the Russian users, finding that law enforcement having such blanket access “impairs the very essence of the right to respect for private life” and therefore would violate Article 8 of the European Convention, which enshrines the right to privacy except when it conflicts with laws established “in the interests of national security, public safety or the economic well-being of the country.”

The court praised end-to-end encryption generally, noting that it “appears to help citizens and businesses to defend themselves against abuses of information technologies, such as hacking, identity and personal data theft, fraud and the improper disclosure of confidential information.”

In addition to prior cases, the judges cited work by the U.N. human rights commissioner, who came out strongly against encryption bans in 2022, saying that “the impact of most encryption restrictions on the right to privacy and associated rights are disproportionate, often affecting not only the targeted individuals but the general population.”

High Commissioner Volker Türk said he welcomed the ruling, which he promoted during a recent visit to tech companies in Silicon Valley. Türk told The Washington Post that “encryption is a key enabler of privacy and security online and is essential for safeguarding rights, including the rights to freedom of opinion and expression, freedom of association and peaceful assembly, security, health and nondiscrimination.”

[…]

Even as the fight over encryption continues in Europe, police officials there have talked about overriding end-to-end encryption to collect evidence of crimes other than child sexual abuse — or any crime at all, according to an investigative report by the Balkan Investigative Reporting Network, a consortium of journalists in Southern and Eastern Europe.

“All data is useful and should be passed on to law enforcement, there should be no filtering … because even an innocent image might contain information that could at some point be useful to law enforcement,” an unnamed Europol police official said in 2022 meeting minutes released under a freedom of information request by the consortium.

Source: E.U. Court of Human Rights backs encryption as basic to privacy rights – The Washington Post

An ‘unnamed’ Europol police official is peak irony in this context.

Remember to leave your feedback where you can, in this case: https://ec.europa.eu/info/law/better-regulation/have-your-say/initiatives/14680-Impact-assessment-on-retention-of-data-by-service-providers-for-criminal-proceedings-/public-consultation_en

MS Azure mistake erroneously hikes costs 3x during internal migration, then tries to delete evidence from customer support portal

An alarmed Register reader got in touch after receiving warnings from Azure’s automated systems that they had significantly exceeded their budgets, and a glance at Microsoft’s support forums indicates their issue was not isolated.

The problem was that costs had suddenly ramped up. One user, with a budget threshold of £63 ($85), received an automated alert indicating that their spend was forecast to reach £758.71 ($1,027). Another said: “We’re actively seeing the same issue, costs have blown up by a crazy amount. No official notice or announcement from Microsoft either, it’s appalling.”

Suggestions from Microsoft that users should contact the support team did little to assuage concerns. A user (their caps) said: “AND I CANNOT CONTACT THE SUPPORT ANYHOW… Just automated ‘do this, do that’.”

According to messages seen by The Register, troubles appear to have stemmed from accounts being migrated from the Microsoft Online Subscription Program (MOSP) to the Microsoft Customer Agreement (MCA). The transition triggered incorrect cost calculations and, in some cases, resulted in retroactive charges affecting multiple customers.

Microsoft’s engineering team swung into action amid the cries of alarm, and a spokesperson told us: “We have addressed the underlying issue and impacted customers should now see the correct values in their portal.”

The Register understands that invoices and billing shouldn’t have been affected. However, that is likely of little comfort to administrators sent into a panic by an official alert from Microsoft warning that cloud forecasts were much higher than usual. We’d recommend keeping an eye on the portal and submitting a support request if the figures have gone awry.
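If you’d rather not rely on eyeballing the portal, daily actual-cost figures can also be pulled programmatically and compared against what you expect to spend. A hedged sketch using the Azure Cost Management query REST API – the subscription ID is a placeholder, and the api-version should be checked against current Azure documentation:

```python
# Hedged sketch, not an official Microsoft sample: query month-to-date actual
# cost per day for a subscription, so a sudden jump like the one the
# mis-migrated accounts saw shows up without opening the portal.
import requests
from azure.identity import DefaultAzureCredential

SUBSCRIPTION_ID = "00000000-0000-0000-0000-000000000000"  # placeholder

token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token
url = (
    f"https://management.azure.com/subscriptions/{SUBSCRIPTION_ID}"
    "/providers/Microsoft.CostManagement/query?api-version=2023-03-01"  # verify version
)
body = {
    "type": "ActualCost",
    "timeframe": "MonthToDate",
    "dataset": {
        "granularity": "Daily",
        "aggregation": {"totalCost": {"name": "Cost", "function": "Sum"}},
    },
}

resp = requests.post(url, json=body, headers={"Authorization": f"Bearer {token}"})
resp.raise_for_status()
result = resp.json()["properties"]

# Each row follows the column order reported in result["columns"].
for row in result["rows"]:
    print(dict(zip((c["name"] for c in result["columns"]), row)))
```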

One user reported that their comments in the support forum were being deleted. While Microsoft has a lengthy Code of Conduct, it wasn’t clear precisely what was causing comments to vanish. The user suggested that perhaps it was related to the words “customer” and “care.”

Source: Microsoft cloud customers hit by messed-up migration • The Register

Pluralistic: Darth Android – Altering Terms After the Fact



An Android robot standing atop a cracked mobile phone, wearing Darth Vader armor.

William Gibson famously said that “Cyberpunk was a warning, not a suggestion.” But for every tech leader fantasizing about lobotomizing their enemies with Black Ice, there are ten who wish they could be Darth Vader, force-choking you while grating out, “I’m altering the deal. Pray I don’t alter it any further.”

I call this business philosophy the “Darth Vader MBA.” The fact that tech products are permanently tethered to their manufacturers – by cloud connections backstopped by IP restrictions that stop you from disabling them – means that your devices can have features removed or altered on a corporate whim, and it’s literally a felony for you to restore the functionality you’ve had removed:

https://pluralistic.net/2023/10/26/hit-with-a-brick/#graceful-failure

That presents an irresistible temptation to tech bosses. It means that you can spy on your users, figure out which features they rely on most heavily, disable those features, and then charge money to restore them:

https://restofworld.org/2021/loans-that-hijack-your-phone-are-coming-to-india/

It means that you can decide to stop paying a supplier the license fee for a critical feature that your customers rely on, take that feature away, and stick your customers with a monthly charge, forever, to go on using the product they already paid for:

https://pluralistic.net/2022/10/28/fade-to-black/#trust-the-process

It means that you can push “security updates” to devices in the field that take away your customers’ ability to use third-party apps, so they’re forced to use your shitty, expensive apps:

https://www.404media.co/developer-unlocks-newly-enshittified-echelon-exercise-bikes-but-cant-legally-release-his-software/

Or you can take away third-party app support and force your customers to use your shitty app that’s crammed full of ads, so they have to look at an ad every time they want to open their garage-doors:

https://pluralistic.net/2023/11/09/lead-me-not-into-temptation/#chamberlain

Or you can break compatibility with generic consumables, like ink, and force your customers to buy the consumables you sell, at (literal) ten billion percent markups:

https://www.eff.org/deeplinks/2020/11/ink-stained-wretches-battle-soul-digital-freedom-taking-place-inside-your-printer

Combine the “agreements” we must click through after we hand over our money, wherein we “consent” to having the terms altered at any time, in any way, forever, and surrender our right to sue:

https://pluralistic.net/2025/08/15/dogs-breakfast/#by-clicking-this-you-agree-on-behalf-of-your-employer-to-release-me-from-all-obligations-and-waivers-arising-from-any-and-all-NON-NEGOTIATED-agreements

With the fact that billions of digital tools can be neutered at a distance with a single mouse-click:

https://pluralistic.net/2023/02/19/twiddler/

With the fact that IP law makes it a literal felony to undo these changes or add legal features to your own property that the manufacturer doesn’t want you to have:

https://pluralistic.net/2024/05/24/record-scratch/#autoenshittification

And you’ve created the conditions for a perfect Darth Vader MBA dystopia.

Tech bosses are fundamentally at war with the idea that our digital devices contain “general purpose computers.” The general-purposeness of computers – the fact that they are all Turing-complete, universal von Neumann machines – has created tech bosses’ fortunes, but now that these fortunes have been attained, the tech sector would like to abolish that general-purposeness; specifically, they would like to make it impossible to run programs that erode their profits or frustrate their attempts at rent-seeking.

This has been a growing trend in computing since the mid-2000s, when tech bosses realized that the “digital rights management” that the entertainment industry had fallen in love with could provide even bigger dividends for tech companies themselves.

Since the Napster era, media companies have demanded that tech platforms figure out how to limit the use and copying of media files after they were delivered to our computers. They believed that there was some practical way to make a computer that would refuse to take orders from its owner, such that you could (for example) “stream” a movie to a user without that being a “download.” The truth, of course, is that all streams are downloads, because the only way to cause my screen to display a video file that is on your server is for your server to send that file to my computer.

“Streaming” is a consensus hallucination, and when a company claims to be giving you a “stream” that’s not a “download,” they really mean that they believe that the program that’s rendering the file on your screen doesn’t have a “save as” button.

But of course, even if the program doesn’t have a “save as” button, someone could easily make a “save as” plugin that adds that functionality to your streaming program. So “streaming” isn’t just “a video playback program without a ‘save as’ button,” it’s also “a video playback program that no one can add a ‘save as’ button to.”
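The point is easy to demonstrate for an unencrypted stream: the bytes a player receives in order to render a file can just as easily be written to disk. A toy sketch with a placeholder URL (real services wrap their streams in DRM, whose force, as the argument below goes, is legal rather than technical):

```python
# Toy illustration: anything a client can render, it has already received as
# bytes, so "saving" is just writing those bytes to a file.
# The URL is a hypothetical placeholder, not a real streaming endpoint.
import requests

STREAM_URL = "https://example.com/video/stream.mp4"  # placeholder

with requests.get(STREAM_URL, stream=True) as resp:
    resp.raise_for_status()
    with open("saved_copy.mp4", "wb") as f:
        for chunk in resp.iter_content(chunk_size=1 << 16):
            f.write(chunk)  # the "save as" button the player didn't ship with
```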

At the turn of the millennium, tech companies selling this stuff hoodwinked media companies by claiming that they used technical means to prevent someone from adding the “save as” button after the fact. But tech companies knew that there was no technical means to prevent this, because computers are general purpose, and can run every program, which means that every 10-foot fence you build around a program immediately summons up an 11-foot ladder.

When a tech company says “it’s impossible to change the programs and devices we ship to our users,” they mean, “it’s illegal to change the programs and devices we ship to our users.” That’s thanks to a cluster of laws we colloquially call “IP law”; a label we apply to any law that lets a firm exert control on the conduct of users, critics and competitors:

https://locusmag.com/2020/09/cory-doctorow-ip/

Law, not technology, is the true battlefield in the War on General Purpose Computing, a subject I’ve been raising the alarm about for decades now:

https://memex.craphound.com/2012/01/10/lockdown-the-coming-war-on-general-purpose-computing/

When I say that this is a legal fight and not a technical one, I mean that, but for the legal restrictions on reverse-engineering and “adversarial interoperability,” none of these extractive tactics would be viable. Every time a company enshittified its products, it would create an opportunity for a rival to swoop in, disenshittify the enshittification, and steal your customers out from under you.

The fact that there’s no technical way to enforce these restrictions means that the companies that benefit from them have to pitch their arguments to lawmakers, not customers. If you have something that works, you use it in your sales pitch, like Signal, whose actual, working security is a big part of its appeal to users.

If you have something that doesn’t work, you use it in your lobbying pitch, like Apple, who justify their 30% ripoff app tax – which they can only charge because it’s a felony to reverse-engineer your iPhone so you can use a different app store – by telling lawmakers that locking down their platform is essential to the security and privacy of iPhone owners:

https://pluralistic.net/2024/01/12/youre-holding-it-wrong/#if-dishwashers-were-iphones

Apple and Google have a duopoly over mobile computing. Both companies use legal tactics to lock users into getting their apps from the companies’ own app stores, where they take 30 cents out of every dollar you spend, and where it’s against the rules to include any payment methods other than Google/Apple’s own payment systems.

This is a massive racket. It lets the companies extract hundreds of billions of dollars in rents. This drives up costs for their users and drives down profits for their suppliers. It lets the duopoly structure the entire mobile economy, acting as de facto market regulators. For example, the fact that Apple/Google exempt Uber and Lyft from the 30% app tax means that they – and they alone – can provide competitive ride-hailing services.

But though both companies extract the 30% app tax, they use very different mechanisms to maintain their lock on their users and on app makers. Apple uses digital locks, which lets it invoke IP law to criminalize anyone who reverse-engineers its systems and provides an easy way to install a better app store.

Google, on the other hand, uses a wide variety of contractual tactics to maintain its control, arm-twisting Android device makers and carriers into bundling its app store with every device, often with a locked bootloader that prevents users from adding new app stores after they pay for their devices.

But despite this, Google has always claimed that Android is the “open” alternative to the Apple “ecosystem,” principally on the strength that you can “sideload” an app. “Sideload” is a weird euphemism that the mobile duopoly came up with; it means “installing software without our permission,” which we used to just call “installing software” (because you don’t need a manufacturer’s permission to install software on your computer).

Now, Google has pulled a Darth Vader, changing the deal after the fact. They’ve announced that henceforth, you will only be able to sideload apps that come from developers who pay to be validated by Google and certified as good eggs. This has got people really angry, and justifiably so.

Last week, the repair hero Louis Rossmann posted a scorching video excoriating Google for the change:

https://www.youtube.com/watch?v=QBEKlIV_70E

In the video, Rossmann – who is now running an anti-enshittification group called Fulu – reminds us that our mobile devices aren’t phones, they’re computers and urges us not to use the term “sideloading,” because that’s conceding that there’s something about the fact that this computer can fit in your pocket that means that you shouldn’t be able to, you know, just install software.

Rossmann thinks that this is a cash grab, and he’s right – partially. He thinks that this is a way for Google to make money from forcing developers to join its certification program.

But that’s just small potatoes. The real cash grab is the hundreds of billions of dollars that Google stands to lose if we switch to third-party app stores and choke off the app tax.

That is an issue that is very much on Google’s mind right now, because Google lost a brutal antitrust case brought by Epic Games, makers of Fortnite:

https://pluralistic.net/2023/12/12/im-feeling-lucky/#hugger-mugger

Epic’s suit contended that Google had violated antitrust law by creating exclusivity deals with carriers and device makers that locked Android users into Google’s app store, which meant that Epic had to surrender 30% of its mobile earnings to Google.

Google lost that case – badly. It turns out that judges don’t like it when you deliberately destroy evidence:

https://www.legaldive.com/news/deleted-messages-google-antitrust-case-epic-games-deliberate-spoliation-donato/702306/

They say that when you find yourself in a hole, you should stop digging, but Google can’t put down the shovel. After the court ordered Google to open up its app store, the company just ignored the order, which is a thing that judges hate even more than destroying evidence:

https://www.justice.gov/atr/case/epic-games-inc-v-google-llc

So it was that last month, Google found itself with just two weeks to comply with the open app store order, or else:

https://www.theverge.com/news/717440/google-epic-open-play-store-emergency-stay

Google was ordered to make it possible to install new app stores as apps, so you could go into Google Play, search for a different app store, and, with a single click, install it on your phone, and switch to getting your apps from that store, rather than Google’s.

That’s what’s behind Google’s new ban on “sideloading”: this is a form of malicious compliance with the court orders stemming from its losses to Epic Games. In fact, it’s not even malicious compliance – it’s malicious noncompliance, a move that so obviously fails to satisfy the court order that I think it’s only a matter of time until Google gets hit with fines so large that they’ll actually affect Google’s operations.

In the meantime, Google’s story that this move is motivated by security is obviously bullshit. First of all, the argument that preventing users from installing software of their choosing is the only way to safeguard their privacy and security is bullshit when Apple uses it, and it’s bullshit when Google trots it out:

https://www.eff.org/document/letter-bruce-schneier-senate-judiciary-regarding-app-store-security

But even if you stipulate that Google is doing this to keep you safe, the story falls apart. After all, Google isn’t certifying apps, they’re certifying developers. This implies that the company can somehow predict whether a developer will do something malicious in the future.

This is obviously wrong. Indeed, Google itself is proof that this doesn’t work: the fact that a company has a “don’t be evil” motto at its outset is no guarantee that it won’t turn evil in the future.

There’s a long track record of merchants behaving in innocuous and beneficial ways to amass reputation capital, before blitzing the people who trust them with depraved criminality. This is a well-understood problem with reputation scores, dating back to the early days of eBay, when crooked sellers invented the tactic of listing and delivering a series of low-value items in order to amass a high reputation score, only to post a bunch of high-ticket scams, like dozens of laptops at $1,000 each, which are never delivered, even as the seller walks away with tens of thousands of dollars.

More recently, we’ve seen this in supply chain attacks on open source software, where malicious actors spend a long time serving as helpful contributors, pushing out a string of minor, high-quality patches before one day pushing a backdoor or a ransomware package into widely used code:

https://arstechnica.com/security/2025/07/open-source-repositories-are-seeing-a-rash-of-supply-chain-attacks/

So the idea that Google can improve Android’s safety by certifying developers, rather than code, is obvious bullshit. No, this is just a pretext, a way to avoid complying with the court order in Epic and milking a few more billions of dollars in app taxes.

Google is no friend of the general purpose computer. They keep coming up with ways to invoke the law to punish people who install code that makes their Android devices serve their owners’ interests, at the expense of Google’s shareholders. It was just a couple years ago that we had to bully Google out of a plan to lock down browsers so they’d be as enshittified as apps, something Google sold as “feature parity”:

https://pluralistic.net/2023/08/02/self-incrimination/

Epic Games didn’t just sue Google, either. They also sued Apple – but Apple won, because it didn’t destroy evidence and make the judge angry at it. But Apple didn’t walk away unscathed – they were also ordered to loosen up control over their App Store, and they also failed to do so, with the effect that last spring, a federal judge threatened to imprison Apple executives:

https://pluralistic.net/2025/05/01/its-not-the-crime/#its-the-coverup

Neither Apple nor Google would exist without the modern miracle that is the general purpose computer. Both companies want to make sure no one else ever reaps the benefit of the Turing complete, universal von Neumann machine. Both companies are capable of coming up with endless narratives about how Turing completeness is incompatible with your privacy and security.

But it’s Google and Apple that stand in the way of our security and privacy. Though they may sometimes protect us against external threats, neither Google nor Apple will ever protect us from their own predatory instincts.

Source: Pluralistic: Darth Android (01 Sep 2025) – Pluralistic: Daily links from Cory Doctorow

The EU wants to know what you think about it keeping all your data for *cough* crime stuff.

The EU wants to save all your data, or as much of it as possible for as long as possible. In what reads as an insult to the victims of crime, they say they want to do this to fight crime. How do you feel about the EU being turned into a surveillance society? Make your voice heard via the link below.

Source: Data retention by service providers for criminal proceedings – impact assessment

Futurehome smart hub owners must pay new $117 subscription or lose access. Or use a different app (link at the bottom)

Smart home device maker Futurehome is forcing its customers’ hands by suddenly requiring a subscription for basic functionality of its products.

Launched in 2016, Futurehome’s Smarthub is marketed as a central hub for controlling Internet-connected devices in smart homes. For years, the Norwegian company sold its products, which also include smart thermostats, smart lighting, and smart fire and carbon monoxide alarms, for a one-time fee that included access to its companion app and cloud platform for control and automation. As of June 26, though, those core features require a 1,188 NOK (about $116.56) annual subscription fee, turning the smart home devices into dumb ones if users don’t pay up.

“You lose access to controlling devices, configuring; automations, modes, shortcuts, and energy services,” a company FAQ page says.

You also can’t get support from Futurehome without a subscription. “Most” paid features are inaccessible without one, too, according to the FAQ from Futurehome, which claims its products are in 38,000 households.

After June 26, customers had four weeks to continue using their devices as normal without a subscription. That grace period recently ended, and users now need a subscription for their smart devices to work properly.

[…]

The indebted company promised customers that the subscription fee would allow it to provide customers “better functionality, more security, and higher value in the solution you have already invested in,” reported Elektro247, a Norwegian news site covering the electrical industry, according to a Google-provided translation.

The problem is that customers expected a certain level of service and functionality when they bought Futurehome devices. And as of press time, Futurehome’s product pages don’t make the newfound subscription requirements apparent. Futurehome’s recent bankruptcy is also a reminder of the company’s instability, making further investments questionable.

[…]

Futurehome has fought efforts to crack its firmware, with CEO Øyvind Fries telling Norwegian consumer tech website Tek.no, per a Google translation, “It is regrettable that we now have to spend time and resources strengthening the security of a popular service rather than further developing functionality for the benefit of our customers.”

Futurehome’s move has become a common strategy among Internet of Things companies, including smart home hub maker Wink. These companies are still struggling to build sustainable businesses that work long-term without killing features or upcharging customers.

Source: Futurehome smart hub owners must pay new $117 subscription or lose access – Ars Technica

And you see this happening a lot with all kinds of companies. The thing is, these products are supposed to work without contacting a central server – the company selling you this is not supposed to be seeing or handling your data at all. It doesn’t need to, as everything lives in your home and the functionality doesn’t require huge compute power.

Fortunately, the Futurehome Home Assistant add-on (on GitHub) is a complete drop-in replacement for the official Futurehome app, with support for all device types compatible with the Futurehome hub (see its FAQ for more details) – which means you can keep operating the hardware you bought without the subscription.

TransUnion says hackers stole 4.4 million customers’ personal information (breached AGAIN!!!)

Credit reporting giant TransUnion has disclosed a data breach affecting more than 4.4 million customers’ personal information.

In a filing with Maine’s attorney general’s office on Thursday, TransUnion attributed the July 28 breach to unauthorized access of a third-party application storing customers’ personal data for its U.S. consumer support operations.

TransUnion claimed “no credit information was accessed,” but provided no immediate evidence for its claim. The data breach notice did not specify what specific types of personal data were stolen.

In a separate data breach disclosure filed later on Thursday with Texas’ attorney general’s office, TransUnion confirmed that the stolen personal information includes customers’ names, dates of birth, and Social Security numbers.

[…]

TransUnion is one of the largest credit reporting agencies in the United States, and stores the financial data of more than 260 million Americans. It’s the latest U.S. corporate giant to have been hacked in recent weeks following a wave of hacks targeting the insurance, retail, and transportation and airline industries.

[…]

Source: TransUnion says hackers stole 4.4 million customers’ personal information | TechCrunch

Well done, TransUnion. In 2023 a massive dump of its data turned up (which it acknowledged while insisting the breach wasn’t its doing), and in 2017 it got its customers to download malware (again conceding the incident happened while denying it was at fault). You would think that at some point it would learn, but the penalties are apparently too small to care about.

And considering it actually claims to verify personal identities and sells identity protection services – and who knows whether those “customers” even know that they are customers – the quantity and scale of these breaches is simply unacceptable. The company obviously cannot handle its task and should by now be broken up.