The Linkielist

Linking ideas with the world

Newly Granted Nintendo Patents An ‘Embarrassing Failure’ By The USPTO, Says Patent Attorney

As you will hopefully recall, that very strange patent lawsuit between Nintendo and PocketPair over the latter’s hit game, Palworld, is ongoing. At the heart of that case is a series of overly broad patents covering what are generally considered generic game mechanics, many of which have prior art predating their use in Nintendo’s Pokémon games. These include concepts like throwing a capture item at an NPC to collect a character, as well as riding and mounting/dismounting NPCs in an open-world setting. The result, even as the litigation is ongoing, has been PocketPair patching several of these game mechanics out of its game in order to protect itself. That it feels this is necessary because of these broad patents is unfortunate.

And, because of the failure of the USPTO to do its job, it seems things will only get worse. Nintendo was awarded two additional patents in just the past couple of weeks, and those patents are being called an “embarrassing failure” by patent attorney Kirk Sigmon.

The last 10 days have brought a string of patent wins for Nintendo. Yesterday, the company was granted US patent 12,409,387, a patent covering riding and flying systems similar to those Nintendo has been criticized for claiming in its Palworld lawsuit (via Gamesfray). Last week, however, Nintendo received a more troubling weapon in its legal arsenal: US patent 12,403,397, a patent on summoning and battling characters that the United States Patent and Trademark Office granted with alarmingly little resistance.

According to videogame patent lawyer Kirk Sigmon, the USPTO granting Nintendo these latest patents isn’t just a moment of questionable legal theory. It’s an indictment of American patent law.

[…]

Sigmon notes that both patents cover mechanics and concepts that ought to be obvious to anyone with a reasonable amount of skill in this industry, which should have made them ineligible for patenting. That standard of patent law only works, however, if the USPTO acts as a true interlocutor during the filing process. In both of these cases, though, the USPTO appears not to have been in the mood to do its job.

Sigmon notes that it is common for patent applications like this to show some amount of questioning or pushback from the examiner. In both of these cases, that seemed almost entirely absent from the process, especially for patent ‘397.

[…]

When the claims were ultimately allowed, the only reasoning the USPTO offered was a block quote of text from the claims themselves.

The ‘397 patent granted last week is even more striking. It’s a patent on summoning and battling with “sub-characters,” using specific language suggesting it’s based on the Let’s Go! mechanics in the Pokémon Scarlet and Violet games. Despite its relevance to a conceit in countless games—calling characters to battle enemies for you—it was allowed without any pushback whatsoever from the USPTO, which Sigmon said is essentially unheard of.

“Like the above case, the reasons for allowance don’t give us even a hint of why it was allowed: the Examiner just paraphrases the claims (after block quoting them) without explaining why the claims are allowed over the prior art,” Sigmon said. “This is extremely unusual and raises a large number of red flags.”

[…]

With the Palworld example fresh in our minds, we certainly do know what the granting of patents like this will result in: more patent bullying by Nintendo.

“Pragmatically speaking, though, it’s not impossible to be sued for patent infringement even when a claim infringement argument is weak, and bad patents like this cast a massive shadow on the industry,” Sigmon said.

For a company at Nintendo’s scale, the claims of the ‘397 patent don’t need to make for a strong argument that would hold up in court. The threat of a lawsuit can stifle competition well enough on its own when it would cost millions of dollars to defend against.

And in the current environment, where challenging bad patents has become essentially pointless, you can bet we’ll see Nintendo wielding these patents against competitors in the near future.

Source: Newly Granted Nintendo Patents An ‘Embarrassing Failure’ By The USPTO, Says Patent Attorney | Techdirt

Swiss government may disable privacy tech, stoking fears of mass surveillance

The Swiss government could soon require service providers with more than 5,000 users to collect government-issued identification, retain subscriber data for six months and, in many cases, disable encryption.

The proposal, which is not subject to parliamentary approval, has alarmed privacy and digital-freedoms advocates worldwide because it would destroy anonymity online, including for people located outside of Switzerland.

A large number of virtual private network (VPN) companies and other privacy-preserving firms are headquartered in the country because it has historically had liberal digital privacy laws alongside its famously discreet banking ecosystem.

Proton, which offers secure and end-to-end encrypted email along with an ultra-private VPN and cloud storage, announced on July 23 that it is moving most of its physical infrastructure out of Switzerland due to the proposed law.

The company is investing more than €100 million in the European Union, the announcement said, and plans to help develop a “sovereign EuroStack for the future of our home continent.” Switzerland is not a member of the EU.

Proton said the decision was prompted by the Swiss government’s attempt to “introduce mass surveillance.”

Proton founder and CEO Andy Yen told Radio Télévision Suisse (RTS) that the suggested regulation would be illegal in the EU and United States.

“The only country in Europe with a roughly equivalent law is Russia,” Yen said.

[…]

Internet users would no longer be able to register for a service anonymously or with just an email address; they would instead have to provide a passport, driver’s license or another official ID to subscribe, said Chloé Berthélémy, senior policy adviser at European Digital Rights (EDRi), an association of civil and human rights organizations from across Europe.

The regulation also includes a mass data retention obligation requiring that service providers keep users’ email addresses, phone numbers and names, along with IP addresses and device port numbers, for six months, Berthélémy said. Port numbers are identifiers that direct incoming data to a specific application or service on a device.
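
To make the retention obligation concrete: what a server would be logging for each connection is an (IP address, port) pair. A minimal Python sketch using only the standard library (a loopback connection stands in for a real service; all names here are invented for the example):

```python
import socket

# Every TCP endpoint is an (IP address, port) pair; the port is what lets the
# operating system route incoming data to one specific application.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))   # port 0: let the OS pick any free port
server.listen(1)
host, port = server.getsockname()

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect((host, port))
conn, addr = server.accept()

# `addr` is the client as the server sees it: (IP, ephemeral port).
# Retaining this pair for six months, alongside a name and email address,
# is what would tie online activity back to an identified subscriber.
client_view = client.getsockname()
print("client as seen by server:", addr)
print("client's own view:       ", client_view)

conn.close()
client.close()
server.close()
```

Because many users can sit behind one IP address (carrier-grade NAT), it is the port number that distinguishes them, which is why the proposal asks for ports and not just IPs.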

All authorities would need to do to obtain the data, Berthélémy said, is make a simple request that would circumvent existing legal control mechanisms such as court orders.

“The right to anonymity is supporting a very wide range of communities and individuals who are seeking safety online,” Berthélémy said.

“In a world where we have increasing attacks from governments on specific minority groups, on human rights defenders, journalists, any kind of watchdogs and anyone who holds those in power accountable, it’s very crucial that we … preserve our privacy online in order to do those very crucial missions.”

Source: Swiss government looks to undercut privacy tech, stoking fears of mass surveillance | The Record from Recorded Future News

Spotify pissy after 10,000 users sold their own data to build AI tools

For millions of Spotify users, the “Wrapped” feature—which crunches the numbers on their annual listening habits—is a highlight of every year’s end, ever since it debuted in 2015. NPR once broke down exactly why our brains find the feature so “irresistible,” while Cosmopolitan last year declared that sharing Wrapped screenshots of top artists and songs had by now become “the ultimate status symbol” for tens of millions of music fans.

It’s no surprise then that, after a decade, some Spotify users who are especially eager to see Wrapped evolve are no longer willing to wait to see if Spotify will ever deliver the more creative streaming insights they crave.

With the help of AI, these users expect that their data can be more quickly analyzed to potentially uncover overlooked or never-considered patterns that could offer even more insights into what their listening habits say about them.

Imagine, for example, accessing a music recap that encapsulates a user’s full listening history—not just their top songs and artists.

[…]

In pursuit of supporting developers offering novel insights like these, more than 18,000 Spotify users have joined “Unwrapped,” a collective launched in February that allows them to pool and monetize their data.

Voting as a group through the decentralized data platform Vana—which Wired profiled earlier this year—these users can elect to sell their dataset to developers who are building AI tools offering fresh ways for users to analyze streaming data in ways that Spotify likely couldn’t or wouldn’t.

In June, the group made its first sale, with 99.5 percent of members voting yes. Vana co-founder Anna Kazlauskas told Ars that the collective—at the time about 10,000 members strong—sold a “small portion” of its data (users’ artist preferences) for $55,000 to Solo AI.

While each Spotify user only earned about $5 in cryptocurrency tokens—which Kazlauskas suggested was not “ideal,” wishing the users had earned about “a hundred times” more—she said the deal was “meaningful” in showing Spotify users that their data “is actually worth something.”

“I think this is what shows how these pools of data really act like a labor union,” Kazlauskas said. “A single Spotify user, you’re not going to be able to go say like, ‘Hey, I want to sell you my individual data.’ You actually need enough of a pool to sort of make it work.”

[…]

Spotify is not happy about Unwrapped, which is perhaps a little too closely named to its popular branded feature for the streaming giant’s comfort. A spokesperson told Ars that Spotify sent a letter to the contact info listed for Unwrapped developers on their site, outlining concerns that the collective could be infringing on Spotify’s Wrapped trademark.

Further, the letter warned that Unwrapped violates Spotify’s developer policy, which bans using the Spotify platform or any Spotify content to build machine learning or AI models. And developers may also be violating terms by facilitating users’ sale of streaming data.

“Spotify honors our users’ privacy rights, including the right of portability,” Spotify’s spokesperson said. “All of our users can receive a copy of their personal data to use as they see fit. That said, UnwrappedData.org is in violation of our Developer Terms which prohibit the collection, aggregation, and sale of Spotify user data to third parties.”

But while Spotify suggests it has already taken steps to stop Unwrapped, the Unwrapped team told Ars that it never received any communication from Spotify. It plans to defend users’ right to “access, control, and benefit from their own data,” its statement said, while providing reassurances that it will “respect Spotify’s position as a global music leader.”

Unwrapped “does not distribute Spotify’s content, nor does it interfere with Spotify’s business,” developers argued. “What it provides is community-owned infrastructure that allows individuals to exercise rights they already hold under widely recognized data protection frameworks—rights to access their own listening history, preferences, and usage data.”

“When listeners choose to share or monetize their data together, they are not taking anything away from Spotify,” developers said. “They are simply exercising digital self-determination. To suggest otherwise is to claim that users do not truly own their data—that Spotify owns it for them.”

Jacob Hoffman-Andrews, a senior staff technologist for the digital rights group the Electronic Frontier Foundation, told Ars that—while EFF objects to data dividend schemes “where users are encouraged to share personal information in exchange for payment”—Spotify users should nevertheless always maintain control of their data.

“In general, listeners should have control of their own data, which includes exporting it for their own use,” Hoffman-Andrews said. “An individual’s musical history is of use not just to Spotify but also to the individual who created it. And there’s a long history of services that enable this sort of data portability, for instance Last.fm, which integrates with Spotify and many other services.”

[…]

“This is the heart of the issue: If Spotify seeks to restrict or penalize people for exercising these rights, it sends a chilling message that its listeners should have no say in how their own data is used,” the Unwrapped team’s statement said. “That is out of step not only with privacy law, but with the values of transparency, fairness, and community-driven innovation that define the next era of the Internet.”

Unwrapped sign-ups limited due to alleged Spotify issues

There could be more interest in Unwrapped. But Kazlauskas alleged to Ars that in the more than six months since Unwrapped’s launch, “Spotify has made it extraordinarily difficult” for users to port over their data. She claimed that developers have found that “every time they have an easy way for users to get their data,” Spotify shuts it down “in some way.”

Supposedly because of Spotify’s interference, Unwrapped remains in an early launch phase and can only offer limited spots for new users seeking to sell their data. Kazlauskas told Ars that about 300 users can be added each day due to the cumbersome and allegedly shifting process for porting over data.

Currently, however, Unwrapped is working on an update that could make that process more stable, Kazlauskas said, as well as changes to help users regularly update their streaming data. Those updates could perhaps attract more users to the collective.

[…]

Source: Spotify peeved after 10,000 users sold data to build AI tools – Ars Technica

Proton Mail Suspended Journalist Accounts at the Request of Some Cybersecurity Agency, Without Any Process

The company behind the Proton Mail email service, Proton, describes itself as a “neutral and safe haven for your personal data, committed to defending your freedom.”

But last month, Proton disabled email accounts belonging to journalists reporting on security breaches of various South Korean government computer systems following a complaint by an unspecified cybersecurity agency. After a public outcry, and multiple weeks, the journalists’ accounts were eventually reinstated — but the reporters and editors involved still want answers on how and why Proton decided to shut down the accounts in the first place.

Martin Shelton, deputy director of digital security at the Freedom of the Press Foundation, highlighted that numerous newsrooms use Proton’s services as alternatives to something like Gmail “specifically to avoid situations like this,” pointing out that “While it’s good to see that Proton is reconsidering account suspensions, journalists are among the users who need these and similar tools most.” Newsrooms like The Intercept, the Boston Globe, and the Tampa Bay Times all rely on Proton Mail for emailed tip submissions.

Shelton noted that perhaps Proton should “prioritize responding to journalists about account suspensions privately, rather than when they go viral.”

On Reddit, Proton’s official account stated that “Proton did not knowingly block journalists’ email accounts” and that the “situation has unfortunately been blown out of proportion.” Proton did not respond to The Intercept’s request for comment.

The two journalists whose accounts were disabled were working on an article published in the August issue of the long-running hacker zine Phrack. The story described how a sophisticated hacking operation — what’s known in cybersecurity parlance as an APT, or advanced persistent threat — had wormed its way into a number of South Korean computer networks, including those of the Ministry of Foreign Affairs and the military Defense Counterintelligence Command, or DCC.

The journalists, who published their story under the names Saber and cyb0rg, describe the hack as being consistent with the work of Kimsuky, a notorious North Korean state-backed APT sanctioned by the U.S. Treasury Department in 2023.

As they pieced the story together, emails viewed by The Intercept show that the authors followed cybersecurity best practices and conducted what’s known as responsible disclosure: notifying affected parties that a vulnerability has been discovered in their systems prior to publicizing the incident.

Saber and cyb0rg created a dedicated Proton Mail account to coordinate the responsible disclosures, then proceeded to notify the impacted parties, including the Ministry of Foreign Affairs and the DCC, and also notified South Korean cybersecurity organizations like the Korea Internet and Security Agency, and KrCERT/CC, the state-sponsored Computer Emergency Response Team. According to emails viewed by The Intercept, KrCERT wrote back to the authors, thanking them for their disclosure.

A note on cybersecurity jargon: CERTs are agencies made up of cybersecurity experts who specialize in dealing with and responding to security incidents. CERTs exist in over 70 countries (some countries have multiple CERTs, each specializing in a particular field such as the financial sector) and may be government-sponsored or private organizations. They adhere to formal technical standards, such as being expected to react to reported cybersecurity threats and security incidents. A high-profile example of a CERT agency in the U.S. is the Cybersecurity and Infrastructure Security Agency, which has recently been gutted by the Trump administration.

A week after the print issue of Phrack came out, and a few days before the digital version was released, Saber and cyb0rg found that the Proton account they had set up for the responsible disclosure notifications had been suspended. A day later, Saber discovered that his personal Proton Mail account had also been suspended. Phrack posted a timeline of the account suspensions at the top of the published article, and later highlighted the timeline in a viral social media post. Both accounts were suspended owing to an unspecified “potential policy violation,” according to screenshots of account login attempts reviewed by The Intercept.

The suspension notice instructed the authors to fill out Proton’s abuse appeals form if they believed the suspension was in error. Saber did so, and received a reply from a member of Proton Mail’s Abuse Team who went by the name Dante.

In an email viewed by The Intercept, Dante told Saber that their account “has been disabled as a result of a direct connection to an account that was taken down due to violations of our terms and conditions while being used in a malicious manner.” Dante also provided a link to Proton’s terms of service, going on to state, “We have clearly indicated that any account used for unauthorized activities, will be sanctioned accordingly.” The response concluded by stating, “We consider that allowing access to your account will cause further damage to our service, therefore we will keep the account suspended.”

On August 22, a Phrack editor reached out to Proton, writing that no hacked data had passed through the suspended email accounts, and asked whether the account suspensions could be deescalated. After receiving no response from Proton, the editor sent a follow-up email on September 6. Proton once again did not reply.

On September 9, the official Phrack X account posted at Proton’s official account, asking why Proton was “cancelling journalists and ghosting us,” adding: “need help calibrating your moral compass?” The post quickly went viral, garnering over 150,000 views.

Proton’s official account replied the following day, stating that Proton had been “alerted by a CERT that certain accounts were being misused by hackers in violation of Proton’s Terms of Service. This led to a cluster of accounts being disabled. Our team is now reviewing these cases individually to determine if any can be restored.” Proton then stated that they “stand with journalists” but “cannot see the content of accounts and therefore cannot always know when anti-abuse measures may inadvertently affect legitimate activism.”

Proton did not publicly specify which CERT had alerted them, and didn’t answer The Intercept’s request for the name of the specific CERT which had sent the alert. KrCERT also did not reply to The Intercept’s question about whether they were the CERT that had sent the alert to Proton.

Later in the day, Proton’s founder and CEO Andy Yen posted on X that the two accounts had been reinstated. Neither Yen nor Proton explained why the accounts had been reinstated, whether they had been found not to violate the terms of service after all, why they had been suspended in the first place, or why a member of the Proton Abuse Team had reiterated during Saber’s appeal that the accounts violated the terms of service.

Phrack noted that the account suspensions created a “real impact to the author. The author was unable to answer media requests about the article.” The co-authors, Phrack pointed out, were also in the midst of the responsible disclosure process and working together with the various affected South Korean organizations to help fix their systems. “All this was denied and ruined by Proton,” Phrack stated.

Phrack editors said that the incident leaves them “concerned what this means to other whistleblowers or journalists. The community needs assurance that Proton does not disable accounts unless Proton has a court order or the crime (or ToS violation) is apparent.”

Source: Proton Mail Suspended Journalist Accounts at Request of Cybersecurity Agency

If Proton can’t view the content of accounts, how did Proton verify some random CERT’s claims before deciding to close the accounts? And how did Proton review whether they could be restored? Is it Proton policy to decide that people are guilty before being proven innocent? This attitude justifies people blowing up about this incident, because it shows how vulnerable they are to the random whims of Proton instead of any kind of transparent, diligent process.

Revanced looking for legal help from Spotify

ReVanced has received a DMCA takedown notice from Spotify regarding the Unlock Premium patch.

Spotify claims that

  • The patch is a derivative of their copyrighted works, and
  • It circumvents Spotify’s technological protection measures under DMCA 1201(a) (such as encryption and transfer key protocols).

Find the full DMCA notice here.

Their arguments

  • They say the patch enables access to copyrighted content by bypassing encryption, transfer key protocols, and premium feature restrictions (like skipping).
  • They argue this is circumvention, even though the patch does not enable downloads or give access to songs that are otherwise unavailable on free Spotify.

Our understanding

  • The patch does not copy Spotify’s code.
  • Songs on Spotify Free remain accessible without the patch; premium-only features mainly affect convenience (e.g., skipping).
  • The app requires attestation, so the patch has to bypass it simply to keep the app usable once modified.
  • However, legal precedent (e.g., 321 Studios v. MGM, MDY v. Blizzard) shows courts sometimes view bypassing software restrictions as circumvention, even when it’s about features and not direct access to copyrighted works.

Why this matters

  • If attestation bypass alone constitutes a DMCA 1201 violation, then not only premium but also the “make the app work when patched” functionality could be affected.

We are seeking legal expertise to better understand our position and risks; our deadline to respond is one business day.

    If you have legal knowledge in copyright/DMCA, or know someone who does who could guide us in this matter, please reach out to us:

    • (Preferred) Directly on social media (Discord preferred); links are in the footer of this page.
    • Via email at spotify-dmca@revanced.app.

We beat Chat Control but the fight isn’t over – another surveillance law that mandates companies to save user data for Europol is making its way right now and there is less than 24 hours to give the EU feedback!

Please follow this link to the questionnaire and help save our future. Otherwise, total surveillance like never seen before will strip you of every privacy right, and later every fundamental right, you have as an EU citizen.

++++++++++++++++++++++++++++

Information

The previous data retention law was declared illegal in 2014 by the CJEU (the EU’s highest court) for constituting mass surveillance and violating human rights.

Since most EU states refused to follow the court order and the EU Commission refused to enforce it, the CJEU recently caved in to political pressure and changed its stance on mass surveillance, making it legal.

And that instantly spawned this data retention law, which is more far-reaching than the original that was deemed illegal. Here you can read the entire plan the EU is following. Briefly:

• they want to sanction unlicensed messaging apps, hosting services and websites that don’t spy on users (and impose criminal penalties)

• mandatory data retention: all your online activity must be tied to your identity

• the end of privacy-friendly VPNs and other services

• cooperation with hardware manufacturers to ensure lawful access by design (backdoors for phones and computers)

• prison for everybody who doesn’t comply

If you don’t know the best, most privacy-friendly answers to some of the questions, check out this answering guide by EDRi (the European digital rights organisation).

Source: https://www.reddit.com/r/BuyFromEU/comments/1neecov/we_beat_chat_control_but_the_fight_isnt_over/

Microsoft software reselling dispute heads back to UK court

Microsoft’s tussle with UK-based reseller ValueLicensing over the sale of secondhand licenses returns to the UK’s Competition Appeal Tribunal this week, with the Windows behemoth now claiming that selling pre-owned Office and Windows software is unlawful.

ValueLicensing’s representatives say this week’s trial – due to start tomorrow – will “address whether the entire pre-owned license market was lawful – with Microsoft arguing that it was not lawful to resell pre-owned Office and Windows software at all.”

This stems from a May 2025 agreement that the scope of copyright issues now central to Microsoft’s defense needs to be determined.

The case has the potential to blow a hole in the European reselling market. According to ValueLicensing, “if Microsoft’s argument is correct, it would mean that the entire resale market in Europe should not exist.”

The ValueLicensing case has rumbled on for years, beginning with allegations that Microsoft stifled the supply of pre-owned licenses by offering attractive subscription deals to public and private sector organizations in return for the surrender of perpetual licenses. ValueLicensing (and companies like it) operated a business model based on organizations selling their perpetual licenses and resellers selling them on to customers at a discount.

ValueLicensing alleged that Microsoft added clauses to customer contracts aimed at restricting the resale of perpetual licenses. In return for accepting those contracts, customers were given a discount.

Judging by the case so far [PDF], it appears that this practice was a policy at Microsoft.

According to ValueLicensing, Microsoft’s allegedly anti-competitive antics and attempts to eliminate the secondhand software license market have cost it £270 million in lost profits.

Microsoft’s argument [PDF] is that it owns the copyright to the non-program bits of Office – the graphical user interface, for example – to which rules around software reselling (the European Software Directive) do not apply.

ValueLicensing boss Jonathan Horley noted the timing of the copyright claim. “It’s a remarkable coincidence that their defense against ValueLicensing has changed so dramatically from being a defense of ‘we didn’t do it’ to a defense of ‘the market should never have existed,'” he said.

Microsoft’s contention is not without precedent. The Tom Kabinet judgment drew a line between the secondary market for software programs and e-books. Reselling a software program isn’t a problem, while reselling something like an e-book is. Microsoft’s argument for its software appears to be similar.

The tech giant is facing other actions before the UK’s Competition Appeal Tribunal. Alexander Wolfson has brought a similar claim against Microsoft, potentially worth billions, regarding the purchase of certain licenses for specific products. Dr Maria Luisa Stasi has brought another regarding the cost of running Microsoft software on platforms like AWS and GCP compared to Azure.

Source: Microsoft software reselling dispute heads back to UK court • The Register

So if Microsoft wins, it means you don’t actually own a copy of the software you paid for.

Did Apple do an Anthropic? Faces lawsuit over alleged use of pirated books for AI training

Two authors have filed a lawsuit against Apple, accusing the company of infringing their copyright by using their books to train its artificial intelligence model without their consent. The plaintiffs, Grady Hendrix and Jennifer Roberson, claim that Apple used a dataset of pirated copyrighted books, including their works, for AI training. They said in their complaint that Applebot, the company’s scraper, can “reach ‘shadow libraries’” made up of unlicensed copyrighted books, including (on information and belief) their own. The lawsuit is seeking class action status due to the sheer number of books and authors found in shadow libraries.

The main plaintiffs for the lawsuit are Grady Hendrix and Jennifer Roberson, both of whom have multiple books under their names. They said that Apple, one of the biggest companies in the world, did not attempt to pay them for “their contributions to [the] potentially lucrative venture.”

[…]

Anthropic, the AI company behind the Claude chatbot, recently agreed to pay $1.5 billion to settle a class action piracy complaint also brought by authors. Similar to this case, the writers accused the company of taking pirated books from online libraries to train its AI technology. The roughly 500,000 works covered by the case will reportedly earn their authors $3,000 each.

Source: Apple faces lawsuit over alleged use of pirated books for AI training

18 popular VPNs turn out to belong to 3 different owners – and contain insecurities as well

A new peer-reviewed study alleges that 18 of the 100 most-downloaded virtual private network (VPN) apps on the Google Play Store are secretly connected in three large families, despite claiming to be independent providers. The paper doesn’t indict any of our picks for the best VPN, but the services it investigates are popular, with 700 million collective downloads on Android alone.

The study, published in the journal of the Privacy Enhancing Technologies Symposium (PETS), doesn’t just find that the VPNs in question failed to disclose behind-the-scenes relationships, but also that their shared infrastructures contain serious security flaws. Well-known services like Turbo VPN, VPN Proxy Master and X-VPN were found to be vulnerable to attacks capable of exposing a user’s browsing activity and injecting corrupted data.

Titled “Hidden Links: Analyzing Secret Families of VPN apps,” the paper was inspired by an investigation by VPN Pro, which found that several VPN companies were each selling multiple apps without disclosing the connections between them. This spurred the “Hidden Links” researchers to ask whether the relationships between secretly co-owned VPNs could be documented systematically.

[…]

Family A consists of Turbo VPN, Turbo VPN Lite, VPN Monster, VPN Proxy Master, VPN Proxy Master Lite, Snap VPN, Robot VPN and SuperNet VPN. These were found to be shared between three providers: Innovative Connecting, Lemon Clove and Autumn Breeze. All three have been linked to Qihoo 360, a firm based in mainland China and identified as a “Chinese military company” by the US Department of Defense.

Family B consists of Global VPN, XY VPN, Super Z VPN, Touch VPN, VPN ProMaster, 3X VPN, VPN Inf and Melon VPN. These eight services, which are shared between five providers, all use the same IP addresses from the same hosting company.

Family C consists of X-VPN and Fast Potato VPN. Although these two apps each come from a different provider, the researchers found that both used very similar code and included the same custom VPN protocol.
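The Family B finding, eight apps resolving to the same IP addresses at the same hosting company, hints at how such hidden links can be surfaced in the first place. As a rough illustration (this is not the paper's actual methodology, and every hostname and address below is invented), grouping domains by the IP address they resolve to flags candidates for shared infrastructure:

```python
from collections import defaultdict

def group_by_ip(domains, resolve):
    """Group domain names by the IP address they resolve to.

    `resolve` is any callable mapping a hostname to an IP string,
    e.g. socket.gethostbyname for live DNS lookups.
    """
    families = defaultdict(list)
    for domain in domains:
        families[resolve(domain)].append(domain)
    # Only groups with more than one domain suggest shared infrastructure.
    return {ip: names for ip, names in families.items() if len(names) > 1}

# Stub resolver with made-up hostnames and addresses, for illustration only.
fake_dns = {
    "vpn-alpha.example": "203.0.113.10",
    "vpn-beta.example": "203.0.113.10",
    "vpn-gamma.example": "198.51.100.7",
}
shared = group_by_ip(fake_dns, fake_dns.get)
print(shared)  # {'203.0.113.10': ['vpn-alpha.example', 'vpn-beta.example']}
```

For live lookups you could pass `socket.gethostbyname` as the resolver, though real attribution needs far more evidence than a shared IP, which is why the researchers also compared code, protocols and corporate records.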

If you’re a VPN user, this study should concern you for two reasons. The first problem is that companies entrusted with your private activities and personal data are not being honest about where they’re based, who owns them or who they might be sharing your sensitive information with. Even if their apps were all perfect, this would be a severe breach of trust.

But their apps are far from perfect, which is the second problem. All 18 VPNs across all three families use the Shadowsocks protocol with a hard-coded password, which makes them susceptible to takeover from both the server side (which can be used for malware attacks) and the client side (which can be used to eavesdrop on web activity).

[…]

 

Source: Researchers find alarming overlaps among 18 popular VPNs

Top German court says maybe the Web should be more like television in order to protect copyright and intrusive business models

Back in 2022, Walled Culture wrote about a legal case involving ad blockers. These are hugely popular programs: according to recent statistics, around one billion people use ad blockers when they are online. That’s a testament to the importance many people attach to being in control of their browser experience, and to a wide dislike of the ads they are forced to view. The 2022 case concerned a long-running attempt by the German media publishing giant Axel Springer to sue Eyeo, the makers of the widely-used AdBlock Plus program. Springer was trying to force people to view the ads on its sites, whether they are wanted or not, and argued that ad blocking programs were illegal. Springer lost every one of its many court cases trying to establish this, but refused to give up on its quixotic quest. It appealed to the German Federal Supreme Court, which has unfortunately sent the case back to the lower court. As a post on the Mozilla blog explains:

The BGH (as the Federal Supreme Court is known) called for a new hearing so that the Hamburg court can provide more detail regarding which part of the website (such as bytecode or object code) is altered by ad blockers, whether this code is protected by copyright, and under what conditions the interference might be justified.

The full impact of this latest development is still unclear. The BGH will issue a more detailed written ruling explaining its decision. Meanwhile, the case has now returned to the lower court for additional fact-finding. It could be a couple more years until we have a clear answer.

Springer’s argument was that a Web page is actually a kind of program, and as such was protected by copyright. An ad blocker installed in a browser, Springer maintained, infringed on its copyright by modifying that Web page program without permission. This is a novel way of looking at browsers and the Web pages they display. For the last 35 years, Web pages have been regarded as an arrangement of raw data in the form of text, images, sounds etc. The Web browser is a specialised program for displaying that data in various formats, controlled by the user. Springer is asserting something far-reaching: that a Web page is itself a program that must be run “as is”, and not modified by a Web browser and its add-ons without the explicit permission of the page’s copyright holder.

As the Mozilla blog post points out, if the German courts ultimately adopt this position, the implications would be profound, because this would affect not just ad blockers. There are many other reasons why people use tools like browser extensions to modify Web pages before they are displayed:

These include changes to improve accessibility, to evaluate accessibility, or to protect privacy. Indeed, the risks of browsing range from phishing, to malicious code execution, to invasive tracking, to fingerprinting, to more mundane harms like inefficient website elements that waste processing resources. Users should be equipped with browsers and browser extensions that give them both protection and choice in the face of these risks. A browser that inflexibly ran any code served to the user would be an extraordinarily dangerous piece of software. [Emphasis in original]

Springer’s argument is an attack on the very concept of what a Web browser does. The German publisher wants the browser and extensions to be under the Web page author’s control, with the browser user reduced to a passive viewer. It effectively turns the Web into a form of television, with Web page “broadcasts” that can’t be modified in any significant ways. Mozilla rightly warns:

Such a precedent could embolden legal challenges against other extensions that protect privacy, enhance accessibility, or improve security. Over time, this could deter innovation in these areas, pressure browser vendors to limit extension functionality, and shift the internet away from its open, user-driven nature toward one with reduced flexibility, innovation, and control for users.

In the wider context of copyright, there are two aspects worth noting. One is that Springer is using copyright not to protect creativity, but to enforce its business model – online advertising – after losing multiple court cases that it had brought based on competition law. The other point is that Springer’s argument is only possible because copyright was extended to computer programs some years ago. That was not an inevitable decision, since it could be argued that computer code lacks the human, expressive nature of texts, images or music. It’s true that different coders have different styles that may be visible in their output, but those differences are hardly on the same level as a Shakespeare sonnet, a self-portrait by Rembrandt, or a Beethoven string quartet. To afford them the same protection was a mistake, and a product of the copyright industry’s successful campaign to expand this powerful intellectual monopoly protection to more fields, however inappropriately.

In the present case it can be seen how dangerous this mindless maximalist approach is. If the lower German court accepts Springer’s argument, after it has carried out its fact finding, it would chill real Internet innovation for the sake of protecting a deeply-flawed and failing business model that has nothing to do with life-enhancing creativity, but is all about eliminating choice and agency. Although such a result would only apply in Germany, and would in any case be hard to enforce, the EU legal system and the global nature of the Web means it could have wider knock-on effects. Let’s hope it doesn’t come to that.

Source: Top German court says maybe the Web should be more like television in order to protect copyright – Walled Culture

So Spotify Public Links Now Show Your Personal Information. You Need to Disable Spotify DMs To Get Rid Of It.

Spotify wants to be yet another messaging platform, but its new DM system has a quirk that makes me hesitant to recommend it. Spotify used to be a non-identity-based platform, but things changed once it added messaging. Now, the Spotify DM system is attaching account information to song links and putting it in front of users’ eyes. That means it can accidentally leak the name and profile picture of whoever shared a link, even if they didn’t intend to give out their account information. Thankfully there’s a way to make links more private, and to disable Spotify DMs altogether.

How Spotify is accidentally leaking users’ information

It all starts with tracking URLs. Many major companies on the web use these. They embed information at the end of a URL to track where clicks on it came from. Which website, which page, or in Spotify’s case, which user. If you’ve generated a Share link for a song or playlist in the past, it contained your user identity string at the end. And when someone accessed and acted on that link, by adding the song or playing it, your account information was saved to their account as a connection of sorts. Maybe a little invasive, but because users couldn’t do much with that information, it was mostly just a way for Spotify to track how often people were sharing music with each other.

Before, this happened in the background and no one really cared. But with the new Spotify DM feature, connections made via tracking links are suddenly being put front and center right before users’ eyes. As spotted by Reddit user u/sporoni122, these connections are now showing up in a “Suggested” section when using Spotify DMs, even if you just happened to click on a public link once and never heard of the person who shared it. Alternatively, you might have shared a link in the past, and could be shown account information for people who clicked on it.

Even if an account is public, I could see how this would be annoying. Imagine you share a song in a Discord server where you go by an anonymous name, but someone clicks on it and finds your Spotify account, where you might go by your real name. Bam, they suddenly know who you are.

Reddit user u/Reeceeboii added that Spotify is using this URL tracking behavior to populate a list of songs and playlists shared between two users even if they happened via third-party messaging services like WhatsApp.

So, if you don’t want others to find your Spotify account through your shared songs, what do you do? Well, before posting in anonymous communities like Discord or X, try cleaning up your links first.

My colleagues and I have previously written about how you can remove tracking information from a URL automatically on iPhone, how you can use a Mac app to clean links without any effort, or how you can use an all-in one extension to get the job done regardless of platform. You can also use a website like Link Cleaner to clean up your links.

Or you can take the manual approach. In your Spotify link, remove everything at the end starting with the question mark.

So this tracked link:

https://open.spotify.com/playlist/74BUi79BzFKW7IVJBShrFD?si=28575ba800324

Becomes this clean link:

https://open.spotify.com/playlist/74BUi79BzFKW7IVJBShrFD

Here, the part with “si=” is your identifier. Of course, if it’s a playlist you’re sharing, it will still show your name and your profile picture—that’s how the platform has always worked. So if you want to stay truly anonymous, you’ll want to keep your playlists private.
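The manual cleanup described above amounts to stripping the query string from the link. A minimal sketch in Python, using the playlist link from the example:

```python
from urllib.parse import urlsplit, urlunsplit

def clean_spotify_link(url: str) -> str:
    """Drop the query string (everything from '?' on), which is where
    Spotify puts the 'si=' share identifier."""
    scheme, netloc, path, _query, _fragment = urlsplit(url)
    return urlunsplit((scheme, netloc, path, "", ""))

tracked = "https://open.spotify.com/playlist/74BUi79BzFKW7IVJBShrFD?si=28575ba800324"
print(clean_spotify_link(tracked))
# https://open.spotify.com/playlist/74BUi79BzFKW7IVJBShrFD
```

The same approach works for track and album links, since the si= identifier also sits in the query string.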

How to disable Spotify DMs

If you don’t see yourself using Spotify DMs, it might also be a good idea to just get rid of them entirely. You’ll probably still want to remove tracking information from your URLs before sharing, just for due diligence. But if you don’t want to worry about getting DMs on Spotify or having your account show up as a Suggested contact to strangers, you should also go to Settings > Privacy and social > Social features and disable Messages. That’ll opt you out of the DM feature altogether.

Disable Spotify DM.
Credit: Michelle Ehrhardt

Source: If You’ve Ever Shared a Spotify Link Publicly, You Need to Disable Spotify DMs

EU Google antitrust penalty halted by low-level commissioner amid Trump’s tariff threats

Source: EU Google antitrust penalty halted amid Trump’s tariff threats – POLITICO

Age verification legislation is tanking traffic to sites that comply, and rewarding those that don’t

A new report suggests that the UK’s age verification measures may be having unforeseen knock-on effects on web traffic, with the real winners being sites that flout the law entirely.

[…]

Sure, there are ways around this if you’d rather not feed your personal data to a platform’s third-party age verification vendor. However, sites are seeing more significant consequences beyond just locking you out of your DMs. For a start, The Washington Post reports web traffic to pornography sites implementing age verification has taken a totally predictable hit—but those flouting the new age check requirements have seen traffic as much as triple compared to the same time last year.

The Washington Post looked at the 90 most visited porn sites based on UK visitor data from Similarweb. Of the 90 total sites, 14 hadn’t yet deployed ‘scan your face’ age checks. The publication found that while traffic from British IP addresses to sites requiring age verification had cratered, the 14 sites without age checks “have been rewarded with a flood of traffic” from UK-based users.

It’s worth noting that VPN usage might distort the location data of users. Still, such a surge of traffic likely brings with it a surge in income in the form of ad revenue. Ofcom, the UK’s government-approved regulatory communications office overseeing everything from TV to the internet, may have something to say about that though. Meanwhile, sites that comply with the rules are not only losing out on ad revenue, but are also expected to pay for the legally required age verification services on top.

[…]

Alright, stop snickering about the mental image of someone perusing porn sites professionally, and let me tell you why this is important. You may have already read that while a lot of Brits support the age verification measures broadly speaking, a sizable portion feels they’ve been implemented poorly. Indeed, a lot of the aforementioned sites that complied with the law also criticised it by linking to a petition seeking its repeal. The UK government has responded to this petition by saying it has “no plans to repeal the Online Safety Act” despite, at time of writing, over 500,000 signatures urging it to do just that.

[…]

Source: Age verification legislation is tanking traffic to sites that comply, and rewarding those that don’t | PC Gamer

Of course age verification isn’t just hitting porn sites. It is also hitting LGBTQ+ sites, public health forums, conflict reporting, global journalism and more.

And there is no way to do Age Verification privately.

Europol wants to keep all data forever for law enforcement, says unnamed(!) official. E.U. Court of Human Rights backed encryption as basic to privacy rights in 2024 and now Big Brother Chat Control is on the agenda again (EU consultation feedback link at end)

While some American officials continue to attack strong encryption as an enabler of child abuse and other crimes, a key European court has upheld it as fundamental to the basic right to privacy.

[…]

In the Russian case, the users relied on Telegram’s optional “secret chat” functions, which are also end-to-end encrypted. Telegram had refused to break into chats of a handful of users, telling a Moscow court that it would have to install a back door that would work against everyone. It lost in Russian courts but did not comply, leaving it subject to a ban that has yet to be enforced.

The European court backed the Russian users, finding that law enforcement having such blanket access “impairs the very essence of the right to respect for private life” and therefore would violate Article 8 of the European Convention, which enshrines the right to privacy except when it conflicts with laws established “in the interests of national security, public safety or the economic well-being of the country.”

The court praised end-to-end encryption generally, noting that it “appears to help citizens and businesses to defend themselves against abuses of information technologies, such as hacking, identity and personal data theft, fraud and the improper disclosure of confidential information.”

In addition to prior cases, the judges cited work by the U.N. human rights commissioner, who came out strongly against encryption bans in 2022, saying that “the impact of most encryption restrictions on the right to privacy and associated rights are disproportionate, often affecting not only the targeted individuals but the general population.”

High Commissioner Volker Türk said he welcomed the ruling, which he promoted during a recent visit to tech companies in Silicon Valley. Türk told The Washington Post that “encryption is a key enabler of privacy and security online and is essential for safeguarding rights, including the rights to freedom of opinion and expression, freedom of association and peaceful assembly, security, health and nondiscrimination.”

[…]

Even as the fight over encryption continues in Europe, police officials there have talked about overriding end-to-end encryption to collect evidence of crimes other than child sexual abuse — or any crime at all, according to an investigative report by the Balkan Investigative Reporting Network, a consortium of journalists in Southern and Eastern Europe.

“All data is useful and should be passed on to law enforcement, there should be no filtering … because even an innocent image might contain information that could at some point be useful to law enforcement,” an unnamed Europol police official said in 2022 meeting minutes released under a freedom of information request by the consortium.

Source: E.U. Court of Human Rights backs encryption as basic to privacy rights – The Washington Post

An ‘unnamed’ Europol police official is peak irony in this context.

Remember to leave your feedback where you can, in this case: https://ec.europa.eu/info/law/better-regulation/have-your-say/initiatives/14680-Impact-assessment-on-retention-of-data-by-service-providers-for-criminal-proceedings-/public-consultation_en

The EU wants to know what you think about it keeping all your data for *cough* crime stuff.

The EU wants to save all your data, or as much as possible for as long as possible. Adding insult to injury for the victims of crime, they claim they want to do this to fight crime. How do you feel about the EU being turned into a surveillance society? Leave your voice at the link below.

Source: Data retention by service providers for criminal proceedings – impact assessment

Croatians suddenly realise that EU CSAM rules include hidden pervasive chat control surveillance, turning the EU into Big Brother – disapprove massively.

“The Prime Minister of the Republic of Croatia, Andrej Plenković, at yesterday’s press conference accused the opposition of having supported the proposal for a regulation of the European Parliament and of the Council establishing rules for preventing and combating the sexual abuse of children, COM (2022) 209, unpopularly referred to as ‘chat control’ because, if adopted in its integral form, it would allow criminal prosecution bodies to monitor the private communications of all citizens.

[…]

On June 17, MP Bosanac, as well as colleagues from the SDP, the HDZ and the vast majority of other European MPs, supported the Proposal for Amendments to the Directive on combating the sexual abuse and sexual exploitation of children and child pornography from 2011. Although both legislative documents were adopted within the same package of EU strategies for a more effective fight against child abuse and have similar names, the two documents are intrinsically different: one is a regulation, the other a directive; they have different rapporteurs and entered the procedure two years apart.”

‘We’ve already spoken about it’

“The basic difference, however, is that the proposal to amend the Directive does not contain any mention of ‘chat control’, i.e. the mass surveillance of citizens. MP Bosanac and his colleagues from the We Can! party strongly oppose the proposal for a regulation that supports monitoring the content of the private conversations of all citizens, and which is yet to be voted on in the European Parliament. Such a proposal directly violates Article 7 of the Charter of Fundamental Rights of the European Union, as confirmed by the Court of Justice of the European Union in the ‘Schrems I’ ruling (paragraph 94), and the same position was confirmed by the Legal Service of the Council of the EU.

In the previous European Parliament, the Greens resisted mass surveillance, arguing for monitoring suspicious users instead – the security services must first identify suspicious users and then monitor them, not the other way around. People who abuse the internet to commit criminal acts must be recognized and isolated by the numerous services whose job that is, but through focused surveillance of individuals, not mass surveillance.

We all have the right to privacy, because privacy must remain a secure space for our human identity. Finally, MP Bosanac calls on Prime Minister Plenković to oppose this harmful proposal at the European Council and protect the right to privacy of Croatian citizens,” Gordan Bosanac’s office said in a statement.

Source: Bosanac accuses Plenković of lying: ‘I urge him to counter that proposal’

Parliamentary questions are being asked as well

A review conducted under the Danish Presidency examining the proposal for a regulation on combatting online child sexual abuse material – dubbed the ‘Chat Control’ or CSAM regulation – has raised new, grave concerns about the respect of fundamental rights in the EU.

As it stands, the proposal envisages mass scanning of private communications, including encrypted conversations, raising serious issues of compliance with Article 7 of the Charter of Fundamental Rights by threatening to undermine the data security of citizens, businesses and institutions. A mandatory weakening of end-to-end encryption would create security gaps open to exploitation by cybercriminals, rival states and terrorist organisations, and would also harm the competitiveness of our digital economy.

At the same time, the proposed technical approach is based on automated content analysis tools which produce high rates of false positives, creating the risk that innocent users could be wrongly incriminated, while the effectiveness of this approach in protecting children has not been proven. Parliament and the Council have repeatedly rejected mass surveillance.

  • 1. Considering the mandatory scanning of all private communications, is the proposed regulation compatible with Article 7 of the Charter of Fundamental Rights?

  • 2. How will it ensure that child protection is achieved through targeted measures that are proven to be effective, without violating the fundamental rights of all citizens?

  • 3. How does it intend to prevent the negative impact on cybersecurity and economic competitiveness caused by weakening encryption?

Source: Proposed Chat Control law presents new blow for privacy

The Threat Of Extreme Statutory Damages For Copyright Almost Certainly Made Anthropic Settle With Authors: Not the Use of Books for Training, but the Fact That the Idiots Used Pirated Books for Training

In what may be the least surprising news in the world of copyright and the internet, Anthropic just agreed to settle the copyright lawsuit that everyone’s been watching, but not for the reasons most people think. This isn’t about AI training being found to infringe copyright—in fact, Anthropic won on that issue. Instead, it’s about how copyright’s broken statutory damages system can turn a narrow legal loss into a company-ending threat, forcing settlements even when the core dispute goes your way.

Anthropic had done something remarkably stupid beyond just training: they downloaded unauthorized copies of works and stored them in an internal “pirate library” for future reference. Judge Alsup was crystal clear that while the training itself was fair use, building and maintaining this library of unauthorized copies was straightforward infringement. This wasn’t some edge case—it was basic copyright violation that Anthropic should have known better than to engage in.

And while there were some defenses to this, it would likely be tough to succeed at trial with the position Judge Alsup had put them in.

The question then was about liability. Because of copyright’s absolutely ridiculous statutory damages (up to $150k per work if the infringement was found to be “willful”), which need not bear any relationship to the actual damages, Anthropic could have been on the hook for trillions of dollars in damages just in this one case. That’s not something any company is going to roll the dice on, and I’m sure that the conversation was more or less: if you win and we get hit with statutory damages, the company will shut down and you will get nothing. Instead, let’s come to some sort of deal and get the lawyers (and the named author plaintiffs) paid.

While the amount of the settlement hasn’t been revealed yet, the amount authors get paid is going to come out eventually, and… I guarantee that it will not be much.

[…]

Instead what will happen—what always happens with these collective licensing deals—is that a few of the bigger names will get wealthy, but mainly the middleman will get wealthy. These kinds of schemes only tend to enrich the middlemen (often leading to corruption).

So this result is hardly surprising. Anthropic had to settle rather than face shutting down. But my guess is that authors are going to be incredibly disappointed by how much they end up getting from the settlement. Judge Alsup still has to approve the settlement, and some people may protest it, but it would be a much bigger surprise if he somehow rejects it.

Source: The Threat Of Extreme Statutory Damages For Copyright Almost Certainly Made Anthropic Settle With Authors | Techdirt

Developer Unlocks Suddenly Paywalled Echelon Exercise Bikes But Thinks DMCA says He Can’t Legally Release His Software

An app developer has jailbroken Echelon exercise bikes to restore functionality that the company put behind a paywall last month, but copyright law prevents him from legally releasing it.

Last month, Peloton competitor Echelon pushed a firmware update to its exercise equipment that forces its machines to connect to the company’s servers in order to work properly. Echelon was popular in part because it was possible to connect Echelon bikes, treadmills, and rowing machines to free or cheap third-party apps and collect information like pedaling power, distance traveled, and other basic functionality that one might want from a piece of exercise equipment. With the new firmware update, the machines work only with constant internet access and getting anything beyond extremely basic functionality requires an Echelon subscription, which can cost hundreds of dollars a year.

[…]

App engineer Ricky Witherspoon, who makes an app called SyncSpin that used to work with Echelon bikes, told 404 Media that he successfully restored offline functionality to Echelon equipment and won the Fulu Foundation bounty. But he and the foundation said that he cannot open source or release it because doing so would run afoul of Section 1201 of the Digital Millennium Copyright Act, the wide-ranging copyright law that in part governs reverse engineering. There are various exemptions to Section 1201, but most of them only allow jailbreaks like the one Witherspoon developed for personal use.

“It’s like picking a lock, and it’s a lock that I own in my own house. I bought this bike, it was unlocked when I bought it, why can’t I distribute this to people who don’t have the technical expertise I do?” Witherspoon told 404 Media. “It would be one thing if they sold the bike with this limitation up front, but that’s not the case. They reached into my house and forced this update on me without users knowing. It’s just really unfortunate.”

[…]

“A lot of people chose Echelon’s ecosystem because they didn’t want to be locked into using Echelon’s app. There was this third-party ecosystem. That was their draw to the bike in the first place,” O’Reilly said. “But now, if the manufacturer can come in and push a firmware update that requires you to pay for subscription features that you used to have on a device you bought in the first place, well, you don’t really own it.”

“I think this is part of the broader trend of enshittification, right?,” O’Reilly added. “Consumers are feeling this across the board, whether it’s devices we bought or apps we use—it’s clear that what we thought we were getting is not continuing to be provided to us.”

Witherspoon says that, basically, Echelon added an authentication layer to its products, where the piece of exercise equipment checks to make sure that it is online and connected to Echelon’s servers before it begins to send information from the equipment to an app over Bluetooth. “There’s this precondition where the bike offers an authentication challenge before it will stream those values. It is like a true digital lock,” he said. “Once you give the bike the key, it works like it used to. I had to insert this [authentication layer] into the code of my app, and now it works.”
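Witherspoon doesn’t publish the details, but what he describes is the general shape of a challenge-response handshake: the device issues a challenge, and only a client that can answer it correctly gets the data stream. A purely hypothetical sketch (the key, the HMAC construction and every name here are invented for illustration; this is not Echelon’s actual protocol):

```python
import hashlib
import hmac
import os

SHARED_KEY = b"hypothetical-key"  # invented; not Echelon's actual key

def bike_challenge() -> bytes:
    """The device sends a random nonce before streaming any data."""
    return os.urandom(16)

def app_response(nonce: bytes) -> bytes:
    """The app proves knowledge of the key by MACing the nonce."""
    return hmac.new(SHARED_KEY, nonce, hashlib.sha256).digest()

def bike_verify(nonce: bytes, response: bytes) -> bool:
    """Only a valid response unlocks the data stream."""
    expected = hmac.new(SHARED_KEY, nonce, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

nonce = bike_challenge()
assert bike_verify(nonce, app_response(nonce))  # correct key: stream unlocked
assert not bike_verify(nonce, b"\x00" * 32)     # wrong response: locked out
```

The point of such a gate is that the stream stays locked unless the client can prove knowledge of a secret, which is exactly why restoring the old behaviour required reverse engineering rather than a simple settings change.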

[…]

Witherspoon has now essentially restored functionality that he used to have to his own bike, which he said he bought in the first place because of its ability to work offline and its ability to connect to third-party apps. But others will only be able to do it if they design similar software, or if they never update the bike’s firmware. Witherspoon said that he made the old version of his SyncSpin app free and has plastered it with a warning urging people not to open the official Echelon app, because it will update the firmware on their equipment and will break functionality. Roberto Viola, the developer of a popular third-party exercise app called QZ, wrote extensively about how Echelon has broken his popular app: “Without warning, Echelon pushed a firmware update. It didn’t just upgrade features—it locked down the entire device. From now on, bikes, treadmills, and rowers must connect to Echelon’s servers just to boot,” he wrote. “No internet? No workout. Even basic offline usage is impossible.”

[…]

Witherspoon told me that he is willing to talk to other developers about how he did this, but that he is not willing to release the jailbreak on his own: “I don’t feel like going down a legal rabbit hole, so for now it’s just about spreading awareness that this is possible, and that there’s another example of egregious behavior from a company like this […] if one day releasing this was made legal, I would absolutely open source this. I can legally talk about how I did this to a certain degree, and if someone else wants to do this, they can open source it if they want to.”

Source: Developer Unlocks Newly Enshittified Echelon Exercise Bikes But Can’t Legally Release His Software

I do not think that this is the way the DMCA works, but if it is, it needs some serious revision.

Google wants to verify all developers’ identities, including those not on the Play Store, in massive data grab

  • Google will soon verify the identities of developers who distribute Android apps outside the Play Store.
  • Developers must submit their information to a new Android Developer Console, increasing their accountability for their apps.
  • Rolling out in phases from September 2026, these new verification requirements are aimed at protecting users from malware by making it harder for malicious developers to remain anonymous.

 

Most Android users acquire apps from the Google Play Store, but a small number of users download apps from outside of it, a process known as sideloading. There are some nifty tools that aren’t available on the Play Store because their developers don’t want to deal with Google’s approval or verification requirements. This is understandable for hobbyist developers who simply want to share something cool or useful without the burden of shedding their anonymity or committing to user support.

[…]

Today, Google announced it is introducing a new “developer verification requirement” for all apps installed on Android devices, regardless of source. The company wants to verify the identity of all developers who distribute apps on Android, even if those apps aren’t on the Play Store. According to Google, this adds a “crucial layer of accountability to the ecosystem” and is designed to “protect users from malware and financial fraud.” Only “certified” Android devices — meaning those that ship with the Play Store, Play Services, and other Google Mobile Services (GMS) apps — will block the installation of apps from unverified developers.

Google says it will only verify the identity of developers, not check the contents of their apps or their origin. However, it’s worth noting that Google Play Protect, the malware scanning service integrated into the Play Store, already scans all installed apps regardless of where they came from. Thus, the new requirement doesn’t prevent malicious apps from reaching users, but it does make it harder for their developers to remain anonymous. Google likens this new requirement to ID checks at the airport, which verify the identity of travelers but not whether they’re carrying anything dangerous.

[…]

Source: Google wants to make sideloading Android apps safer by verifying developers’ identities – Android Authority

So the new requirement doesn’t make things any safer, but it does hand Google a whole load of new personal data for no reason other than that it wants it. I guess it’s increasingly time to de-Google.

4chan will refuse to pay daily UK fines, its lawyer tells BBC

A lawyer representing the online message board 4chan says it won’t pay a fine proposed by the UK’s media regulator as it enforces the Online Safety Act.

According to Preston Byrne, managing partner of law firm Byrne & Storm, Ofcom has provisionally decided to impose a £20,000 fine “with daily penalties thereafter” for as long as the site fails to comply with its request.

“Ofcom’s notices create no legal obligations in the United States,” he told the BBC, adding he believed the regulator’s investigation was part of an “illegal campaign of harassment” against US tech firms.

Ofcom has declined to comment while its investigation continues.

“4chan has broken no laws in the United States – my client will not pay any penalty,” Mr Byrne said.

[…]

In a statement posted on X, law firms Byrne & Storm and Coleman Law said 4chan was a US company incorporated in the US, and therefore protected against the UK law.

“American businesses do not surrender their First Amendment rights because a foreign bureaucrat sends them an email,” they wrote.

“Under settled principles of US law, American courts will not enforce foreign penal fines or censorship codes.

“If necessary, we will seek appropriate relief in US federal court to confirm these principles.”

[…]

Ofcom has previously said the Online Safety Act only requires services to take action to protect users based in the UK.

[…]

If 4chan does successfully fight the fine in the US courts, Ofcom may have other options.

“Enforcing against an offshore provider is tricky,” Emma Drake, a partner specialising in online safety and privacy at law firm Bird & Bird, told the BBC.

“Ofcom can instead ask a court to order other services to disrupt a provider’s UK business, such as requiring a service’s removal from search results or blocking of UK payments.

“If Ofcom doesn’t think this will be enough to prevent significant harm, it can even ask that ISPs be ordered to block UK access.”

Source: 4chan will refuse to pay daily UK fines, its lawyer tells BBC

Welcome to the world of censorship.

Uni of Melbourne used Wi-Fi location data to ID protestors

Australia’s University of Melbourne last year used Wi-Fi location data to identify student protestors.

The University used Wi-Fi data to identify students who participated in a July 2024 sit-in protest. As described in a report [PDF] into the matter by the state of Victoria’s Office of the Information Commissioner, the University directed protestors to leave the building they occupied and warned that those who remained could be suspended, disciplined, or reported to police.

The report says 22 chose to remain, and that the University used CCTV and Wi-Fi location data to identify them.

The Information Commissioner found that the use of CCTV to identify protestors did not breach privacy, but that the use of Wi-Fi location data did, because the University’s policies lacked detail.

“Given that individuals would not have been aware of why their Wi-Fi location data was collected and how it may be used, they could not exercise an informed choice as to whether to use the Wi-Fi network during the sit-in, and be aware of the possible consequences for doing so,” the report found.

As the investigation into use of location data unfolded, the University changed its policies regarding use of location data. The Office of the Information Commissioner therefore decided not to issue a formal compliance notice, and will monitor the University to ensure it complies with its undertakings.

Source: Australian uni used Wi-Fi location data to ID protestors • The Register

Privacy‑Preserving Age Verification Falls Apart On Contact With Reality

[…] Identity‑proofing creates a privacy bottleneck. Somewhere, an identity provider must verify you. Even if it later mints an unlinkable token, that provider is the weak link—and in regulated systems it will not be allowed to “just delete” your information. As Bellovin puts it:

Regulation implies the ability for governments to audit the regulated entities’ behavior. That in turn implies that logs must be kept. It is likely that such logs would include user names, addresses, ages, and forms of credentials presented.
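Bellovin’s point is about the issuance step, not the token math. The “unlinkable token” idea itself is well understood — a textbook RSA blind signature, for instance, lets an identity provider sign an age attestation without ever seeing the final signature the user ends up holding. The sketch below is an illustration only, with toy parameters (tiny primes, an invented “over-18” token string); the weak link Bellovin identifies survives regardless, because the provider still knows, and logs, who asked for a token.

```python
import hashlib
from math import gcd

# Toy RSA parameters for illustration only — real deployments use
# 2048-bit keys and a standardized blind-signature scheme.
p, q = 61, 53
n = p * q          # modulus: 3233
e, d = 17, 2753    # public / private exponents; e*d ≡ 1 mod (p-1)(q-1)

def h(msg: bytes) -> int:
    """Hash the attribute statement into Z_n."""
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % n

# --- User side: blind the token before sending it to the identity provider ---
token = b"over-18"
r = 7                                  # blinding factor, coprime to n
assert gcd(r, n) == 1
blinded = (h(token) * pow(r, e, n)) % n

# --- Provider side: signs the blinded value; it never sees h(token) itself ---
blind_sig = pow(blinded, d, n)

# --- User side: unblind, yielding a signature the provider cannot link back ---
sig = (blind_sig * pow(r, -1, n)) % n

# Anyone can verify the unblinded token against the provider's public key:
assert pow(sig, e, n) == h(token)
```

The algebra works because blinding multiplies the hash by `r^e`, so signing yields `h^d · r`, and dividing out `r` leaves a plain signature on `h`. None of that helps if, as Bellovin notes, regulation forces the provider to log the identity behind every issuance.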

Then there’s the issue of fraud and duplication of credentials. Accepting multiple credential types increases coverage and increases abuse; people can and do hold multiple valid IDs:

The fact that multiple forms of ID are acceptable… exacerbates the fraud issue… This makes it impossible to prevent a single person from obtaining multiple primary credentials, including ones for use by underage individuals.

Cost and access will absolutely chill speech. Identity providers are expensive. If users pay, you’ve built a wealth test for lawful speech. If sites pay, the costs roll downhill (fees, ads, data‑for‑access) and coverage narrows to the cheapest providers who may also be more susceptible to breaches:

Operating an IDP is likely to be expensive… If web sites shoulder the cost, they will have to recover it from their users. That would imply higher access charges, more ads (with their own privacy challenges), or both.

Sharing credentials drives mission creep, which creates its own dangers. If a token proves only “over 18,” people will share it (parents with kids, friends with friends). To deter that, providers tie tokens to identities/devices or bundle more attributes—making them more linkable and more revocable:

If the only use of the primary credential is obtaining age-verifying subcredentials, this isn’t much of a deterrent—many people simply won’t care… That, however, creates pressure for mission creep…, including opening bank accounts, employment verification, and vaccination certificates; however, this is also a major point of social control, since it is possible to revoke a primary credential and with it all derived subcredentials.

The end result, then, is that you’re not just attacking privacy again; you’re also creating a tool for authoritarian pressure:

Those who are disfavored by authoritarian governments may lose access not just to pornography, but to social media and all of these other services.

He also grounds it in lived reality, with a case study that shows who gets locked out first:

Consider a hypothetical person “Chris”, a non-driving senior citizen living with an adult child in a rural area of the U.S… Apart from the expense— quite possibly non-trivial for a poor family—Chris must persuade their child to then drive them 80 kilometers or more to a motor vehicles office…

There is also the social aspect. Imagine the embarrassment to all of an older parent having to explain to their child that they wish to view pornography.

None of this is an attack on the math. It’s a reminder that deployment reality ruins the cryptographic ideal. There’s more in the paper, but you get the idea.

[…]

Source: Privacy‑Preserving Age Verification Falls Apart On Contact With Reality | Techdirt

Proton releases Lumo GPT 1.1: faster, more advanced, European and actually private

Today we’re releasing a powerful update to Lumo that gives you a more capable privacy-first AI assistant offering faster, more thorough answers with improved awareness of recent events.

Guided by feedback from our community, we’ve been busy upgrading our models and adding GPUs, which we’ll continue to do thanks to the support of our Lumo Plus subscribers. Lumo 1.1 performs significantly better across the board than the first version of Lumo, so you can now use it more effectively for a variety of use cases:

  • Get help planning projects that require multiple steps — it will break down larger goals into smaller tasks
  • Ask complex questions and get more nuanced answers
  • Generate better code — Lumo is better at understanding your requests
  • Research current events or niche topics with better accuracy and fewer hallucinations thanks to improved web search

New cat, new tricks, same privacy

The latest upgrade brings more accurate responses with significantly less need for corrections or follow-up questions. Lumo now handles complex requests much more reliably and delivers the precise results you’re looking for.

In testing, Lumo’s performance has increased across several metrics:

  • Context: 170% improvement in context understanding so it can accurately answer questions based on your documents and data
  • Coding: 40% better ability to understand requests and generate correct code
  • Reasoning: Over 200% improvement in planning tasks, choosing the right tools such as web search, and working through complex multi-step problems

Most importantly, Lumo does all of this while respecting the confidentiality of your chats. Unlike every major AI platform, Lumo is open source and built to be private by design. It doesn’t keep any record of your chats, your conversation history is secured with zero-access encryption so nobody else can see it, and your data is never used to train the models. Lumo is the only AI where your conversations are actually private.

Learn about Lumo privacy

Lumo mobile apps are now open source

Unlike Big Tech AIs that spy on you, Lumo is an open source application that exclusively runs open source models. Open source is especially important in AI because it confirms that the applications and models are not being used nefariously to manipulate responses to fit a political narrative or secretly leak data. While the Lumo web client is already open source, today we are also releasing the code for the mobile apps. In line with Lumo being the most transparent and private AI, we have also published the Lumo security model so you can see how Lumo’s zero-access encryption works and why nobody, not even Proton, can access your conversation history.
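The “zero-access” model Proton describes boils down to one design choice: encryption keys are derived and used only on the client, so the server stores ciphertext it cannot read. The sketch below is my own toy illustration of that idea, not Proton’s actual implementation (a real client would use a vetted AEAD cipher such as AES-GCM, not this homemade SHA-256 keystream):

```python
import hashlib
import os

def derive_key(password: bytes, salt: bytes) -> bytes:
    """Derive the encryption key on the client from the user's password.
    The password (and hence the key) never leaves the device."""
    return hashlib.pbkdf2_hmac("sha256", password, salt, 200_000)

def keystream(key: bytes, length: int) -> bytes:
    """Toy counter-mode keystream built from SHA-256 (illustration only)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(password: bytes, plaintext: bytes) -> tuple[bytes, bytes]:
    salt = os.urandom(16)
    key = derive_key(password, salt)
    ct = bytes(a ^ b for a, b in zip(plaintext, keystream(key, len(plaintext))))
    return salt, ct  # this (salt, ciphertext) pair is all the server ever stores

def decrypt(password: bytes, salt: bytes, ct: bytes) -> bytes:
    key = derive_key(password, salt)
    return bytes(a ^ b for a, b in zip(ct, keystream(key, len(ct))))

salt, ct = encrypt(b"user passphrase", b"private chat history")
assert decrypt(b"user passphrase", salt, ct) == b"private chat history"
```

The point of the sketch is the trust boundary: since the server holds only `salt` and `ct`, "zero access" is a structural property, not a policy promise.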

Source: Introducing Lumo 1.1 for faster, advanced reasoning | Proton

The EU could be scanning your chats by October 2025 with Chat Control

Denmark kicked off its EU Presidency on July 1, 2025, and, among its first actions, lawmakers swiftly reintroduced the controversial child sexual abuse material (CSAM) scanning bill to the top of the agenda.

Dubbed Chat Control by critics, the bill aims to introduce new obligations for all messaging services operating in Europe to scan users’ chats, even if they’re encrypted.

The proposal, however, has been failing to attract the needed majority since May 2022, with Poland’s Presidency being the last to give up on such a plan.

Denmark is a strong supporter of Chat Control. Now, the new rules could be adopted as early as October 14, 2025, if the Danish Presidency manages to find a middle ground among member states.

Crucially, according to the latest data leaked by the former MEP for the German Pirate Party, Patrick Breyer, many countries that said no to Chat Control in 2024 are now undecided, “even though the 2025 plan is even more extreme,” he added.

[…]

As per its first version, all messaging software providers would be required to perform indiscriminate scanning of private messages to look for CSAM – so-called ‘client-side scanning’. The proposal was met with a strong backlash, and the European Court of Human Rights ended up banning all legal efforts to weaken encryption of secure communications in Europe.
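For readers unfamiliar with the term, “client-side scanning” means the matching happens on your device, before encryption is applied — so end-to-end encryption formally survives while its guarantee is hollowed out. A minimal sketch of the mechanism (my own illustration, assuming a hypothetical blocklist; real proposals use perceptual hashes distributed by a clearinghouse, not exact SHA-256 matches):

```python
import hashlib

# Hypothetical blocklist of known-bad content hashes. Real systems would
# use perceptual hashes from a clearinghouse; exact SHA-256 matching is
# used here purely for illustration.
bad_sample = b"example flagged content"
KNOWN_BAD_HASHES = {hashlib.sha256(bad_sample).hexdigest()}

def scan_before_encrypt(attachment: bytes) -> bool:
    """Runs on the client, *before* end-to-end encryption is applied.
    Returns True if the attachment is allowed to be sent."""
    return hashlib.sha256(attachment).hexdigest() not in KNOWN_BAD_HASHES

assert scan_before_encrypt(b"holiday photo")   # innocuous content passes
assert not scan_before_encrypt(bad_sample)     # listed content is blocked on-device
```

Critics’ objection is visible in the structure itself: whoever controls `KNOWN_BAD_HASHES` controls what every client silently refuses to send, and nothing in the protocol limits the list to CSAM.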

In June 2024, Belgium then proposed a new text targeting only shared photos, videos, and URLs, with users’ permission. This version satisfied neither the industry nor voting EU members, due to its coercive nature: under the Belgian text, users must consent to shared material being scanned before it is encrypted in order to keep using the functionality.

Source: The EU could be scanning your chats by October 2025 – here’s everything we know | TechRadar