Yesterday, I wrote about how YouTube is now using AI to guess your age. The idea is this: Rather than rely on the age attached to your account, YouTube analyzes your activity on its platform, and makes a determination based on how your activity compares with that of other users. If the AI thinks you’re an adult, you can continue on; if it thinks your behavior aligns with that of a teenage user, it’ll put restrictions and protections on your account.
Now, Google is expanding its AI age verification tools beyond just its video streaming platform, to other Google products as well. As with YouTube, Google is trialing this initial rollout with a small pool of users, and based on its results, will expand the test to more users down the line. But over the next few weeks, your Google Account may be subject to this new AI, whose only goal is to estimate how old you are.
That AI is trained to look for patterns of behavior across Google products associated with users under the age of 18. That includes the categories of information you might be searching for, or the types of videos you watch on YouTube. Google’s a little cagey on the details, but suffice it to say that the AI is likely snooping through most, if not all, of what you use Google and its products for.
Restrictions and protections on teen Google accounts
We do know some of the restrictions and protections Google plans to implement when it detects a user is under 18 years old. As I reported yesterday, that involves turning on YouTube’s Digital Wellbeing tools, such as reminders to stop watching videos, and, if it’s late, encouragements to go to bed. YouTube will also limit repetitive views of certain types of content.
In addition to these changes to YouTube, you’ll also find you can no longer access Timeline in Maps. Timeline saves your Google Maps history, so you can effectively travel back through time and see where you’ve been. It’s a cool feature, but Google restricts access to users 18 years of age or older. So, if the AI detects you’re underage, no Timeline for you.
Denmark, Greece, Spain, France, and Italy are the first to test the technical solution unveiled by the European Commission on July 14, 2025.
The announcement came less than two weeks before the UK enforced mandatory age verification checks on July 25. These have so far sparked concerns about the privacy and security of British users, fueling a spike in VPN usage.
[…]
The introduction of this technical solution is a key step in implementing children’s online safety rules under the Digital Services Act (DSA).
Lawmakers say this solution seeks to set “a new benchmark for privacy protection” in age verification.
That’s because online services will only receive proof that the user is 18+, without any personal details attached.
Further work on the integration of zero-knowledge proofs is also ongoing, with the full implementation of mandatory checks in the EU expected to be enforced in 2026.
[…]
Starting from Friday, July 25, millions of Britons will need to be ready to prove their age before accessing certain websites or content.
Under the Online Safety Act, sites displaying adult-only content must prevent minors from accessing their services via robust age checks.
Social media, dating apps, and gaming platforms are also expected to verify their users’ age before showing them so-called harmful content.
[…]
The vagueness of what constitutes harmful content, as well as the privacy and security risks linked to some of these age verification methods, has attracted criticism from experts, politicians, and privacy-conscious citizens who fear a negative impact on people’s digital rights.
While the EU approach seems better on paper, it remains to be seen how the age verification scheme will ultimately be enforced.
And so comes the EU spying on our browsing habits, telling us what is and isn’t good for us to see. I can make my own mind up, thank you. How annoying that I will be rate limited to the VPN I get.
[…] Martinez is part of a growing backlash to Steam and Itch.io purging thousands of games from their databases at the behest of payment processing companies. Australia-based anti-porn group Collective Shout claimed credit for the new wave of censorship after inciting a write-in campaign against Visa and Mastercard, which it accused of profiting off “rape, incest, and child sexual abuse game sales.” Some fans of gaming are now mounting reverse campaigns in the hopes of nudging Visa and Mastercard in the opposite direction.
Screenshot: Bluesky / Kotaku
“Seeing the rise of censorship and claiming it’s to ‘protect kids,’ it sounds almost like the Satanic Panic, targeting people that have done nothing to anyone except having fun,” Martinez told Kotaku. “We’re already seeing the negative effect this has on people’s personal and financial lives because of such unnecessary restrictions. If parents are so concerned over protecting kids, then they should parent their own kids instead of forcing other people to meet their ridiculous demands.”
Indie horror game Vile: Exhumed is one of the titles that’s been banned from Steam by Valve. Released last year by Cara Cadaver of Final Girl Games, it has players rummage through a fictional ‘90s computer terminal to uncover a twisted man’s toxic obsession with an adult horror film actress, using this format to engage with themes of online misogyny and toxic parasocial relationships. “It was banned for ‘sexual content with depictions of real people,’ which, if you have played it, you know is all implied, making this all feel even worse,” Cadaver wrote on Bluesky on July 28.
Valve did not immediately respond to a request for comment.
Vile: Exhumed is a textbook example of what critics of the sex game purge always feared: that guidelines aimed at clamping down on pornographic games believed to be encouraging or glorifying sexual violence would inevitably ensnare serious works of art grappling with difficult and uncomfortable subject matter in important ways. Who gets to decide which is which? For a long time, it appeared to be Steam and Itch.io. Last week’s purges revealed it’s actually Visa and Mastercard, and whoever can frighten them the most with bad publicity.
VILE: Exhumed | Announcement Trailer
“Things are definitely changing as reports of responses to calls have gone from ‘Sorry what are you talking about?’ to then ‘Are you ALSO calling about itch/steam’ to now some [callers] receiving outright harassment,” a 2D artist who goes by Void and who has helped organize a Discord for a reverse call-in campaign told Kotaku. It’s hard to have any clear sense of the scope of these counter-initiatives or what ultimate impact they might have on the companies in question, but anecdotally the effort seems to be gaining traction. For instance, callers are now needing to spend less time explaining what Steam, Itch.io, or “NSFW” games are to the people on the other end of the line.
“For calls I was originally focusing on Mastercard, but I ended up getting a lot of time out of Visa,” Bluesky user RJAIN told Kotaku. “Two days ago I had a call with Visa that lasted over an hour, and a follow-up call later on that lasted over 2.5 hours. Those calls, I spoke with a supervisor and they seemed very calm and understanding. Yesterday, the calls were different. The reps seemed angry and exhausted. They refused to let me speak to a supervisor and kept insisting that it is now protocol for them to disconnect the call on anyone complaining about this issue.”
[…]
Some industry trade groups have also weighed in. The International Game Developers Association (IGDA) released a statement stating that “censorship like this is materially harmful to game developers” and urging a dialogue between “platforms, payment processors, and industry leaders with developers and advocacy groups.” “We welcome collaboration and transparency,” it wrote. “This issue is not just about adult content. It is about developer rights, artistic freedom, and the sustainability of diverse creative work in games.”
For the time being, that dialogue appears to mostly be taking place at Visa’s and Mastercard’s call centers, at least when they allow it.
[…] It seems like a simple concept that everyone should be able to agree to: if I buy a product from you that does x, y, and z, you don’t get to remove x, y, or z remotely after I’ve made that purchase. How we’ve gotten to a place where companies can simply remove, or paywall, product features without recourse for the customer they essentially bait and switched is beyond me.
But it keeps happening. The most recent example of this is with Echelon exercise bikes. Those bikes previously shipped to paying customers with all kinds of features for ride metrics and connections to third-party apps and services without anything further needed from the user. That all changed recently when a firmware update suddenly forced an internet connection and a subscription to a paid app to make any of that work.
As explained in a Tuesday blog post by Roberto Viola, who develops the “QZ (qdomyos-zwift)” app that connects Echelon machines to third-party fitness platforms, like Peloton, Strava, and Apple HealthKit, the firmware update forces Echelon machines to connect to Echelon’s servers in order to work properly. A user online reported that as a result of updating his machine, it is no longer syncing with apps like QZ, and he is unable to view his machine’s exercise metrics in the Echelon app without an Internet connection.
Affected Echelon machines reportedly only have full functionality, including the ability to share real-time metrics, if a user has the Echelon app active and if the machine is able to reach Echelon’s servers.
Want to know how fast you’re going on the bike you’re sitting upon? That requires an internet connection. Want to get a sense of how you performed on your ride on the bike? That requires an internet connection. And if Echelon were to go out of business? Then your bike just no longer works beyond the basic function of pedaling it.
And the ability to use third-party apps is reportedly just, well, gone.
For some owners of Echelon equipment, QZ, which is currently rated as the No. 9 sports app on Apple’s App Store, has been central to their workouts. QZ connects the equipment to platforms like Zwift, which shows people virtual, scenic worlds while they’re exercising. It has also enabled new features for some machines, like automatic resistance adjustments. Because of this, Viola argued in his blog that QZ has “helped companies grow.”
“A large reason I got the [E]chelon was because of your app and I have put thousands of miles on the bike since 2021,” a Reddit user told the developer on the social media platform on Wednesday.
Instead of happily accepting that someone out there is making its product more attractive and valuable, Echelon is going for some combination of overt control and customer data, data which will be used, of course, for marketing purposes.
There’s also value in customer data. Getting more customers to exercise with its app means Echelon may gather more data for things like feature development and marketing.
What you won’t hear anywhere, at least that I can find, is any discussion of returns or refunds for customers who bought these bikes to do things they no longer can do after the fact. That’s about as clear a bait-and-switch scenario as you’re likely to find.
Unfortunately, with the FTC’s Bureau of Consumer Protection being run by just another Federalist Society imp, it’s unlikely that anything material will be done to stop this sort of thing.
In the wake of storefronts like Steam and itch.io curbing the sale of adult games, irate fans have started an organized campaign against the payment processors that they believe are responsible for the crackdown. While the movement is still in its early stages, people are mobilizing with an eye toward overwhelming communication lines at companies like Visa and Mastercard in a way that will make the concern impossible to ignore.
On social media sites like Reddit and Bluesky, people are urging one another to get into contact with Visa and Mastercard through emails and phone calls. Visa and Mastercard have become the targets of interest because the affected storefronts both say that their decisions around adult games were motivated by the danger of losing the ability to use major payment processors while selling games. These payment processors have their own rules regarding usage, but they are vaguely defined. But losing infrastructure like this could impact audiences well beyond those who care about sex games, spokespeople for Valve and itch.io said.
In a now-deleted post on the Steam subreddit with over 17,000 upvotes, commenters say that customer service representatives for both payment processors seem to already be aware of the problem. Sometimes, the representatives will say that they’ve gotten multiple calls on the subject of adult game censorship, but that they can’t really do anything about it.
The folks applying pressure know that someone at a call center has limited power in a scenario like this one; typically, agents are equipped to handle standard customer issues like payment fraud or credit card loss. But the point isn’t to enact change through a specific phone call: It’s to cause enough disruption that the ruckus theoretically starts costing payment processors money.
“Emails can be ignored, but a very very long queue making it near impossible for other clients to get in will help a lot as well,” reads the top comment on the Reddit thread. In that same thread, people say that they’re hanging onto the call even if the operator says that they’ll experience multi-hour wait times presumably caused by similar calls gunking up the lines. Beyond the stubbornness factor, the tactic is motivated by the knowledge that most customer service systems will put people who opt for call-backs in a lower priority queue, as anyone who opts in likely doesn’t have an emergency going on.
Image: OppaiMan
“Do both,” one commenter suggests. “Get the call back, to gum up the call back queue. Then call in again and wait to gum up the live queue.”
People are also using email to voice their concerns directly to the executives at both Visa and Mastercard, payment processors that activist group Collective Shout called out by name in their open letter requesting that adult games get pulled. Emails are also getting sent to customer service. In light of the coordinated effort, many people are getting a pre-written response that reads:
Thank you for reaching out and sharing your perspective. As a global company, we follow the laws and regulations everywhere we do business. While we explicitly prohibit illegal activity on our network, we are equally committed to protecting legal commerce. If a transaction is legal, our policy is to process the transaction. We do not make moral judgments on legal purchases made by consumers. Visa does not moderate content sold by merchants, nor do we have visibility into the specific goods or services sold when we process a transaction. When a legally operating merchant faces an elevated risk of illegal activity, we require enhanced safeguards for the banks supporting those merchants. For more information on Visa’s policies, please visit our network integrity page on Visa.com. Thank you for writing.
On platforms like Bluesky, resources are being shared to help people know who to contact and how, including possible scripts for talking to representatives or sending emails. A website has been set up with the explicit purpose of arming concerned onlookers with the tools and knowledge necessary to do their part in the campaign.
Through it all, gamers are telling one another to remain cordial during any interactions with payment processors, especially when dealing with low-level workers who are just trying to do their job. For executives, the purpose of maintaining a considerate tone is to help the people in power take the issue seriously.
The strategy is impressive in its depth and breadth of execution. While some charge in with an activist bent, others say that they’re pretending to be confused customers who want to know why they can’t use Visa or Mastercard to buy their favorite games.
Meanwhile, Collective Shout — the organization that originally complained to Steam, Visa, and Mastercard about adult games featuring non-consensual violence against women — has also recently put out a statement of its own alongside a timeline of events.
“We raised our objection to rape and incest games on Steam for months, and they ignored us for months,” reads a blog post from Collective Shout. “We approached payment processors because Steam did not respond to us.”
Collective Shout claims that it only petitioned itch.io to pull games with sexualized violence or torture against women, but allegedly, the storefront made its own decision to censor NSFW content sitewide. At current, itch.io has deindexed games with adult themes, meaning that these games are not viewable on their search pages. The indie storefront is still in the middle of figuring out and outlining its rules for adult content on the website, but the net has been cast so wide that some games with LGBT themes are being impacted as well.
In another popular Reddit thread, users say that customer service representatives are shifting from confusion to reiterating that their concerns are being “heard.”
“I will be calling them again in a few days to see if there is any progress on changing the situation,” says the original poster.
Perhaps a different comment in that thread summarizes the ordeal best: “There’s really only 2 things that can unite Gamers: hate campaigns and gooning.”
As a fight with credit card companies over adult games leads to renewed concerns about censorship on Steam and even on indie platforms like itch.io, a recent warning by Nier: Automata director Yoko Taro calling censorship a “security hole that endangers democracy itself” has become relevant again.
The comments came last November when the Manga Library Z online repository for digital downloads of out-of-print manga was forced to shut down. The group blamed international credit card companies, presumably Visa and Mastercard, who wanted the site to censor certain words from its copies of adult manga.
“Publishing and similar fields have always faced regulations that go beyond the law, but the fact that a payment processor, which is involved in the entire infrastructure of content distribution, can do such things at its own discretion seems to me to be dangerous on a whole new level,” Taro wrote in a thread at the time, according to a translation by Automaton.
He continued:
It implies that by controlling payment processing companies, you can even censor another country’s free speech. I feel like it’s not just a matter of censoring adult content or jeopardizing freedom of expression, but rather a security hole that endangers democracy itself.
Manga Library Z was eventually able to come back online thanks to a crowdfunding campaign earlier this year, but now video game developers behind adult games with controversial themes are facing similar issues on Steam and itch.io due to recent boycott campaigns. Some artists and fans have been organizing reverse boycotts calling for Visa, Mastercard, and others to end their “moral panic.” One such petition has nearly 100,000 signatures so far.
“Some of the games that have been caught up in the last day’s changes on Itch are games that up-and-coming creators have made about their own experiences in abusive relationships, or dealing with trauma, or coming out of the closet and finding their first romance as an LGBTQ person,” NYU Game Center chair Naomi Clark told 404 Media this week. She mentioned Jenny Jiao Hsia’s autobiographical Consume Me as one example of the type of work that could be censored under the platform’s shifting definitions of what’s acceptable.
TL;DR – use a VPN or take a picture of yourself in Death Stranding
Earlier this week, the United Kingdom’s age assurance requirement for sites that publish pornographic material went into effect, which has resulted in everything from Pornhub to Reddit and Discord displaying an age verification panel when users attempt to visit. There’s just one little problem. As The Verge notes, all it takes to defeat the age-gating is a VPN, and those aren’t hard to come by these days.
Here’s the deal: Ofcom, the UK’s telecom regulator, requires online platforms to verify the age of their users if they are accessing a site that either publishes or allows users to publish pornographic material. Previously, a simple click of an “I am over 18” button would get you in. Now, platforms are mandated to use a verification method that is “strong” and “highly effective.” A few of those acceptable methods include verifying with a credit card, uploading a photo ID, or submitting to a “facial age estimation” in which you upload a selfie so a machine can determine if you look old enough to pleasure yourself responsibly.
Those options vary from annoying to creepily intrusive, but there’s a little hitch in the plan: Currently, most platforms are determining a user’s location based on IP address. If you have an IP that places you in the UK, you have to verify. But if you don’t, you’re free to browse without interruption. And all you need to change your IP address is a VPN.
Ofcom seems aware of this very simple workaround. According to the BBC, the regulator has rules that make it illegal for platforms to host, share, or allow content that encourages people to use a VPN to bypass the age authentication page. It also encouraged parents to block or control VPN usage by their children to keep them from dodging the age checkers.
It seems that people are aware of this option. Google Trends shows that searches for the term “VPN” have skyrocketed in the UK since the age verification requirement went into effect.
[…]
But the thing about Ofcom’s implementation here is that it’s not just blocking kids from seeing harmful material—it’s exposing everyone to invasive, privacy-violating risks. When the methods for accomplishing the stated goal require people to reveal sensitive data, including their financial information, or give up pictures of their face to be scanned and processed by AI, it’s kinda hard to blame anyone for just wanting to avoid that entirely. Whether they’re horny teens trying to skirt the system or adults, getting a face scan before opening Pornhub kinda kills the mood.
An X user named Dany Sterkhov appears to be the first to discover the hack. On July 25, he posted that he had bypassed Discord’s age verification check using the photo mode in the video game Death Stranding.
[…]
The Verge and PCGamer have both tried Sterkhov’s hack themselves and confirmed it works.
Most of these companies rely on third-party platforms to handle age verification. These services typically give users the option to upload a government-issued photo ID or submit photos of themselves.
Discord uses a platform called k-ID for age verification. According to The Verge’s Tom Warren, all he had to do to pass the check was point his phone’s camera at his monitor to scan the face of Sam Bridges, the protagonist of Death Stranding, using the game’s photo mode. The system did ask him to open and close his mouth—something that is easy enough to do in the game.
Warren was also able to bypass Reddit’s age check, which is handled by Persona, using the same method. However, the trick didn’t work with Bluesky’s system, which uses Yoti for age verification.
[…]
ProtonVPN reported on X that it saw an over 1,400 percent increase in sign-ups in the U.K. after the age verification requirements took effect. VPNs let people browse the web as if they were in a different location, making it easier to bypass the U.K.’s age checks.
In the U.S., laws requiring similar age verification systems for porn sites have passed in nearly half the states. Nine states in the U.S. have also passed laws requiring parental consent or age verification for social media platforms.
The problem is that, besides being unenforceable, these checks leave a lot of very personal data inside the age verifiers’ databases. These databases are clear targets and will get hacked.
The US Senate has granted the Internet Archive federal depository status, making it officially part of an 1,100-library network that gives the public access to government documents, KQED reported. The designation was made official in a letter from California Senator Alex Padilla to the Government Publishing Office that oversees the network. “The Archive’s digital-first approach makes it the perfect fit for a modern federal depository library, expanding access to federal government publications amid an increasingly digital landscape,” he wrote.
[…]
With its new status, the Internet Archive will gain improved access to government materials, founder Brewster Kahle said in a statement. “By being part of the program itself, it just gets us closer to the source of where the materials are coming from, so that it’s more reliably delivered to the Internet Archive, to then be made available to the patrons of the Internet Archive or partner libraries.” The Archive could also help other libraries move toward digital preservation, given its experience in that area.
It’s some good news for the site, which has faced legal battles of late. It was sued by major publishers over its digital book loans during the coronavirus pandemic and was forced by a federal court in 2023 to remove more than half a million titles. More recently, major music labels filed lawsuits over its Great 78 Project, which strove to preserve 78 RPM records. If it loses that case, it could owe more than $700 million in damages and possibly be forced to shut down.
The new designation likely won’t aid its legal problems, but it does affirm the site’s importance to the public. “In October, the Internet Archive will hit a milestone of 1 trillion pages,” Kahle wrote. “And that 1 trillion is not just a testament to what libraries are able to do, but actually the sharing that people and governments have to try and create an educated populace.”
Copilot Vision is an extension of Microsoft’s divisive Recall, a feature initially sort of exclusive to the Copilot+ systems with a neural co-processor of sufficient computational power. Like Recall, which was pulled due to serious security failings and subject to a lengthy delay before its eventual relaunch, Copilot Vision is designed to analyze everything you do on your computer.
It does this, when enabled, by capturing constant screenshots and feeding them to an optical character recognition system and a large language model for analysis – but where Recall works locally, Copilot Vision sends the data off to Microsoft servers.
According to a Microsoft spokesperson back in April, users’ data will not be stored long-term, aside from transcripts of the conversation with the Copilot assistant itself, and “are not used for model training or ads personalisation.”
[…]
The screen snooping only happens when the user expressly activates it as part of a Copilot session, unlike Recall, which is constantly active in the background when enabled. It’s also designed to be more proactive than previous releases, which, for many readers, will conjure memories of Clippy and his cohort of animated assistants from the days of Microsoft Office 97 onward.
At the time of writing, Microsoft was only offering Copilot Vision in the US, with the promise (or threat) that it will be coming to very specifically “non-European countries” soon – a tip of the hat, it seems, to the European Union’s AI Act.
Researchers in Italy have developed a way to create a biometric identifier for people based on the way the human body interferes with Wi-Fi signal propagation.
The scientists claim this identifier, a pattern derived from Wi-Fi Channel State Information, can re-identify a person in other locations most of the time when a Wi-Fi signal can be measured. Observers could therefore track a person as they pass through signals sent by different Wi-Fi networks – even if they’re not carrying a phone.
Following the launch of the IEEE 802.11bf (WLAN Sensing) standardization effort in 2020, the Wi-Fi Alliance began promoting Wi-Fi Sensing, positioning Wi-Fi as something more than a data transit mechanism.
The researchers – Danilo Avola, Daniele Pannone, Dario Montagnini, and Emad Emam, from La Sapienza University of Rome – call their approach “WhoFi”, as described in a preprint paper titled, “WhoFi: Deep Person Re-Identification via Wi-Fi Channel Signal Encoding.”
(The authors presumably didn’t bother checking whether the WhoFi name was taken. But an Oklahoma-based provider of online community spaces shares the same name.)
Who are you, really?
Re-identification, the researchers explain, is a common challenge in video surveillance. It’s not always clear when a subject captured on video is the same person recorded at another time and/or place.
Re-identification doesn’t necessarily reveal a person’s identity. Instead, it is just an assertion that the same surveilled subject appears in different settings. In video surveillance, this might be done by matching the subject’s clothes or other distinct features in different recordings. But that’s not always possible.
The Sapienza computer scientists say Wi-Fi signals offer superior surveillance potential compared to cameras because they’re not affected by light conditions, can penetrate walls and other obstacles, and they’re more privacy-preserving than visual images.
“The core insight is that as a Wi-Fi signal propagates through an environment, its waveform is altered by the presence and physical characteristics of objects and people along its path,” the authors state in their paper. “These alterations, captured in the form of Channel State Information (CSI), contain rich biometric information.”
CSI in the context of Wi-Fi devices refers to information about the amplitude and phase of electromagnetic transmissions. These measurements, the researchers say, interact with the human body in a way that results in person-specific distortions. When processed by a deep neural network, the result is a unique data signature.
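As a toy illustration of the idea, and emphatically not the paper’s actual pipeline (WhoFi trains a transformer encoder on real CSI captures), one can model each person as imposing a characteristic distortion on per-subcarrier measurements, and re-identify by nearest embedding. Every name and number below is invented for the sketch:

```python
import math, random

random.seed(0)
N_SUBCARRIERS = 64  # CSI is reported per OFDM subcarrier

def body_signature(person_id: int) -> list[float]:
    """Hypothetical per-person distortion pattern: how this body attenuates and
    phase-shifts each subcarrier. A stand-in for real biometric structure."""
    rng = random.Random(person_id)
    return [rng.gauss(0.0, 1.0) for _ in range(N_SUBCARRIERS)]

def capture_csi(person_id: int, noise: float = 0.3) -> list[float]:
    """One noisy CSI measurement of that person at some location."""
    return [s + random.gauss(0.0, noise) for s in body_signature(person_id)]

def embed(csi: list[float]) -> list[float]:
    """The 'encoder': here just L2 normalisation; WhoFi uses a deep network."""
    norm = math.sqrt(sum(x * x for x in csi)) or 1.0
    return [x / norm for x in csi]

def cosine(a: list[float], b: list[float]) -> float:
    return sum(x * y for x, y in zip(a, b))

# Gallery: one enrolled embedding per known person (IDs are arbitrary)
gallery = {pid: embed(capture_csi(pid)) for pid in (1, 2, 3)}

def re_identify(probe_csi: list[float]) -> int:
    """Match a fresh capture, possibly from another network, against the gallery."""
    probe = embed(probe_csi)
    return max(gallery, key=lambda pid: cosine(gallery[pid], probe))
```

The re-identification logic is the part that carries over to the real system: enroll an embedding once, then compare any later capture against the gallery by similarity, no camera or phone required.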
Researchers proposed a similar technique, dubbed EyeFi, in 2020, and asserted it was accurate about 75 percent of the time.
The Rome-based researchers who proposed WhoFi claim their technique makes accurate matches on the public NTU-Fi dataset up to 95.5 percent of the time when the deep neural network uses the transformer encoding architecture.
“The encouraging results achieved confirm the viability of Wi-Fi signals as a robust and privacy-preserving biometric modality, and position this study as a meaningful step forward in the development of signal-based Re-ID systems,” the authors say.
In Spain, LaLiga, the country’s top professional football league, has not only continued to block sites, it has even ignored attempts by the Vercel cloud computing service to prevent overblocking, whereby many other unrelated sites are knocked out too. As TorrentFreak reported:
the company [Vercel] set up an inbox which gave LaLiga direct access to its Site Reliability Engineering incident management system. This effectively meant that high priority requests could be processed swiftly, in line with LaLiga’s demands while avoiding collateral damage.
Despite Vercel’s attempts to give LaLiga the blocks it wanted without harming other users, the football league ignored the new management system, and continued to demand excessively wide blocks. As Walled Culture has noted, this is not some minor, fringe issue: overblocking could have serious social consequences. That’s something Cloudflare’s CEO underlined in the context of LaLiga’s actions. According to TorrentFreak, he warned:
It’s only a matter of time before a Spanish citizen can’t access a life-saving emergency resource because the rights holder in a football match refuses to send a limited request to block one resource versus a broad request to block a whole swath of the Internet.
In India, courts are granting even more powerful site blocks at the request of copyright companies. For example, the High Court in New Delhi has granted a new type of blocking order significantly called a “superlative injunction”. The same court has issued orders to five domain registrars to block a number of sites, and to do so globally – not just in India. In America, meanwhile, there are renewed efforts to bring in site blocking laws, amidst fears that these too could lead to harmful overblocking.
The pioneer of this kind of excessive site blocking is Italy, with its Piracy Shield system. As Walled Culture wrote recently, there are already moves to expand Piracy Shield that will make it worse in a number of ways. The overreach of Piracy Shield has prompted the Computer & Communications Industry Association (CCIA) to write to the European Commission, urging the latter to assess the legality of the Piracy Shield under EU law. And that, finally, is what the European Commission is beginning to do.
A couple of weeks ago, the Commission sent a letter to Antonio Tajani, Italy’s Minister of Foreign Affairs and International Cooperation. In it, the European Commission offered some comments on Italy’s notification of changes in its copyright law. These changes include “amendments in the Anti-Piracy Law that entrusted Agcom [the Italian Authority for Communications Guarantees] to implement the automated platform later called the “Piracy Shield”.” In the letter, the European Commission offers its thoughts on whether Piracy Shield complies with the Digital Services Act (DSA), one of the key pieces of legislation that regulates the online world in the EU. The Commission wrote:
The DSA does not provide a legal basis for the issuing of orders by national administrative or judicial authorities, nor does it regulate the enforcement of such orders. Any such orders, and their means of enforcement, are to be issued on the basis of the applicable Union law or national law in compliance with Union law
In other words, the Italian government cannot just vaguely invoke the DSA to justify Piracy Shield’s extended powers. The letter goes on:
The Commission would also like to emphasise that the effective tackling of illegal content must also take into due account the fundamental right to freedom of expression and information under the Charter of Fundamental Rights of the EU. As stated in Recital 39 of the DSA “[I]n that regard, the national judicial or administrative authority, which might be a law enforcement authority, issuing the order should balance the objective that the order seeks to achieve, in accordance with the legal basis enabling its issuance, with the rights and legitimate interests of all third parties that may be affected by the order, in particular their fundamental rights under the Charter”.
This is a crucial point in the context of overblocking. Shutting down access to thousands, sometimes millions, of unrelated sites as the result of a poorly targeted injunction clearly fails to take into account “the rights and legitimate interests of all third parties that may be affected by the order”. The European Commission also has a withering comment on Piracy Shield’s limited redress mechanism for those blocked in error:
the notified draft envisages the possibility for the addressee of the order to lodge a complaint (“reclamo”) within 5 days from the notification of the order, while the order itself would have immediate effect. The Authority must then decide on these complaints within 10 days as laid down in Article 8-bis(4), 9-bis(7) and Article 10(9) of the notified draft. The Commission notes that there do not seem to be other measures available to the addressee of the order to help prevent eventual erroneous or excessive blocking of content. Furthermore, as also explained in the Reply, the technical specifications of the Piracy Shield envisage unblocking procedures limited to 24 hours from reporting in the event of an error. This limitation to 24 hours does not seem, in principle, to respond to any justified need and could lead to persisting erroneous blockings not being resolved.
The letter concludes by inviting “the Italian authorities to take into account the above comments in the final text of the notified draft and its implementation.” That “invitation” is, of course, a polite way of ordering the Italian government to fix the problems with Piracy Shield that the letter has just run through. They may be couched in diplomatic language, but the European Commission’s “comments” are in fact a serious slapdown to a bad law that seems not to be compliant with the DSA in several crucial respects. It will be interesting to see how the Italian authorities respond to this subtle but public reprimand.
Google has been ordered by a court in the U.S. state of California to pay $314 million over charges that it misused Android device users’ cellular data when they were idle to passively send information to the company.
In their lawsuit, the plaintiffs argued that Google’s Android operating system leverages users’ cellular data to transmit a “variety of information to Google” without their permission, even when their devices are kept in an idle state.
“Although Google could make it so that these transfers happen only when the phones are connected to Wi-Fi, Google instead designed these transfers so they can also take place over a cellular network,” they said.
“Google’s unauthorized use of their cellular data violates California law and requires Google to compensate Plaintiffs for the value of the cellular data that Google uses for its own benefit without their permission.”
The transfers, the plaintiffs argued, occur when Google properties are open and operating in the background, even in situations where a user has closed all Google apps, and their device is dormant, thereby misappropriating users’ cellular data allowances.
In one instance, the plaintiffs found that a Samsung Galaxy S7 with default settings, the standard pre-loaded apps, and a newly connected Google account sent and received 8.88 MB of cellular data per day, with 94% of the communications occurring between Google and the device.
The information exchange happened approximately 389 times within a span of 24 hours. The transferred information mainly consisted of log files containing operating system metrics, network state, and the list of open apps.
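As a quick sanity check on the complaint's figures (8.88 MB/day, 94% Google-bound, roughly 389 exchanges in 24 hours), a back-of-envelope calculation puts the average exchange at around 22 KB, which is consistent with small log-file uploads:

```python
# Figures quoted in the complaint; the derived per-transfer size is our own
# back-of-envelope estimate, not a number from the court documents.
total_mb_per_day = 8.88
google_share = 0.94
transfers_per_day = 389

google_mb = total_mb_per_day * google_share          # ~8.35 MB/day to/from Google
avg_kb_per_transfer = google_mb * 1024 / transfers_per_day
print(round(google_mb, 2), round(avg_kb_per_transfer, 1))  # → 8.35 22.0
```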
“Log files are typically not time-sensitive, and transmission of them could easily be delayed until Wi-Fi is available,” according to court documents.
“Google could also program Android to allow users to enable passive transfers only when they are on Wi-Fi connections, but apparently it has chosen not to do so. Instead, Google has chosen to simply take advantage of Plaintiffs’ cellular data allowances.”
That’s not all. The court complaint also cited a 2018 experiment which found that an Android device that was “outwardly dormant and stationary,” but had the Chrome web browser open in the background, made about 900 passive transfers in 24 hours.
I use as many ad-blocking programs as possible, but no matter how many I install, real-life advertising is still there, grabbing my attention when I’m just trying to go for a walk. Thankfully, there may be a solution on the horizon. Software engineer Stijn Spanhove recently posted a concept video showing what real-time, real-life ad-blocking looks like on a pair of Snap Spectacles, and I really want it. Check it out:
The idea is that the AI in your smart glasses recognizes advertisements in your visual field and “edits them out” in real time, sparing you from ever seeing what they want you to see.
While Spanhove’s video shows a red block over the offending ads, you could conceivably cover that Wendy’s ad with anything you want—an abstract painting, a photo of your family, an ad for Arby’s, etc.
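Spanhove hasn't published his implementation, but the compositing half of the trick, painting over whatever regions an ad detector flags, is simple to sketch. The detector itself is assumed here; the frame, bounding boxes, and pixel values are made up for illustration:

```python
def redact_regions(frame, boxes, fill):
    """Overwrite detected ad bounding boxes with a replacement value.

    frame: 2D grid of pixel values (rows of columns).
    boxes: (x, y, w, h) tuples, as a hypothetical ad detector might emit.
    fill:  value to draw in place of the ad (a red block in the demo video,
           or pixels from a family photo, an Arby's ad, etc.).
    """
    for x, y, w, h in boxes:
        for row in range(y, y + h):
            for col in range(x, x + w):
                frame[row][col] = fill
    return frame

# A 4x6 "frame" of zeros with one detected ad at (x=1, y=1), 3 wide, 2 tall.
frame = [[0] * 6 for _ in range(4)]
redact_regions(frame, [(1, 1, 3, 2)], fill=9)
```

In a real AR pipeline this per-pixel fill would be a GPU overlay aligned to the detector's output every frame; the principle is the same.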
The Supreme Court this morning took a chainsaw to the First Amendment on the internet, and the impact is going to be felt for decades going forward. In the FSC v. Paxton case, the Court upheld the very problematic 5th Circuit ruling that age verification online is acceptable under the First Amendment, despite multiple earlier Supreme Court rulings that said the opposite.
Justice Thomas wrote the 6-3 majority opinion, with Justice Kagan writing the dissent (joined by Sotomayor and Jackson). The practical effect: states can now force websites to collect government IDs from anyone wanting to view adult content, creating a massive chilling effect on protected speech and opening the door to much broader online speech restrictions.
Thomas accomplished this by pulling off some remarkable doctrinal sleight of hand. He ignored the Court’s own precedents in Ashcroft v. ACLU by pretending online age verification is just like checking ID at a brick-and-mortar store (it’s not), applied a weaker “intermediate scrutiny” standard instead of the “strict scrutiny” that content-based speech restrictions normally require, and—most audaciously—invented an entirely new category of “partially protected” speech that conveniently removes First Amendment protections exactly when the government wants to burden them. As Justice Kagan’s scathing dissent makes clear, this is constitutional law by result-oriented reasoning, not principled analysis.
[…]
The real danger here isn’t just Texas’s age verification law—it’s that Thomas has handed every state legislature a roadmap for circumventing the First Amendment online. His reasoning that “the internet has changed” and that intermediate scrutiny suffices for content-based restrictions will be cited in countless future cases targeting online speech. Expect age verification requirements to be attempted for social media platforms (protecting kids from “harmful” political content), for news sites (preventing minors from accessing “disturbing” coverage), and for any online speech that makes moral authorities uncomfortable.
And yes, to be clear, the majority opinion seeks to limit this just to content deemed “obscene” to avoid such problems, but it’s written so broadly as to at least open up challenges along these lines.
Thomas’s invention of “partially protected” speech, which somehow allows the government to burden speech even where it remains protected, is particularly insidious because it’s infinitely expandable. Any time the government wants to burden speech, it can simply argue that the burden is built into the right itself, making First Amendment protection vanish exactly when it’s needed most. This isn’t constitutional interpretation; it’s constitutional gerrymandering.
The conservative justices may think they’re just protecting children from pornography, but they’ve actually written a permission slip for the regulatory state to try to control online expression.
[…]
By creating his “partially protected” speech doctrine and blessing age verification burdens that would have been unthinkable a decade ago, Thomas has essentially told state governments: find the right procedural mechanism, and you can burden any online speech you dislike. Today it’s pornography. Tomorrow it will be political content that legislators deem “harmful to minors,” news coverage that might “disturb” children, or social media discussions that don’t align with official viewpoints.
The conservatives may have gotten their victory against online adult content, but they’ve handed every future administration—federal and state—a blueprint for dismantling digital free speech. They were so scared of nudity that they broke the Constitution. The rest of us will be living with the consequences for decades.
The Danish government is to clamp down on the creation and dissemination of AI-generated deepfakes by changing copyright law to ensure that everybody has the right to their own body, facial features and voice.
The Danish government said on Thursday it would strengthen protection against digital imitations of people’s identities with what it believes to be the first law of its kind in Europe.
[…]
It defines a deepfake as a very realistic digital representation of a person, including their appearance and voice.
[…]
“In the bill we agree and are sending an unequivocal message that everybody has the right to their own body, their own voice and their own facial features, which is apparently not how the current law is protecting people against generative AI.”
He added: “Human beings can be run through the digital copy machine and be misused for all sorts of purposes and I’m not willing to accept that.”
[…]
The changes to Danish copyright law will, once approved, theoretically give people in Denmark the right to demand that online platforms remove such content if it is shared without consent.
It will also cover “realistic, digitally generated imitations” of an artist’s performance without consent. Violation of the proposed rules could result in compensation for those affected.
The government said the new rules would not affect parodies and satire, which would still be permitted.
An interesting take. I am curious how this goes – defending copyright can be a very detailed thing. What happens if someone alters the subject’s eyebrows in a deepfake, making them a millimetre longer? Does that invalidate the whole copyright?
A federal judge sided with Meta on Wednesday in a lawsuit brought against the company by 13 book authors, including Sarah Silverman, that alleged the company had illegally trained its AI models on their copyrighted works.
Federal Judge Vince Chhabria issued a summary judgment — meaning the judge was able to decide on the case without sending it to a jury — in favor of Meta, finding that the company’s training of AI models on copyrighted books in this case fell under the “fair use” doctrine of copyright law and thus was legal.
The decision comes just a few days after a federal judge sided with Anthropic in a similar lawsuit. Together, these cases are shaping up to be a win for the tech industry, which has spent years in legal battles with media companies arguing that training AI models on copyrighted works is fair use.
However, these decisions aren’t the sweeping wins some companies hoped for — both judges noted that their cases were limited in scope.
Judge Chhabria made clear that this decision does not mean that all AI model training on copyrighted works is legal, but rather that the plaintiffs in this case “made the wrong arguments” and failed to develop sufficient evidence in support of the right ones.
“This ruling does not stand for the proposition that Meta’s use of copyrighted materials to train its language models is lawful,” Judge Chhabria said in his decision. Later, he said, “In cases involving uses like Meta’s, it seems like the plaintiffs will often win, at least where those cases have better-developed records on the market effects of the defendant’s use.”
Judge Chhabria ruled that Meta’s use of copyrighted works in this case was transformative — meaning the company’s AI models did not merely reproduce the authors’ books.
Furthermore, the plaintiffs failed to convince the judge that Meta’s copying of the books harmed the market for those authors, which is a key factor in determining whether copyright law has been violated.
“The plaintiffs presented no meaningful evidence on market dilution at all,” said Judge Chhabria.
I have covered the Silverman et al case here several times before, and it was flawed on all levels, which is why it was thrown out against OpenAI. Most importantly, this judge and the judge in the Anthropic case both ruled that AI’s use of ingested works is transformative and not a copy. Just like when you read a book, you can recall bits of it for inspiration, but you don’t (well, most people don’t!) remember word for word what you read.
[…] As highlighted in a Reddit post, Google recently sent out an email to some Android users informing them that Gemini will now be able to “help you use Phone, Messages, WhatsApp, and Utilities on your phone whether your Gemini Apps Activity is on or off.” That change, according to the email, will take place on July 7. In short, that sounds—at least on the surface—like whether you have opted in or out, Gemini has access to all of those very critical apps on your device.
Google continues in the email, which was screenshotted by Android Police, by stating that “if you don’t want to use these features, you can turn them off in Apps settings page,” but doesn’t elaborate on where to find that page or what exactly will be disabled if you avail yourself of that setting option. Notably, when App Activity is enabled, Google stores information on your Gemini usage (inputs and responses, for example) for up to 72 hours, and some of that data may actually be reviewed by a human. That’s all to say that enabling Gemini access to those critical apps by default may be a bridge too far for some who are worried about protecting their privacy or wary of AI in general.
[…]
The worst part is, if we’re not careful, all of that information might end up being collected without our consent, or at least without our knowledge. I don’t know about you, but as much as I want AI to order me a cab, I think keeping my text messages private is a higher priority.
A federal judge in San Francisco ruled late on Monday that Anthropic’s use of books without permission to train its artificial intelligence system was legal under U.S. copyright law.
Siding with tech companies on a pivotal question for the AI industry, U.S. District Judge William Alsup said Anthropic made “fair use” of books by writers Andrea Bartz, Charles Graeber and Kirk Wallace Johnson to train its Claude large language model.
Alsup also said, however, that Anthropic’s copying and storage of more than 7 million pirated books in a “central library” infringed the authors’ copyrights and was not fair use. The judge has ordered a trial in December to determine how much Anthropic owes for the infringement.
U.S. copyright law says that willful copyright infringement can justify statutory damages of up to $150,000 per work.
An Anthropic spokesperson said the company was pleased that the court recognized its AI training was “transformative” and “consistent with copyright’s purpose in enabling creativity and fostering scientific progress.”
The writers filed the proposed class action against Anthropic last year, arguing that the company, which is backed by Amazon (AMZN.O) and Alphabet (GOOGL.O), used pirated versions of their books without permission or compensation to teach Claude to respond to human prompts.
The proposed class action is one of several lawsuits brought by authors, news outlets and other copyright owners against companies including OpenAI, Microsoft (MSFT.O) and Meta Platforms (META.O) over their AI training.
The doctrine of fair use allows the use of copyrighted works without the copyright owner’s permission in some circumstances.
Fair use is a key legal defense for the tech companies, and Alsup’s decision is the first to address it in the context of generative AI.
AI companies argue their systems make fair use of copyrighted material to create new, transformative content, and that being forced to pay copyright holders for their work could hamstring the burgeoning AI industry.
Anthropic told the court that it made fair use of the books and that U.S. copyright law “not only allows, but encourages” its AI training because it promotes human creativity. The company said its system copied the books to “study Plaintiffs’ writing, extract uncopyrightable information from it, and use what it learned to create revolutionary technology.”
Copyright owners say that AI companies are unlawfully copying their work to generate competing content that threatens their livelihoods.
Alsup agreed with Anthropic on Monday that its training was “exceedingly transformative.”
“Like any reader aspiring to be a writer, Anthropic’s LLMs trained upon works not to race ahead and replicate or supplant them — but to turn a hard corner and create something different,” Alsup said.
Alsup also said, however, that Anthropic violated the authors’ rights by saving pirated copies of their books as part of a “central library of all the books in the world” that would not necessarily be used for AI training.
Anthropic and other prominent AI companies including OpenAI and Meta Platforms have been accused of downloading pirated digital copies of millions of books to train their systems.
Anthropic had told Alsup in a court filing that the source of its books was irrelevant to fair use.
“This order doubts that any accused infringer could ever meet its burden of explaining why downloading source copies from pirate sites that it could have purchased or otherwise accessed lawfully was itself reasonably necessary to any subsequent fair use,” Alsup said on Monday.
This makes sense to me. The training itself is much like any person reading a book and using it as inspiration. It does not copy it. And any reader should have bought (or borrowed) the book. Why Anthropic apparently used pirated copies, and why it kept a separate library of the books, is beyond me.
An anonymous reader quotes a report from Ars Technica: After a court ordered OpenAI to “indefinitely” retain all ChatGPT logs, including deleted chats, of millions of users, two panicked users tried and failed to intervene. The order sought to preserve potential evidence in a copyright infringement lawsuit raised by news organizations. In May, Judge Ona Wang, who drafted the order, rejected the first user’s request (PDF) on behalf of his company simply because the company should have hired a lawyer to draft the filing. But more recently, Wang rejected (PDF) a second claim from another ChatGPT user, and that order went into greater detail, revealing how the judge is considering opposition to the order ahead of oral arguments this week, which were urgently requested by OpenAI.
The second request (PDF) to intervene came from a ChatGPT user named Aidan Hunt, who said that he uses ChatGPT “from time to time,” occasionally sending OpenAI “highly sensitive personal and commercial information in the course of using the service.” In his filing, Hunt alleged that Wang’s preservation order created a “nationwide mass surveillance program” affecting and potentially harming “all ChatGPT users,” who received no warning that their deleted and anonymous chats were suddenly being retained. He warned that the order limiting retention to just ChatGPT outputs carried the same risks as including user inputs, since outputs “inherently reveal, and often explicitly restate, the input questions or topics input.”
Hunt claimed that he only learned that ChatGPT was retaining this information — despite policies specifying they would not — by stumbling upon the news in an online forum. Feeling that his Fourth Amendment and due process rights were being infringed, Hunt sought to influence the court’s decision and proposed a motion to vacate the order that said Wang’s “order effectively requires Defendants to implement a mass surveillance program affecting all ChatGPT users.” […] OpenAI will have a chance to defend panicked users on June 26, when Wang hears oral arguments over the ChatGPT maker’s concerns about the preservation order. In his filing, Hunt explained that among his worst fears is that the order will not be blocked and that chat data will be disclosed to news plaintiffs who may be motivated to publicly disseminate the deleted chats. That could happen if news organizations find evidence of deleted chats they say are likely to contain user attempts to generate full news articles.
Wang suggested that there is no risk at this time since no chat data has yet been disclosed to the news organizations. That could mean that ChatGPT users may have better luck intervening after chat data is shared, should OpenAI’s fight to block the order this week fail. But that’s likely no comfort to users like Hunt, who worry that OpenAI merely retaining the data — even if it’s never shared with news organizations — could cause severe and irreparable harms. Some users appear to be questioning how hard OpenAI will fight. In particular, Hunt is worried that OpenAI may not prioritize defending users’ privacy if other concerns — like “financial costs of the case, desire for a quick resolution, and avoiding reputational damage” — are deemed more important, his filing said.
NB: you would be pretty dense to think that anything you put into an externally hosted GPT would not be kept and used by that company for AI training and other analysis, so it’s not surprising that this data could be (and will be) requisitioned by other corporations and, of course, governments.
Makers of air fryers, smart speakers, fertility trackers and smart TVs have been told to respect people’s rights to privacy by the UK Information Commissioner’s Office (ICO).
People have reported feeling powerless to control how data is gathered, used and shared in their own homes and on their bodies.
After reports of air fryers designed to listen in to their surroundings and public concerns that digitised devices collect an excessive amount of personal information, the data protection regulator has issued its first guidance on how people’s personal information should be handled.
It is demanding that manufacturers and data handlers ensure data security, are transparent with consumers and ensure the regular deletion of collected information.
Stephen Almond, the executive director for regulatory risk at the ICO, said: “Smart products know a lot about us: who we live with, what music we like, what medication we are taking and much more.
“They are designed to make our lives easier, but that doesn’t mean they should be collecting an excessive amount of information … we shouldn’t have to choose between enjoying the benefits of smart products and our own privacy.
“We all rightly have a greater expectation of privacy in our own homes, so we must be able to trust smart products are respecting our privacy, using our personal information responsibly and only in ways we would expect.”
The new guidance cites a wide range of devices that are broadly known as part of the “internet of things”, which collect data that needs to be carefully handled. These include smart fertility trackers that record the dates of their users’ periods and body temperature, send it back to the manufacturer’s servers and make an inference about fertile days based on this information.
Smart speakers that listen in not only to their owner but also to other members of their family and visitors to their home should be designed so users can configure product settings to minimise the personal information they collect.
Many porn sites, including Pornhub, YouPorn, and RedTube, all went dark earlier this month in France to protest a new age verification law that would have required the websites to collect ID from users. But those sites went back online Friday after a new ruling from a French court suspended enforcement of the law until it can be determined whether it conflicts with existing European Union rules, according to France24.
Aylo, the company that owns Pornhub, has previously said that requiring age verification “creates an unacceptable security risk” and warned that setting up that kind of process makes people vulnerable to hacks and leaks of sensitive information. The French law would’ve required Aylo to verify user ages with a government-issued ID or a credit card.
[…]
Age verification laws for porn websites have been a controversial issue globally, with the U.S. seeing a dramatic uptick in states passing such laws in recent years. Nineteen states now have laws requiring age verification for porn sites, meaning that anyone who wants to access Pornhub in places like Florida and Texas needs to use a VPN.
Australia recently passed a law banning social media use for anyone under the age of 16, regardless of explicit content, and the law is currently working its way through the expected challenges. It had a 12-month buffer built in to allow the country’s internet safety regulator to figure out how to implement it. Tech giants like Meta and TikTok were dealt a blow on Friday after the regulator issued a report stating that age verification “can be private, robust and effective,” though trials are ongoing about how best to make the law work, according to ABC News in Australia.
Updated July 14: The Internet-Wide Day of Action to Save Net Neutrality on July 12 enjoyed a healthy turnout. Thousands of companies and some visible tech celebrities united against the FCC proposal called Restoring Internet Freedom, by which the new FCC chairman Ajit Pai hopes to loosen regulations for the ISPs and telecom companies that provide Internet service nationwide. The public has until mid-August to submit comments to the FCC.
The protests took many forms. Organizations including the American Civil Liberties Union, Reddit, The Nation, and Greenpeace placed website blockers to imitate what would happen if the FCC loosened regulations. Other companies participating online displayed images on their sites that simulated a slowed-down Internet, or demanded extra money for faster access.
Haley Velasco/IDG: For the July 12 Internet-Wide Day of Action advocating net neutrality, sites including The Nation displayed images showing people what the web would be like if corporations operated it for a profit.
Tech giant Google published a blog post in defense of net neutrality. “Today’s open internet ensures that both new and established services, whether offered by an established internet company like Google, a broadband provider or a small startup, have the same ability to reach users on an equal playing field.”
Melissa Riofrio/IDG: Facebook COO Sheryl Sandberg posted to her page about net neutrality as part of the July 12 Internet-Wide Day of Action.
Facebook joined in, with COO Sheryl Sandberg and CEO Mark Zuckerberg posting messages to their pages. “Keeping the internet open for everyone is crucial. Not only does it promote innovation, but it lets people access information that can change their lives and gives voice to those who might not otherwise be heard,” Sandberg said.
In Washington, FCC Commissioner Mignon Clyburn said in a statement that she supports a free and open internet. “Its benefits can be felt across our economy and around the globe,” she said. “That is why I am excited that on this day consumers, entrepreneurs and companies of all sizes, including broadband providers and internet startups, are speaking out with a unified voice in favor of strong net neutrality rules grounded in Title II. Knowing that the arc of success is bent in our favor and we are on the right side of history, I remain committed to doing everything I can to protect the most empowering and inclusive platform of our time.”
Sen. Ron Wyden, D-Ore., and Sen. Brian Schatz, D-Hawaii, wrote a letter to the FCC on Tuesday, one day early, to make sure the FCC’s system was ready to withstand a cyberattack, as well as the large volume of calls expected Wednesday.
What led up to the protest
The July 12 Internet-Wide Day of Action strove to highlight how the web would look if telecom companies were allowed to control it for profit. Organizing groups such as Fight for the Future, Free Press Action Fund, and Demand Progress want their actions to call attention to the potential impact on everyday users, such as having to pay for faster internet access.
Where net neutrality stands: Under the Open Internet Order enacted by the FCC in 2015, internet service providers cannot block access to content on websites or apps, interfere with loading speeds, or give favoritism to those who pay extra. However, FCC Chairman Ajit Pai, selected by President Trump in January, has been advocating a deregulated internet, in which ISPs could control access or charge fees without oversight. A Senate bill that would relax those regulations, called Restoring Internet Freedom (S.993), was introduced in May and referred to the Committee on Commerce, Science, and Transportation.
What this protest is for: The July 12 protest, which organizers are calling the Internet-Wide Day of Action to Save Net Neutrality, will fight for free speech on the internet under Title II of the Communications Act of 1934. On that date, websites and apps that support net neutrality will display alerts to mimic what could happen if the FCC rolled back the rules.
Who will come together for the protest: More than 180 companies including Amazon, Twitter, Etsy, OkCupid, and Vimeo, along with advocacy groups such as the ACLU, Change.org, and Greenpeace, will join the protest and urge their users and followers to do the same.
Where the protest will take place: Sites that support net neutrality will call attention to their cause by simulating what users would experience if telecom companies were allowed to control web access. Examples will include a simulated “spinning wheel of death” (when a webpage or app won’t load), blocked notifications, and requests to upgrade to paid plans. Organizers are also calling on supporters to stage in-person protests at congressional offices and post protest selfies on social media with the tag #savethenet.
Who opposes the protest: FCC Chairman Ajit Pai and large telecom companies, such as Verizon and Comcast, want to relax net neutrality rules. Some claim that an unregulated internet will allow for more competition in the marketplace, as well as oversight of privacy and security measures.
Why this protest matters: The July 12 protest is projected to be one of the largest digital protests ever planned, with more than 50,000 people, sites, and organizations participating. If successful, it would be reminiscent of a 2012 blackout for freedom of speech on the internet to protest the Stop Online Piracy Act and the PROTECT IP Act, and an internet slowdown in 2014 to demand discussions about net neutrality.
In less than three months’ time, almost no civil servant, police officer or judge in Schleswig-Holstein will be using any of Microsoft’s ubiquitous programs at work.
Instead, the northern state will turn to open-source software to “take back control” over data storage and ensure “digital sovereignty”, its digitalisation minister, Dirk Schroedter, told AFP.
“We’re done with Teams!” he said, referring to Microsoft’s messaging and collaboration tool and speaking on a video call — via an open-source German program, of course.
The radical switch-over affects half of Schleswig-Holstein’s 60,000 public servants, with 30,000 or so teachers due to follow suit in coming years.
The state’s shift towards open-source software began last year.
The current first phase involves ending the use of Word and Excel software, which are being replaced by LibreOffice, while Open-Xchange is taking the place of Outlook for emails and calendars.
Over the next few years, there will also be a switch to the Linux operating system in order to complete the move away from Windows.
[…]
“The geopolitical developments of the past few months have strengthened interest in the path that we’ve taken,” said Schroedter, adding that he had received requests for advice from across the world.
“The war in Ukraine revealed our energy dependencies, and now we see there are also digital dependencies,” he said.
The government in Schleswig-Holstein is also planning to shift the storage of its data to a cloud system not under the control of Microsoft, said Schroedter.
In an interview with Danish broadsheet newspaper Politiken [Danish], Caroline Stage Olsen, the country’s Minister for Digital Affairs, said she is planning to lead by example and start removing Microsoft software and tools from the ministry. The minister told Jutland’s Nordjyske [🇩🇰 Danish, but not paywalled] that the plan is for half the staff’s computers – including her own – to have LibreOffice in place of Microsoft Office 365 within the first month, with the goal of total replacement by the end of the year.
Given that earlier this year, US President Donald Trump was making noises about taking over Greenland, an autonomous territory of Denmark, it seems entirely understandable for the country to take a markedly increased interest in digital sovereignty – as Danish Ruby guru David Heinemeier Hansson explained just a week ago.
[…]
The more pressing problem tends to be groupware – specifically, the dynamic duo of Outlook and Exchange, as Bert Hubert told The Register earlier this year. Several older versions go end-of-life soon, along with Windows 10. Modernizing is expensive, which makes migrating look more appealing.
A primary alternative to Redmond, of course, is Mountain View. Google’s offerings can do the job. In December 2021, the Nordic Choice hotel group was hit by Conti ransomware, but rather than pay to regain access to its machines, it switched to ChromeOS.
The thing is, this is jumping from one US-based option to another. That’s why France rejected both a few years ago, and we reported on renewed EU interest early the following year. Such things may be why French SaaS groupware offering La Suite numérique is looking quite complete and polished these days.
EU organizations can host their own cloud office suite thanks to Collabora’s CODE, which runs LibreOffice on an organization’s own webservers – easing deployment and OS migration.
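For a sense of what self-hosting CODE can look like in practice, here is a minimal, hypothetical deployment sketch using Collabora’s official `collabora/code` container image. The domain, port mapping, and TLS settings below are illustrative assumptions, not a production configuration; consult Collabora’s own documentation before deploying.

```yaml
# Hypothetical docker-compose sketch for self-hosting Collabora CODE.
# "cloud.example.com" is a placeholder for the file-hosting frontend
# (e.g. a Nextcloud instance) that is allowed to connect.
services:
  code:
    image: collabora/code
    restart: always
    ports:
      - "127.0.0.1:9980:9980"   # bind locally; put a TLS reverse proxy in front
    environment:
      # Regex of permitted frontend hosts (dots escaped)
      - aliasgroup1=https://cloud\.example\.com:443
      # Assumes TLS is terminated at the reverse proxy, not in the container
      - extra_params=--o:ssl.enable=false --o:ssl.termination=true
```

The appeal of this arrangement for the organizations discussed above is that documents never leave the organization’s own servers; the office suite runs next to the file store rather than in a US-operated cloud.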
Not content to wait for open letters to influence the European Commission, Dutch parliamentarians have taken matters into their own hands by passing eight motions urging the government to ditch US-made tech for homegrown alternatives.
The motions were submitted and all passed yesterday during a discussion in the Netherlands’ House of Representatives on concerns about government data being shipped overseas. While varied, they all center on the theme of calling on the government to replace software and hardware made by US tech companies, sign new contracts with Dutch companies that offer similar services, and generally safeguard the country’s digital sovereignty.
“With each IT service our government moves to American tech giants, we become dumber and weaker,” Dutch MP Barbara Kathmann, author of four of the motions, told The Register. “If we continue outsourcing all of our digital infrastructure to billionaires that would rather escape Earth by building space rockets, there will be no Dutch expertise left.”
Kathmann’s measures specifically call on the government to stop the migration of Dutch information and communications technology to American cloud services, create a Dutch national cloud, repatriate the .nl top-level domain to systems operating within the Netherlands, and prepare risk analyses and exit strategies for all government systems hosted by US tech giants. The other measures make similar calls to eliminate the presence of US tech companies in government systems and to prefer local alternatives.
“We have identified the causes of our full dependency on US services,” Kathmann told us. “We have to start somewhere – by pausing all thoughtless migrations to American hyperscalers, new opportunities open up for Dutch and European providers.”
The motions passed by the Dutch parliament come as the Trump administration ratchets up tensions with a number of US allies – the EU among them. Nearly 100 EU-based tech companies and lobbyists sent an open letter to the European Commission this week asking it to find a way to divest the bloc from systems managed by US companies due to “the stark geopolitical reality Europe is now facing.”
The only question is, how did those in charge of procurement allow themselves to buy into 100% US, closed-source vendor lock-in in the first place, gutting the EU software development market?
Last month, ahead of the launch of the Switch 2 and its GameChat communication features, Nintendo updated its privacy policy to note that the company “may also monitor and record your video and audio interactions with other users.” Now that the Switch 2 has officially launched, we have a clearer understanding of how the console handles audio and video recorded during GameChat sessions, as well as when that footage may be sent to Nintendo or shared with partners, including law enforcement. Before using GameChat on Switch 2 for the first time, you must consent to a set of GameChat Terms displayed on the system itself. These terms warn that chat content is “recorded and stored temporarily” both on your system and the system of those you chat with. But those stored recordings are only shared with Nintendo if a user reports a violation of Nintendo’s Community Guidelines, the company writes.
That reporting feature lets a user “review a recording of the last three minutes of the latest three GameChat sessions” to highlight a particular section for review, suggesting that chat sessions are not being captured and stored in full. The terms also lay out that “these recordings are available only if the report is submitted within 24 hours,” suggesting that recordings are deleted from local storage after a full day. If a report is submitted to Nintendo, the company warns that it “may disclose certain information to third parties, such as authorities, courts, lawyers, or subcontractors reviewing the reported chats.” If you don’t consent to the potential for such recording and sharing, you’re prevented from using GameChat altogether.
Nintendo is extremely clear that the purpose of its recording and review system is “to protect GameChat users, especially minors” and “to support our ability to uphold our Community Guidelines.” This kind of human moderator review of chats is pretty common in the gaming world and can even apply to voice recordings made by various smart home assistants. […] Overall, the time-limited, local-unless-reported recordings Nintendo makes here seem like a minimal intrusion on the average GameChat user’s privacy. Still, if you’re paranoid about Nintendo potentially seeing and hearing what’s going on in your living room, it’s good to at least be aware of it.
The United States government has collected DNA samples from upwards of 133,000 migrant children and teenagers—including at least one 4-year-old—and uploaded their genetic data into a national criminal database used by local, state, and federal law enforcement, according to documents reviewed by WIRED. The records, quietly released by US Customs and Border Protection earlier this year, offer the most detailed look to date at the scale of CBP’s controversial DNA collection program. They reveal for the first time just how deeply the government’s biometric surveillance reaches into the lives of migrant children, some of whom may still be learning to read or tie their shoes—yet whose DNA is now stored in a system originally built for convicted sex offenders and violent criminals.
[…]
Spanning from October 2020 through the end of 2024, the records show that CBP swabbed the cheeks of between 829,000 and 2.8 million people, with experts estimating that the true figure, excluding duplicates, is likely well over 1.5 million. That number includes as many as 133,539 children and teenagers. These figures mark a sweeping expansion of biometric surveillance—one that explicitly targets migrant populations, including children.
[…]
Under current rules, DNA is generally collected from anyone who is also fingerprinted. According to DHS policy, 14 is the minimum age at which fingerprinting becomes routine.
[…]
“Taking DNA from a 4-year old and adding it into CODIS flies in the face of any immigration purpose,” she says, adding, “That’s not immigration enforcement. That’s genetic surveillance.”
In 2024, Glaberson coauthored a report called “Raiding the Genome” that was the first to try to quantify DHS’s 2020 expansion of DNA collection. It found that if DHS continues to collect DNA at the rate the agency itself projects, one-third of the DNA profiles in CODIS by 2034 will have been taken by DHS, and seemingly without any real due process—the protections that are supposed to be in place before law enforcement compels a person to hand over their most sensitive information.