Futurehome Breaks IoT Devices Unless A New Subscription Is Paid For

[…]It’s bad enough when a company goes fully kablooey, has to shut down all its backend servers and gear, and renders its products useless. That sucks; there are ways around it, and it shouldn’t be allowed. But it’s quite different from perfectly healthy companies selling a product that has features and capabilities out of the box, only to claw back those capabilities and either shut them down or stick them behind some subscription paywall.

And the latter of those examples is what is happening again, this time with Futurehome, which makes a series of smarthome IoT products.

Launched in 2016, Futurehome’s Smarthub is marketed as a central hub for controlling Internet-connected devices in smart homes. For years, the Norwegian company sold its products, which also include smart thermostats, smart lighting, and smart fire and carbon monoxide alarms, for a one-time fee that included access to its companion app and cloud platform for control and automation. As of June 26, though, those core features require a 1,188 NOK (about $116.56) annual subscription fee, turning the smart home devices into dumb ones if users don’t pay up.

“You lose access to controlling devices, configuring; automations, modes, shortcuts, and energy services,” a company FAQ page says.

You also can’t get support from Futurehome without a subscription. “Most” paid features are inaccessible without one, too, according to the FAQ from Futurehome, which claims its products are in 38,000 households.

That’s potentially nearly a decade of a purchased product working one way, only to have its core functionality tucked behind a subscription paywall on the whim of the company. This is one of those situations that, and I don’t care what country you live in, should elicit the common sense reaction of: this shouldn’t be fucking legal. But, due to the apathy of government and the steady erosion of anything remotely resembling true consumer protection, this sort of thing is happening more and more frequently.

And it’s not as though all of this functionality requires support from backend company assets, either. Some do, sure, but some of the features that suddenly don’t work appear to have nothing to do with centralized corporate servers or services.

[…]

As you’d expect, some people are attempting to figure out how to make Futurehome products work without the subscription. Perhaps as a result of that, Futurehome shut down its own user forum in June. In addition, the CEO is complaining about how the company now has to invest time and resources to fight its own customers’ attempts to make the products they bought work like they did at the time of purchase.

Futurehome has fought efforts to crack its firmware, with CEO Øyvind Fries telling Norwegian consumer tech website Tek.no, per a Google translation, “It is regrettable that we now have to spend time and resources strengthening the security of a popular service rather than further developing functionality for the benefit of our customers.”

But is it as regrettable as your own customers suddenly finding out the thing they bought won’t work anymore because your company didn’t business well enough?

Source: Smart Home Device Maker Renders Devices Dumb Unless A New Subscription Is Paid For | Techdirt

French city of Lyon ditching Microsoft for FOSS

The République’s third-largest city and second-largest economic hub on Tuesday cited a desire to reduce dependence on American software, extend the lifespan of its hardware and therefore reduce its environmental impact, and strengthen the technological sovereignty of its public service.

Achieving those goals will see Lyon’s government, which serves over a million people, replace Office with OnlyOffice, a package developed by Latvia-based Ascensio Systems and made available under version 3 of the GNU Affero General Public License.

The municipality also plans to adopt a collaboration suite called “Territoire Numérique Ouvert” – Open Digital Territory – for videoconferencing and office automation tasks.

France’s L’Agence nationale de la cohésion des territoires – an agency that promotes industry development in the country’s regions – awarded a €2 million ($2.3 million) grant to help develop the suite and get it running in local datacenters. Nine French communities already use the suite, which has several thousand individual users.

[…]

Lyon’s government employs almost 10,000 people, so losing it as a customer will briefly sting some regional Microsoft salespeople and partners but won’t make a noticeable dent in the software giant’s balance sheet.

However, the city’s decision comes just weeks after Denmark’s Ministry for Digitalization decided to drop Microsoft, and amid a European Union push to develop sovereign digital capabilities. That push has seen the likes of Microsoft and AWS try to reassure European customers that their cloudy continental outposts can’t be caught up in US claims to possess extraterritorial jurisdiction over data stored in facilities owned by American companies.

So maybe Lyon ditching Microsoft represents one more snowball in a growing avalanche. ®

Source: French city of Lyon ditching Microsoft for FOSS • The Register

FreeTube – The Private YouTube Client

FreeTube is a YouTube client for Windows (10 and later), Mac (macOS 11 and later), and Linux built around using YouTube more privately. You can enjoy your favorite content and creators without your habits being tracked. All of your user data is stored locally and never sent or published to the internet. FreeTube grabs data by scraping the information it needs (with either local methods or by optionally utilizing the Invidious API). With many features similar to YouTube, FreeTube has become one of the best methods to watch YouTube privately on desktop.

Source: FreeTube – The Private YouTube Client

Google AI is watching — how to turn off Gemini on Android

[…]Why you shouldn’t trust Gemini with your data

Gemini promises to simplify how you interact with your Android — fetching emails, summarizing meetings, pulling up files. But behind that helpful facade is an unprecedented level of centralized data collection, powered by a company known for privacy washing and misleading users about how their data is used, and that was hit with $2.9 billion in fines in 2024 alone, mostly for privacy violations and antitrust breaches.

Other people may see your sensitive information

Even more concerning, human reviewers may process your conversations. While Google claims these chats are disconnected from your Google account before review, that doesn’t mean much when a simple prompt like “Show me the email I sent yesterday” might return personal data like your name and phone number.

Your data may be shared beyond Google

Gemini may also share your data with third-party services. When Gemini interacts with other services, your data gets passed along and processed under their privacy policies, not just Google’s. Right now, Gemini mostly connects with Google services, but integrations with apps like WhatsApp and Spotify are already showing up. Once your data leaves Google, you cannot control where it goes or how long it’s kept.

The July 2025 update keeps Gemini connected without your consent

Before July, turning off Gemini Apps Activity automatically disabled all connected apps, so you couldn’t use Gemini to interact with other services unless you allowed data collection for AI training and human review. But Google’s July 7 update changed this behavior and now keeps Gemini connected to certain services — such as Phone, Messages, WhatsApp, and Utilities — even if activity tracking is off.

While this might sound like a privacy-conscious change — letting you use Gemini without contributing to AI training — it still raises serious concerns. Google has effectively preserved full functionality and ongoing access to your data, even after you’ve opted out.

Can you fully disable Gemini on Android?

No, and that’s by design.

[…]

How to turn off Gemini AI on Android

  1. Open the Gemini app on your Android.
  2. Tap your profile icon in the top-right corner.
  3. Go to Gemini Apps Activity*.
  4. Tap Turn off > Turn off and delete activity, and follow the prompts.
  5. Select your profile icon again and go to Apps**.
  6. Tap the toggle switch to prevent Gemini from interacting with Google apps and third-party services.

*Gemini Apps Activity is a setting that controls whether your interactions with Gemini are saved to your Google account and used to improve Google’s AI systems. When it’s on, your conversations may be reviewed by humans, stored for up to 3 years, and used for AI training. When it’s off, your data isn’t used for AI training, but it’s still stored for up to 72 hours so Google can process your requests and feedback.

**Apps are the Google apps and third-party services that Gemini can access to perform tasks on your behalf — like reading your Gmail, checking your Google Calendar schedule, retrieving documents from Google Drive, playing music via Spotify, or sending messages on your behalf via WhatsApp. When Gemini is connected to these apps, it can access your personal content to fulfill prompts, and that data may be processed by Google or shared with the third-party app according to their own privacy policies.

Source: Google AI is watching — how to turn off Gemini on Android | Proton

Sodium fuel cell could enable electric aviation, 3x more energy density than battery, sucks up CO2

Instead of a battery, the new concept is a kind of fuel cell — which is similar to a battery but can be quickly refueled rather than recharged. In this case, the fuel is liquid sodium metal, an inexpensive and widely available commodity. The other side of the cell is just ordinary air, which serves as a source of oxygen atoms. In between, a layer of solid ceramic material serves as the electrolyte, allowing sodium ions to pass freely through, and a porous air-facing electrode helps the sodium to chemically react with oxygen and produce electricity.

In a series of experiments with a prototype device, the researchers demonstrated that this cell could carry more than three times as much energy per unit of weight as the lithium-ion batteries used in virtually all electric vehicles today. Their findings are being published today in the journal Joule, in a paper by MIT doctoral students Karen Sugano, Sunil Mair, and Saahir Ganti-Agrawal; professor of materials science and engineering Yet-Ming Chiang; and five others.

[…]

This technology does appear to have the potential to be quite revolutionary, Chiang suggests. In particular, for aviation, where weight is especially crucial, such an improvement in energy density could be the breakthrough that finally makes electrically powered flight practical at significant scale.

“The threshold that you really need for realistic electric aviation is about 1,000 watt-hours per kilogram,” Chiang says. Today’s electric vehicle lithium-ion batteries top out at about 300 watt-hours per kilogram — nowhere near what’s needed. Even at 1,000 watt-hours per kilogram, he says, that wouldn’t be enough to enable transcontinental or trans-Atlantic flights.

[…]

A great deal of research has gone into developing lithium-air or sodium-air batteries over the last three decades, but it has been hard to make them fully rechargeable. “People have been aware of the energy density you could get with metal-air batteries for a very long time, and it’s been hugely attractive, but it’s just never been realized in practice,” Chiang says.

By using the same basic electrochemical concept, only making it a fuel cell instead of a battery, the researchers were able to get the advantages of the high energy density in a practical form. Unlike a battery, whose materials are assembled once and sealed in a container, with a fuel cell the energy-carrying materials go in and out.

[…]

Tests using an air stream with a carefully controlled humidity level produced more than 1,500 watt-hours per kilogram at the level of an individual “stack,” which would translate to over 1,000 watt-hours per kilogram at the full system level, Chiang says.
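As a quick sanity check on those figures (a back-of-envelope sketch using only the numbers quoted in this article, not data from the paper itself):

```python
# Figures quoted in the article, all in Wh/kg.
li_ion = 300          # today's EV lithium-ion packs, per Chiang
threshold = 1000      # rough bar Chiang cites for realistic electric aviation
na_air_system = 1000  # sodium-air cell, projected at the full-system level

# "More than three times as much energy per unit of weight" as Li-ion
# holds even at the full-system figure:
assert na_air_system / li_ion > 3
print(na_air_system / li_ion)      # ≈ 3.33x lithium-ion

# Li-ion reaches less than a third of the aviation threshold,
# while the sodium-air system sits right at it:
print(li_ion / threshold)          # 0.3
print(na_air_system >= threshold)  # True
```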

The researchers envision that to use this system in an aircraft, fuel packs containing stacks of cells, like racks of food trays in a cafeteria, would be inserted into the fuel cells; the sodium metal inside these packs gets chemically transformed as it provides the power. A stream of its chemical byproduct is given off, and in the case of aircraft this would be emitted out the back, not unlike the exhaust from a jet engine.

But there’s a very big difference: There would be no carbon dioxide emissions. Instead the emissions, consisting of sodium oxide, would actually soak up carbon dioxide from the atmosphere. This compound would quickly combine with moisture in the air to make sodium hydroxide — a material commonly used as a drain cleaner — which readily combines with carbon dioxide to form a solid material, sodium carbonate, which in turn forms sodium bicarbonate, otherwise known as baking soda.
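The exhaust chain described above also implies a simple stoichiometric bound. As a rough back-of-envelope sketch (assuming every sodium atom ends up as bicarbonate, which the article doesn’t claim will happen in practice):

```python
# Stoichiometry of the exhaust chain described above:
#   Na2O   + H2O        -> 2 NaOH
#   2 NaOH + CO2        -> Na2CO3 + H2O
#   Na2CO3 + CO2 + H2O  -> 2 NaHCO3
# Net effect: one mole of CO2 bound per mole of sodium consumed as fuel.
M_NA = 22.99    # molar mass of sodium, g/mol
M_CO2 = 44.01   # molar mass of carbon dioxide, g/mol

co2_per_kg_na = M_CO2 / M_NA
print(f"~{co2_per_kg_na:.2f} kg CO2 bound per kg of sodium fuel")  # ~1.91 kg
```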

[…]

Using sodium hydroxide to capture carbon dioxide has been proposed as a way of mitigating carbon emissions, but on its own, it’s not an economic solution because the compound is too expensive. “But here, it’s a byproduct,” Chiang explains, so it’s essentially free, producing environmental benefits at no cost.

Importantly, the new fuel cell is inherently safer than many other batteries, he says. Sodium metal is extremely reactive and must be well-protected. As with lithium batteries, sodium can spontaneously ignite if exposed to moisture. “Whenever you have a very high energy density battery, safety is always a concern, because if there’s a rupture of the membrane that separates the two reactants, you can have a runaway reaction,” Chiang says. But in this fuel cell, one side is just air, “which is dilute and limited. So you don’t have two concentrated reactants right next to each other. If you’re pushing for really, really high energy density, you’d rather have a fuel cell than a battery for safety reasons.”

While the device so far exists only as a small, single-cell prototype, Chiang says the system should be quite straightforward to scale up to practical sizes for commercialization. Members of the research team have already formed a company, Propel Aero, to develop the technology. The company is currently housed in MIT’s startup incubator, The Engine.

[…]

Source: New fuel cell could enable electric aviation | MIT News | Massachusetts Institute of Technology

Orthokeratology – contacts you wear at night that reshape your cornea so you don’t have to wear glasses or contacts by day

Orthokeratology, also referred to as ortho-k, is a noninvasive and nonsurgical process, during which specially designed contacts are fitted to a patient. This process temporarily reshapes the cornea to improve vision. It is often compared to dental braces, which are used to reshape teeth much as ortho-k is used to reshape the cornea.

While these improvements to your vision are reversible, they can be maintained as long as you wear the contacts as directed.

Ortho-k is primarily used to improve myopia: i.e., near-sightedness. Other methods of correcting myopia include wearing eyeglasses, regular contact lenses, laser eye surgery (also known as LASIK), or photorefractive keratectomy (also known as PRK).

Since both LASIK and PRK are surgical methods, some patients prefer to forgo those procedures and instead undergo nonsurgical corrections such as ortho-k. This process allows patients freedom from wearing their glasses and contact lenses all the time without having to have surgery.

Since there is no orthokeratology age limit, ortho-k is sometimes suggested to improve a child’s vision. Since vision continues to change into early adulthood for some children, surgical procedures such as LASIK and PRK are not recommended for children.

[…]

Source: What Is Orthokeratology?

How the EU allowed Big Tech to sideline everyone else to weaken the EU AI act for US profit and citizens detriment

“The current draft,” Meta wrote in a confidential lobby paper, is a case of “regulatory overreach” that “poses a significant threat to AI innovation in the EU.”

It was early 2025, and the text Meta railed against was the second draft of the EU’s Code of Practice. The Code will put the EU’s AI Act into operation by outlining voluntary requirements for general-purpose AI, or models with many different societal applications (see Box 1).

Meta’s lobby message hit the right notes, as the second von der Leyen Commission has committed to slashing regulations to stimulate European ‘competitiveness’. An early casualty of this deregulatory drive was the EU’s AI Liability Directive, which would have allowed consumers to claim compensation for harms caused by AI.

And the Code may end up being another casualty. Meta’s top lobbyist said they would not sign unless there were significant changes. Google cast doubt on its participation.

But as this investigation by Corporate Europe Observatory and Lobby Control – based on insider interviews and analysis of lobby papers – reveals, Big Tech enjoyed structural advantages from early on in the process and – playing its cards well – successfully lobbied for a much weaker Code than it could have been. That means weaker protection from potential structural biases and social harms caused by AI.

Potemkin participation: how civil society was sidelined

In a private meeting with the Commission in January 2025, Google “raised concerns about the process” of drafting the Code of Practice. The tech giant complained “model developers [were] heavily outweighed by other stakeholders”.

Only a superficial reading could support this. Over 1,000 stakeholders expressed interest to the EU’s AI Office – a newly created unit within the European Commission’s DG CNECT – in participating. Nearly four hundred organisations were approved.

But tech companies enjoyed far more access than others. Model providers – companies developing the large AI models the Code is expected to regulate – were invited to dedicated workshops with the working group chairs.

“This could be seen as a compromise,” Jimmy Farrell of the European think tank Pour Demain said. “On the one hand, they included civil society, which the AI Act did not make mandatory. On the other, they gave model providers direct access.”


Fifteen US companies, or nearly half of the total, were on the reported list of organisations invited to the model providers workshops. Among them, US tech giants Google, Microsoft, Meta, Apple, and Amazon.

Others included AI “start-ups” with multi-billion-dollar valuations such as OpenAI, Anthropic, and Hugging Face, each of which receives Big Tech funding. Another, SoftBank, is OpenAI’s lead partner for the US$500 billion Stargate investment fund.

Meeting between Commissioner McGrath and OpenAI lobbyist Lehane

In April, OpenAI dialed up its lobbying to water down the Code of Practice with a series of meetings with European politicians. Right: OpenAI’s main lobbyist Chris Lehane. Left: EU Commissioner Michael McGrath

EC – Audiovisual Service

Several European AI providers, which lobbied over the AI Act, were also involved. Some of these also partner with American tech firms, like the French Mistral AI or the Finnish SiloAI.

The participation of the other 350 organisations – which include rights advocates, civil society organisations, representatives of European corporations and SMEs, and academics – was more restricted. They had no access to the provider workshops, and despite a commitment to do so, sources said meeting minutes from the model providers workshops were not distributed to participants.

It put civil society, which participated in working group meetings and crowded plenaries, at a disadvantage. Opportunities for interaction during meetings were limited. Questions needed to be submitted beforehand through a platform called SLIDO, which others could then up-vote.

Normally, the AI Office would consider the top ten questions during meetings, although sources told us, “controversial questions would sometimes be side-stepped”. Participants could neither submit comments during meetings, nor unmute themselves.

[…]

In the absence of a full list of individual participants, which she requested but did not receive, Pfister Fetz would “write down every name she saw on the screen” and look people up afterwards, “to see if they were like-minded or not.”

Participants were given little notice and short deadlines to review and comment on draft documents. Deadlines to apply for a speaking slot to discuss a document would come before said document had even been shared. The third draft of the Code was delayed for nearly a month, without communication from the AI Office, until one day, without notice, it landed in participants’ mailboxes.

[…]

A long-standing demand from civil society was a dedicated civil society workshop. It was only after the third, severely watered-down draft of the Code of Practice that such a workshop took place.

“They had many workshops with model providers, and only one at the end with civil society, when they told us there would only be minor changes possible,” van der Geest, the fundamental rights advocate, said. “It really shows how they see civil society input: as secondary at best.”

Partnering with Big Tech and the AI office: a conflict of interest?

A contract to support the AI Office in drafting the Code of Practice was awarded, under an existing framework contract, to a consortium of external consultants – Wavestone, Intellera, and the Centre for European Policy Studies (CEPS).

It was previously reported that the lead partner, the French firm Wavestone, advised companies on AI Act compliance, but “does not have [general purpose AI] model providers among its clients”.

But our investigation revealed that the consultants do have ties to model providers.

In 2023 Wavestone announced it had been “selected by Microsoft to support the deployment and accelerated adoption of Microsoft 365 Copilot as a generative artificial intelligence tool in French companies.”

This resulted in Wavestone receiving a “Microsoft Partner of the Year Award” at the end of 2024, when it already supported the AI Office in developing the Code. The consultancy also worked with Google Cloud and is an AWS partner.

The other consortium partners also had ties to GPAI model providers. The Italian consultancy Intellera was bought in April 2024 by Accenture and is now “Part of Accenture Group”. Accenture boasted at the start of 2025 that they were “a key partner” to a range of technology providers, including Amazon, Google, IBM, Microsoft, and NVIDIA – in other words, US general purpose model providers.

The third and final consortium partner, CEPS, counted all of Big Tech among its corporate members – including Apple, AWS, Google, Meta, and Microsoft. At a rate of between €15,000 and €30,000 (plus VAT) per year, members get “access to task forces” on EU policy and “input on CEPS research priorities”.

The problem is that these consultancy firms can hardly be expected to advise the Commission to take action that would negatively impact their own clients. The EU Financial Regulation states that the Commission should therefore reject a contractor where a conflicting interest “can affect or risk the capacity to perform the contract in an independent, impartial and objective manner”.

The 2022 framework contract under which the consortium was initially hired by the European Commission also stipulated that “a contractor must take all the necessary measures to prevent any situation of conflict of interest.”

[…]

On key issues, the messaging of the US tech firms was well coordinated. Confidential lobby papers by Microsoft and Google, submitted to EU members states and seen by Corporate Europe Observatory and LobbyControl, echoed what Meta said publicly – that the Code’s requirements “go beyond the scope of the AI Act” and would “undermine” or “stifle” innovation.

It was a position carefully crafted to match the political focus on deregulation.

“The current Commission is trying to be innovation and business friendly, but is actually disproportionately benefiting Big Tech” said Risto Uuk, Head of EU Policy and Research from the Future of Life Institute.

Uuk, who curates a biweekly newsletter on the EU AI Act, added that “there is also a lot of pressure on the EU from the Trump administration not to enforce regulation.”

[…]

One of the most contentious topics has been the risk taxonomy. This determines the risks model providers will need to test for and mitigate. The second draft of the Code introduced a split between “systemic risks,” such as nuclear risks or a loss of human oversight, and a much weaker category of “additional risks for consideration”.

“Providers are mandated to identify and mitigate systemic risks,” Article 19’s Dinah van der Geest said, “but the second tier, including risks [to] fundamental rights, democracy, or the environment, is optional for providers to follow.”

These risks are far from hypothetical. From Israeli mass surveillance and killing of Palestinians in Gaza, the dissemination of disinformation during elections including by far-right groups and foreign governments, to massive lay-offs of US federal government employees, generative AI is already used in countless problematic ways. In Europe, investigative journalism has exposed the widespread use of biased AI systems in welfare systems.

The introduction of a hierarchy in the risk taxonomy offered additional lobby opportunities. Both Google and Microsoft argued that “large-scale, illegal discrimination” needed to be bumped down to optional risks.

[…]

The tech giants got their way: in the third draft, large-scale, illegal discrimination was removed from the list of systemic risks, which are mandatory to check for, and categorised under “other types of risk for potential consideration”.

Like other fundamental rights violations, it now only needs to be checked for if “it can be reasonably foreseen” and if the risk is “specific to the high-impact capabilities” of the model.

“But what is foreseeable?” asked Article 19’s Dinah van der Geest. “It will be left up to the model providers to decide.”

[…]

At the AI Action Summit in Paris in February 2025, European Commission President Ursula von der Leyen had clearly drunk the AI Kool-Aid: “We want Europe to be one of the leading AI continents. And this means embracing a way of life where AI is everywhere.” She went on to paint AI as a silver bullet for almost every societal problem: “AI can help us boost our competitiveness, protect our security, shore up public health, and make access to knowledge and information more democratic.”


The AI Action Summit marked a distinctive shift in the Commission’s discourse. Where previously the Commission paid at least lip service to safeguarding fundamental rights when rolling out AI, it has now largely abandoned that discourse, talking instead about winning “the global race for AI”.

At the same summit, Henna Virkkunen, the Commissioner for Tech Sovereignty, was quick to parrot von der Leyen’s message, announcing that the AI Act would be implemented in an ‘innovation-friendly’ way, and, after criticism from Meta and Google a week earlier, she promised that the Code of Practice would not create “any extra burden”.


Ursula von der Leyen at the AI Action Summit. In the background on the right, Google CEO Sundar Pichai.

EC – Audiovisual Service

Big Tech companies have quickly caught on to the new deregulatory wind in Brussels. They have ramped up their already massive lobbying budgets and have practiced their talking points about Europe’s ‘competitiveness’ and ‘over-regulation’.

The Code of Practice on General-Purpose AI seems to be only one of the first casualties of this deregulatory offensive. With key rules on AI, data protection, and privacy up for review this year, the main beneficiaries are poised to be the corporate interests with endless lobbying resources.

[…]

Big Tech cannot be seen as just another stakeholder. The Commission should safeguard the public interest from Big Tech influence. Instead of beating the deregulation drum, the Commission should now stand firm against the tech industry’s agenda and guarantee the protection of fundamental rights through an effective Code of Conduct.

Source: Coded for privileged access | Corporate Europe Observatory

Vagus nerve stimulation computer chip receives US approval to treat arthritis

The US Food and Drug Administration (FDA) has approved a vagus nerve stimulator for rheumatoid arthritis – the first such device to be cleared for an autoimmune condition, potentially paving the way for broader uses.

The pill-sized device is surgically implanted along the vagus nerve – a bundle of nerve fibres connecting the brain to most vital organs – in the side of the neck. For up to a decade, it then automatically delivers electrical pulses that stimulate the nerve and reduce inflammation.

Rheumatoid arthritis, like other autoimmune conditions, causes the body to attack its own tissues, triggering excessive inflammation that leads to pain, swelling and even organ damage. It is usually treated with powerful anti-inflammatory drugs that suppress the immune system, raising the risk of infections and cancer. Nearly three-quarters of people with rheumatoid arthritis are unhappy with current treatments and many stop taking them due to side effects.

In a clinical trial of 242 people with moderate to severe rheumatoid arthritis, about 35 per cent of those who received vagus nerve stimulation for 12 weeks saw at least a 20 per cent reduction in symptoms, compared with 24 per cent of those who didn’t receive the treatment. Less than 2 per cent experienced serious side effects, and none of them developed a serious infection.

“The idea of using a safe computer chip instead of expensive, minimally effective drugs with severe side effects should be an attractive option for many patients,” says Kevin Tracey at the Feinstein Institutes for Medical Research in New York. He developed the device about two decades ago as part of the US health technology company SetPoint Medical, though he is no longer with the business.

This approval marks a significant step towards one day using vagus nerve stimulation to treat a range of inflammation-related conditions, including heart failure, diabetes and even neurodegenerative conditions like Parkinson’s, says Stavros Zanos at the Feinstein Institutes for Medical Research in New York. SetPoint Medical’s device is already in clinical trials for multiple sclerosis and inflammatory bowel disease.

Source: Vagus nerve stimulation receives US approval to treat arthritis | New Scientist

Google lost its antitrust appeal with Epic

Google’s attempt to appeal the decision in Epic v. Google has failed. In a newly released opinion, the Ninth Circuit Court of Appeals has decided to uphold the original Epic v. Google lawsuit that found that Google’s Play Store and payment systems are monopolies.

The decision means that Google will have to abide by the remedies of the original lawsuit, which limits the company’s ability to pay phone makers to preinstall the Play Store, prevents it from requiring developers to use its payment systems and forces it to open up Android to third-party app stores. Not only will Google have to allow third-party app stores to be downloaded from the Play Store, but it also has to give those app stores “catalog access” to all the apps currently in the Play Store so they can have a competitive offering.

In October 2024, Google won an administrative stay that put a pause on some of those restrictions pending the results of this Ninth Circuit case. “The stay motion on appeal is denied as moot in light of our decision,” Judge M. Margaret McKeown, who oversaw the case, writes.

[…]

The origin of the Epic v. Google lawsuit was Epic’s decision to circumvent Google’s payment system via a software update to Fortnite. When Google caught wind, it removed Fortnite from the Play Store and Epic sued. Epic pulled a similar gambit with Apple and the App Store, though was far less successful in winning concessions in that case — its major judicial success there has been preventing Apple from collecting fees from developers on purchases made using third-party payment systems.

Source: Google lost its antitrust case with Epic again

The pandemic’s secret aftershock: Inside the gut-brain breakdown

A new international study confirmed a significant post-pandemic rise in disorders of gut-brain interaction, including irritable bowel syndrome (IBS) and functional dyspepsia, according to the paper published in Clinical Gastroenterology and Hepatology.

Building on prior research, investigators used Rome Foundation diagnostic tools to analyze nationally representative samples from both 2017 and 2023 — offering the first direct, population-level comparison of disorders of gut-brain interaction prevalence before and after the COVID-19 pandemic.

Key findings:

  • Overall disorders of gut-brain interaction rose from 38.3% to 42.6%.
  • IBS jumped 28%, from 4.7% to 6%.
  • Functional dyspepsia rose by nearly 44%, from 8.3% to 11.9%.
  • Individuals with long COVID were significantly more likely to have a disorder of gut-brain interaction and reported worse anxiety, depression, and quality of life.
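The relative jumps quoted above follow directly from the prevalence figures; a quick check of the arithmetic (figures from the article, computation mine):

```python
# Verify the relative increases implied by the reported prevalence figures.
def relative_increase(before: float, after: float) -> float:
    """Percentage change from `before` to `after`."""
    return (after - before) / before * 100

print(f"IBS: {relative_increase(4.7, 6.0):.0f}% rise")                    # ~28%
print(f"Functional dyspepsia: {relative_increase(8.3, 11.9):.0f}% rise")  # ~43%
```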

This is the first population-level study to directly compare rates of disorders affecting gut-brain interaction before and after the pandemic, using a consistent methodology. It adds weight to growing calls for updated care models and more research into the gut-brain axis in the post-COVID era.

Source: The pandemic’s secret aftershock: Inside the gut-brain breakdown | ScienceDaily

Nanodevice uses sound to sculpt light, paving the way for better displays and imaging

[…] The findings could have broad implications in fields ranging from computer and virtual reality displays to 3D holographic imagery, optical communications, and even new ultrafast, light-based neural networks.

[…]

The new device is deceptively simple. A thin gold mirror is coated with an ultrathin layer of a rubbery silicone‑based polymer only a few nanometers thick. The research team could fabricate the silicone layer to desired thicknesses—anywhere between 2 and 10 nanometers. For comparison, the wavelength of light is almost 500 nanometers tip to tail.

The researchers then deposit an array of 100‑nanometer gold nanoparticles across the silicone. The nanoparticles float like golden beach balls on an ocean of polymer atop a mirrored sea floor. Light is gathered by the nanoparticles and mirror and focused onto the silicone between—shrinking the light to the nanoscale.

To the side, they attach a special kind of ultrasound speaker—an interdigitated transducer, or IDT—that sends high‑frequency ripples across the film nearly a billion times a second. These sound waves (surface acoustic waves, or SAWs) surf along the surface of the gold mirror beneath the nanoparticles. The elastic polymer acts like a spring, stretching and compressing so that the nanoparticles bob up and down as the sound waves course by.

The researchers then shine light into the system. The light gets squeezed into the oscillating gaps between the gold nanoparticles and the gold film. The gaps change in size by the mere width of a few atoms, but it is enough to produce an outsized effect on the light.

The size of the gaps determines the color of the light resonating from each nanoparticle. The researchers can control the gaps by modulating the acoustic wave and therefore control the color and intensity of each particle.
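As a purely illustrative sketch of that gap-to-color relationship (the functional form below is a hypothetical placeholder, not the model from the Science paper; real gap-plasmon resonances require full electromagnetic simulation), a toy calculation shows the qualitative trend: a shrinking gap red-shifts the resonance.

```python
# Toy model: as the polymer gap between nanoparticle and mirror shrinks,
# the coupled plasmon resonance red-shifts, changing the scattered color.
# The 1/gap form is a placeholder chosen only to show the qualitative trend.
def toy_resonance_nm(gap_nm: float, base_nm: float = 520.0, coupling_nm2: float = 300.0) -> float:
    return base_nm + coupling_nm2 / gap_nm

for gap in (10.0, 6.0, 4.0, 2.0):
    print(f"gap {gap:4.1f} nm -> resonance near {toy_resonance_nm(gap):.0f} nm")
```

The point of the sketch is only that nanometer-scale gap changes move the resonance by tens of nanometers in wavelength, which is why atom-scale oscillations produce a visible color change.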

“In this narrow gap, the light is squeezed so tightly that even the smallest movement significantly affects it,” Selvin said. “We are controlling the light with lengths on the nanometer scale, where typically millimeters have been required to modulate light acoustically.”

When light is shined in from the side and the sound wave is turned on, the result is a series of flickering, multicolored points of light against a black background, like stars twinkling in the night sky. Any light that does not strike a nanoparticle is bounced out of the field of view by the mirror, and only the light that is scattered by the particles is directed outward toward the human eye. Thus, the gold mirror appears black and each gold nanoparticle shines like a star.

The degree of optical modulation caught the researchers off guard. “I was rolling on the floor laughing,” Brongersma said of his reaction when Selvin showed him the results of his first experiments.

“I thought it would be a very subtle effect, but I was amazed at how many nanometer changes in distance can change the light scattering properties so dramatically.”

The exceptional tunability, small form factor, and efficiency of the new device could transform any number of commercial fields. One can imagine ultrathin video displays, ultra‑fast optical communications based on acousto‑optics’ high‑frequency capabilities, or perhaps new holographic virtual reality headsets that are much smaller than the bulky displays of today, among other applications.

“When we can control the light so effectively and dynamically,” Brongersma said, “we can do everything with light that we could want—holography, beam steering, 3D displays—anything.”

More information: Skyler Peitso Selvin et al, Acoustic wave modulation of gap plasmon cavities, Science (2025). DOI: 10.1126/science.adv1728. www.science.org/doi/10.1126/science.adv1728

Source: Nanodevice uses sound to sculpt light, paving the way for better displays and imaging

Visa and Mastercard Fielding A Ton Of Complaints Over “NSFW” Games Disappearing On Platforms, acting as censors

A week or so ago, Karl Bode wrote about Vice Media’s idiotic decision to disappear several articles that had been written by its Waypoint property concerning Collective Shout. Collective Shout is an Australian group that pretends to be a feminist organization, when, in reality, it operates much more like any number of largely evangelical groups bent on censoring any content that doesn’t align with their own viewpoints (which they insist become your viewpoints as well). The point of Karl’s post was to correctly point out that Collective Shout’s decision to go after the payment processors for the major video game marketplaces over their offering NSFW games shouldn’t be hidden from the public in the interest of clickbait non-journalism.

But that whole thing about Collective Shout putting a pressure campaign on payment processors is in and of itself a big deal, as is the response to it. Both Steam and itch.io recently either removed or de-indexed a ton of games labeled NSFW, chiefly along guidelines clearly provided by the credit card companies themselves. Now, Collective Shout will tell you that it is mostly interested in going after games that depict vile acts such as rape, child abuse, and incest.

No Mercy. That’s the name of the incest-and-rape-focused game that was geo-blocked in Australia this April, following a campaign by the local pressure group Collective Shout. The group, which stands against “the increasing pornification of culture”, then set its sights on a broader target – hundreds of other games they identified as featuring rape, incest, or child sexual abuse on Steam and itch.io. “We approached payment processors because Steam did not respond to us,” said the group of its latest campaign.

The move was effective. Steam began removing sex-related games it deemed to violate the standards of its payment processors, presenting the choice as a tradeoff in a statement to Rock Paper Shotgun: “We are retiring those games from being sold on the Steam Store, because loss of payment methods would prevent customers from being able to purchase other titles and game content on Steam.”

Itch.io followed that up shortly afterwards with its own de-indexing plan, but went further, applying it to all NSFW games offered on the platform. Unlike Steam, itch.io was forthcoming about its reasoning. And it was remarkably simple.

“Our ability to process payments is critical for every creator on our platform,” itch.io founder Leaf Corcoran said. “To ensure that we can continue to operate and provide a marketplace for all developers, we must prioritize our relationship with our payment partners and take immediate steps towards compliance.”

Digital marketplaces being unable to collect payment through trusted partners would be, to put it tersely, the end of their business. Those same payment processors can get predictably itchy about partnering with platforms that host content that someone out there, or many someones as part of a coordinated campaign, may not like, for fear that it will sully their reputation. And because these are private companies we’re talking about, both that fear and their own sense of morality are in play here. The end result is a digital world filled with digital marketplaces that all exist under an umbrella of god-like payment processors that can pretty much dictate to those other private entities what can be on offer and what cannot.

And, as an executive from Appcharge chimed in, the processors will hang this all on the amount of fraud and chargebacks that come along with adult content, but that doesn’t change the question of whether payment processors should be neutral on legal but morally questionable content. Because, as you would expect, the aims of folks like Collective Shout almost certainly don’t end with things like rape and incest.

It’s possible that Collective Shout’s campaign highlighted a level of operational and reputational risk that payment processors weren’t aware of, and of a severity they didn’t expect. “I’m guessing it’s also the moral element,” Tov-Ly says. “It just makes sense, right? Why would you condone incest or rape promoting games?”

Tov-Ly is of the opinion that payment processors offer a utility, and should have no more role in the moral arbitration of art than your electricity company – meaning, none at all. “Whenever you open that Pandora’s box, you’re not impartial anymore,” he says. “Today it’s rape games and incest, but tomorrow it could be another lobbying group applying pressure on LGBT games in certain countries.”

We’ve already seen this sort of thing when it comes to the book and curriculum bans currently plaguing far too much of the country. When “porn” can mean Magic Tree House, the word loses all meaning.

What is actually happening is that payment processors are feeling what they believe is “public pressure,” but which is actually just a targeted and coordinated campaign from a tiny minority of people who watched V For Vendetta and thought it was an instruction manual. Well, the public has caught wind of this, as have game publishers that might be caught up in this censorship or whatever comes next, and coordinated call-in campaigns are now being directed at the payment processors to complain about this new censorship.

Gilbert Martinez had just poured himself a glass of water and was pacing his suburban home in San Antonio, Texas while trying to navigate Mastercard’s byzantine customer service hotline. He was calling to complain about recent reports that the company is pressuring online gaming storefronts like Steam and Itch.io to ban certain adult games. He estimates his first call lasted about 18 minutes and ended with him lodging a formal complaint in the wrong department.

Martinez is part of a growing backlash to Steam and Itch.io purging thousands of games from their databases at the behest of payment processing companies. Australia-based anti-porn group Collective Shout claimed credit for the new wave of censorship after inciting a write-in campaign against Visa and Mastercard, which it accused of profiting off “rape, incest, and child sexual abuse game sales.” Some fans of gaming are now mounting reverse campaigns in the hopes of nudging Visa and Mastercard in the opposite direction.

If noise is what is going to make these companies go back to something resembling sanity, this will hopefully do the trick. We’re already seeing examples of games that are being unjustly censored, described as porn when they are very much not. Not to mention instances where nuance is lost and the “porn” content is actually the opposite.

Vile: Exhumed is a textbook example of what critics of the sex game purge always feared: that guidelines aimed at clamping down on pornographic games believed to be encouraging or glorifying sexual violence would inevitably ensnare serious works of art grappling with difficult and uncomfortable subject matter in important ways. Who gets to decide which is which? For a long time, it appeared to be Steam and Itch.io. Last week’s purges revealed it’s actually Visa and Mastercard, and whoever can frighten them the most with bad publicity.

Some industry trade groups have also weighed in. The International Game Developers Association (IGDA) released a statement stating that “censorship like this is materially harmful to game developers” and urging a dialogue between “platforms, payment processors, and industry leaders with developers and advocacy groups.” “We welcome collaboration and transparency,” it wrote. “This issue is not just about adult content. It is about developer rights, artistic freedom, and the sustainability of diverse creative work in games.”

This is the result of a meddling minority attempting to foist their desires on everyone else, plain and simple. Choking the money supply is a smart choice, sure, but one that should be recognized in this case for what it is: censorship based on proclivities that are not widely shared. And if there really is material in these games that is illegal, it should obviously be done away with.

But we should not be playing this game of pretending content that is not widely seen as immoral should somehow be choked of its ability to participate in commerce.

Source: Credit Card Companies Fielding A Ton Of Complaints Over NSFW Games Disappearing On Platforms | Techdirt

Google Is Rolling Out Its AI Age Verification to More Services, and I’m Skeptical

Yesterday, I wrote about how YouTube is now using AI to guess your age. The idea is this: Rather than rely on the age attached to your account, YouTube analyzes your activity on its platform, and makes a determination based on how your activity corresponds to that of other users. If the AI thinks you’re an adult, you can continue on; if it thinks your behavior aligns with that of a teenage user, it’ll put restrictions and protections on your account.

Now, Google is expanding its AI age verification tools beyond just its video streaming platform, to other Google products as well. As with YouTube, Google is trialing this initial rollout with a small pool of users, and based on its results, will expand the test to more users down the line. But over the next few weeks, your Google Account may be subject to this new AI, whose only goal is to estimate how old you are.

That AI is trained to look for patterns of behavior across Google products associated with users under the age of 18. That includes the categories of information you might be searching for, or the types of videos you watch on YouTube. Google’s a little cagey on the details, but suffice it to say that the AI is likely snooping through most, if not all, of what you use Google and its products for.

Restrictions and protections on teen Google accounts

We do know some of the restrictions and protections Google plans to implement when it detects a user is under 18 years old. As I reported yesterday, that involves turning on YouTube’s Digital Wellbeing tools, such as reminders to stop watching videos, and, if it’s late, encouragements to go to bed. YouTube will also limit repetitive views of certain types of content.

In addition to these changes to YouTube, you’ll also find you can no longer access Timeline in Maps. Timeline saves your Google Maps history, so you can effectively travel back through time and see where you’ve been. It’s a cool feature, but Google restricts access to users 18 years of age or older. So, if the AI detects you’re underage, no Timeline for you.

[…]

Source: Google Is Rolling Out Its AI Age Verification to More Services, and I’m Skeptical

Of course, there is no mention of how to seek recourse if the AI gets it wrong.

Apple throws usual hissy fit at the law, now sells iPad Repair Parts for Astronomical Prices

In late May, Apple announced what seemed on its face to be a big, positive development for iPad owners: It was going to begin selling repair parts for iPads to the general public, which is a requirement of a series of new right-to-repair laws. “With today’s announcement, we’re excited to expand our repair services to more customers, enabling them to further extend the life of their products—all without compromising safety, security, or privacy,” Brian Naumann, Apple’s vice president of AppleCare, said in a press release announcing the move.

The announcement was generally covered positively by the press: “Save Money, Make Your iPad Last Longer,” a Forbes headline read, for example. But independent repair professionals who have used the program told 404 Media that the prices Apple is charging for some repair parts are absurdly high, and that this functionally means that the iPad is as unrepairable as it has always been.

“As is typical for Apple, they’ve been pushing and testing the limits as time has gone on, and now they pushed too far. There are plenty of other examples of absurdly priced parts from Self Service, but these iPad parts are by far the worst,” Brian Clark, the owner of the iGuys Tech Shop, told 404 Media.

“For years, Apple effectively considered the iPad non-repairable. They did not offer any repairs on iPads, and Apple authorized service providers were not allowed to do iPad repairs of any kind, so this was a huge shift in their view of iPads. I was excited until the day they actually put the parts up and seeing the ridiculous prices of things, it was really, really disappointing,” Clark added. “It kind of sends the message that they don’t really want iPads to be repaired.”

Clark points out that a new charge port for an iPad Pro 11, a part that goes bad all the time, costs $250 from Apple. Aftermarket charge ports, meanwhile, can be found for less than $20. “It’s a very basic part, and I just can’t see any reasonable explanation that part should be $250 from Apple,” he said. “That’s a component that probably costs them a few dollars to make.”

Clark said a digitizer for an iPad A16 is $200. That part can be bought from third-party suppliers for $50, and the iPad A16 sells brand new from Apple for $349, Clark said. The replacement screen assembly for an iPad Pro 13 costs $749 from Apple.
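Taken at face value, the article’s numbers imply eye-watering markups; a quick back-of-the-envelope comparison (prices as quoted above):

```python
# Markup multiples implied by the quoted prices (Apple vs. aftermarket).
parts = {
    "iPad Pro 11 charge port": {"apple": 250, "aftermarket": 20},
    "iPad A16 digitizer":      {"apple": 200, "aftermarket": 50},
}

for name, p in parts.items():
    print(f"{name}: {p['apple'] / p['aftermarket']:.1f}x the aftermarket price")

# The $200 digitizer alone is more than half the price of a new $349 iPad A16.
print(f"digitizer as share of a new iPad A16: {200 / 349:.0%}")
```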

[…]

Source: Apple Is Selling iPad Repair Parts for Astronomical Prices

They have been doing this with the app store too, as people force them to open it up – these headlines are from the last year alone, showing them crying, stamping their feet, and doing basically everything in their power to childishly avoid anything that benefits the customers.

Apple Hit with Class-Action Lawsuit for App Store Injunction Violation after Judge rules apple execs lied and willfully ignored injunction – join here

Judge: Apple Lied In Fortnite Case, chose to not comply with court order, must immediately allow external payments without a cut

Apple tries again to make EU officials happy with new fees for in-app purchases

Apple stamps feet but now to let EU developers distribute apps from the web

Apple reverses hissy fit decision to remove Home Screen web apps in EU

Shameless Insult, Malicious Compliance, Junk Fees, Extortion Regime: Industry Reacts To Apple’s Proposed Changes Over Digital Markets Act

Mozilla says Apple’s new browser rules are ‘as painful as possible’ for Firefox

I can have app store? Apple: yes but NO! Give €1,000,000 + lock in to Apple ecosystem. This is how to “comply” with EU anti competition law

After the UK, online age verification is landing in the EU

Denmark, Greece, Spain, France, and Italy are the first to test the technical solution unveiled by the European Commission on July 14, 2025.

The announcement came less than two weeks before the UK enforced mandatory age verification checks on July 25. These have so far sparked concerns about the privacy and security of British users, fueling a spike in VPN usage.

[…]

The introduction of this technical solution is a key step in implementing children’s online safety rules under the Digital Services Act (DSA).

Lawmakers insist that this solution sets “a new benchmark for privacy protection” in age verification.

That’s because online services will only receive proof that the user is 18+, without any personal details attached.

Further work on the integration of zero-knowledge proofs is also ongoing, with the full implementation of mandatory checks in the EU expected to be enforced in 2026.

[…]

Starting from Friday, July 25, millions of Britons will need to be ready to prove their age before accessing certain websites or content.

Under the Online Safety Act, sites displaying adult-only content must prevent minors from accessing their services via robust age checks.

Social media, dating apps, and gaming platforms are also expected to verify their users’ age before showing them so-called harmful content.

[…]

The vagueness of what constitutes harmful content, as well as the privacy and security risks linked to some of these age verification methods, has attracted criticism from experts, politicians, and privacy-conscious citizens who fear a negative impact on people’s digital rights.

While the EU approach seems better on paper, it remains to be seen how the age verification scheme will ultimately be enforced.

[…]

Source: After the UK, online age verification is landing in the EU | TechRadar

And so comes the EU spying on our browsing habits, telling us what is and isn’t good for us to see. I can make up my own mind, thank you. How annoying that I’ll be rate-limited on whatever VPN I end up using.

Technique to print microscopic colour pixels for tiny microsensors developed

Half a billion years ago, nature evolved a remarkable trick: generating vibrant, shimmering colors via intricate, microscopic structures in feathers, wings and shells that reflect light in precise ways. Now, researchers from Trinity have taken a major step forward in harnessing that trick for advanced materials science.

A team led by Professor Colm Delaney from Trinity’s School of Chemistry and AMBER, the Research Ireland Center for Advanced Materials and BioEngineering Research, has developed a pioneering method, inspired by nature, to create and program structural colors using a cutting-edge microfabrication technique.

The work could have major implications for environmental sensing, biomedical diagnostics, and photonic materials. The research is published in the journal Advanced Materials.

At the heart of the breakthrough is the precise control of nanosphere self-assembly—a notoriously difficult challenge. Teodora Faraone, a Ph.D. candidate at Trinity, used a specialized high-resolution 3D-printing technique to control the order and arrangement of nanospheres, allowing them to interact with light in ways that produce all the colors of the rainbow in a controlled manner.

“This was the central challenge of the ERC project,” said Prof. Delaney, who is en route to Purdue University to present the landmark findings at the MARSS conference on microscale and nanoscale manipulation. “We now have a way to fine-tune nanostructures to reflect brilliant, programmable colors.”

One of the most exciting aspects of the newly developed material is its extreme sensitivity: The structural colors shift in response to minute changes in their environment, which opens up new opportunities for chemical and biological sensing applications.

Microscopic pixels can be fabricated using direct laser writing, demonstrating the ability to achieve wide gamut structural colors, and these can be combined into microscopic works of art, such as in the tiny hummingbird art shown here. Credit: Prof. Colm Delaney

Dr. Jing Qian, a postdoctoral researcher and computational specialist on the team, helped confirm the experimental results through detailed simulations, providing deeper insights into how the nanospheres organize themselves.

The team is already combining the color-programming technique with responsive materials to develop tiny microsensors that change color in real time. These sensors are being developed as part of the IV-Lab Project, a European Innovation Council Pathfinder Challenge led by the Italian Institute of Technology, with a key goal being the development of implantable devices capable of tracking biochemical changes inside the human body.

Source: Programmable nanospheres unlock nature’s 500-million-year-old color secrets

Storing PNG image data in a bird’s song

Birds have a strong ability to learn and mimic sounds. So, Benn Jordan converted a PNG image into a spectrogram and then played the resulting sound to a starling, a bird known for its mimicking. The starling was able to copy the sound, thus demonstrating an ability to store data in its song.
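The underlying trick (turn image pixels into a spectrogram, then synthesize audio whose spectrogram matches) can be sketched in a few lines. This is a minimal illustration of the general technique, not Benn Jordan’s actual pipeline: each image column becomes a slice of time, and each row drives a sine oscillator whose amplitude follows pixel brightness.

```python
import numpy as np

def image_to_audio(img, sr=22050, t_per_col=0.05, f_lo=1000.0, f_hi=8000.0):
    """Render a 2D array (rows = frequency, cols = time) as audio whose
    spectrogram resembles the image. Row 0 (top of image) maps to the
    highest frequency so the picture appears right-side up."""
    n_rows, n_cols = img.shape
    freqs = np.linspace(f_hi, f_lo, n_rows)   # top row -> high pitch
    n = int(sr * t_per_col)                   # samples per image column
    t = np.arange(n) / sr
    out = []
    for col in range(n_cols):
        frame = np.zeros(n)
        for row in range(n_rows):
            amp = img[row, col]
            if amp > 0:
                frame += amp * np.sin(2 * np.pi * freqs[row] * t)
        out.append(frame)
    audio = np.concatenate(out)
    peak = np.max(np.abs(audio))
    return audio / peak if peak > 0 else audio

# Tiny synthetic "image": a diagonal line, which renders as a pitch sweep.
img = np.eye(8)
audio = image_to_audio(img)
```

Writing `audio` to a WAV file and opening it in any spectrogram viewer should reveal the diagonal as a frequency sweep; whether a starling reproduces it faithfully enough to recover the pixels is the impressive part.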

Let’s just store all of our data in songs.

Source: Storing PNG image data in a bird’s song – FlowingData

Palo Alto Networks inks $25b deal to buy human and machine identity manager CyberArk

Palo Alto Networks will buy Israeli security biz CyberArk in a $25 billion cash-and-stock deal confirmed today.

It’s Palo Alto Networks’ largest purchase to date, and one of the most expensive acquisitions this year, coming in behind Google’s $32 billion purchase of cloud security upstart Wiz in March.

CyberArk provides identity security and privileged access management tools, which have become increasingly important to enterprises that need to verify and secure not only human identities, but also machines and AIs.

“Today, the rise of AI and the explosion of machine identities have made it clear that the future of security must be built on the vision that every identity requires the right level of privilege controls,” Palo Alto Networks CEO Nikesh Arora said in a statement announcing the purchase.

Machine identities outnumber those of humans by 40 to one, according to CyberArk, and this number is expected to skyrocket as more companies use AI agents.

[…]

Under the terms of the deal, CyberArk investors will receive $45 in cash and 2.2005 shares of Palo Alto Networks common stock for each CyberArk share they own. The transaction is expected to close in the second half of Palo Alto Networks’ fiscal 2026.
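For what those terms work out to per share: the cash portion is fixed and the stock portion floats with Palo Alto Networks’ share price. A small sketch (the PANW prices below are hypothetical placeholders, not quotes):

```python
# Implied value per CyberArk share under the announced terms:
# $45 in cash plus 2.2005 Palo Alto Networks shares.
CASH_PER_SHARE = 45.00
EXCHANGE_RATIO = 2.2005

def implied_value(panw_price: float) -> float:
    return CASH_PER_SHARE + EXCHANGE_RATIO * panw_price

# Hypothetical PANW prices -- the actual deal value moves with the stock.
for price in (150.0, 175.0, 200.0):
    print(f"PANW at ${price:.0f} -> ${implied_value(price):.2f} per CyberArk share")
```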

Source: Palo Alto Networks inks $25b deal to buy CyberArk • The Register

Google Home Is So Bad That a Lawsuit Could Be on Its Way

There’s been some trouble at home lately. Not your home, hopefully, but if you live in Google HQ, then maybe. Last week, people using the Google Home app flooded Reddit with complaints over smart home products that mysteriously stopped working—lights, cameras, smart plugs, you name it. Those complaints were so numerous, in fact, that Google even bothered to address them and promise to do better. Things in the Googleverse were (or are) bad, to say the least. But just because they’re bad right now doesn’t mean they can’t get worse—and worse they may still get. For Google, that is.

As it turns out, Google’s overtures about fixing its smart home app and doing better may not be enough for people, and all of that pushback may actually result in a good, old-fashioned class-action lawsuit.

“Kaplan Gore has begun investigating a possible class action against Google LLC for failing to remedy increasing problems with its Google Home ‘smart home’ service,” the law firm Kaplan Gore said in a statement. “Unfortunately, many users have reported functionality issues with Google Home and associated Google and/or Nest devices, resulting in commands not being recognized or properly executed. Users are reporting that they are experiencing these issues despite their devices having previously functioned normally and despite having a stable internet connection.”

Kaplan Gore also has a form for any users experiencing those issues and is asking them to fill out some information and join a class action.

[…]

According to loads of complaints on Reddit, Google Home has been so broken that some users have reported being unable to even turn their smart lights on and off properly. And it’s not just lights; all kinds of smart devices have been swept up, including other speakers and even (disconcertingly) cameras and smart doorbells. If you’re experiencing similar issues, by the way, you can try re-syncing your apps:

  • Open the Google Home app and tap Settings in the bottom-right corner.
  • Tap “Works with Google” to see the list of your synced apps.
  • If an app is no longer synced, re-sync it by finding it under “Add new.”
  • If it’s still synced but not working, tap its icon, tap “Unlink account,” then sync it once more and hope that it works.

[…]

Source: Google Home Is So Bad That a Lawsuit Could Be on Its Way

Gamers Flood Credit Card Hotlines Demanding End To Censorship in games – this won’t just blow over

[…] Martinez is part of a growing backlash to Steam and Itch.io purging thousands of games from their databases at the behest of payment processing companies. Australia-based anti-porn group Collective Shout claimed credit for the new wave of censorship after inciting a write-in campaign against Visa and Mastercard, which it accused of profiting off “rape, incest, and child sexual abuse game sales.” Some fans of gaming are now mounting reverse campaigns in the hopes of nudging Visa and Mastercard in the opposite direction.

A screenshot shows an email sent to Collective Shout.
Screenshot: Bluesky / Kotaku

“Seeing the rise of censorship and claiming it’s to ‘protect kids,’ it sounds almost like the Satanic Panic, targeting people that have done nothing to anyone except having fun,” Martinez told Kotaku. “We’re already seeing the negative effect this has on people’s personal and financial lives because of such unnecessary restrictions. If parents are so concerned over protecting kids, then they should parent their own kids instead of forcing other people to meet their ridiculous demands.”

Indie horror game Vile: Exhumed is one of the titles that’s been banned from Steam by Valve. Released last year by Cara Cadaver of Final Girl Games, it has players rummage through a fictional ‘90s computer terminal to uncover a twisted man’s toxic obsession with an adult horror film actress, using this format to engage with themes of online misogyny and toxic parasocial relationships. “It was banned for ‘sexual content with depictions of real people,’ which, if you have played it, you know is all implied, making this all feel even worse,” Cadaver wrote on Bluesky on July 28.

Valve did not immediately respond to a request for comment.

Vile: Exhumed is a textbook example of what critics of the sex game purge always feared: that guidelines aimed at clamping down on pornographic games believed to be encouraging or glorifying sexual violence would inevitably ensnare serious works of art grappling with difficult and uncomfortable subject matter in important ways. Who gets to decide which is which? For a long time, it appeared to be Steam and Itch.io. Last week’s purges revealed it’s actually Visa and Mastercard, and whoever can frighten them the most with bad publicity.


“Things are definitely changing as reports of responses to calls have gone from ‘Sorry what are you talking about?’ to then ‘Are you ALSO calling about itch/steam’ to now some [callers] receiving outright harassment,” a 2D artist who goes by Void and who has helped organize a Discord for a reverse call-in campaign told Kotaku. It’s hard to have any clear sense of the scope of these counter-initiatives or what ultimate impact they might have on the companies in question, but anecdotally the effort seems to be gaining traction. For instance, callers now need to spend less time explaining what Steam, Itch.io, or “NSFW” games are to the people on the other end of the line.

“For calls I was originally focusing on Mastercard, but I ended up getting a lot of time out of Visa,” Bluesky user RJAIN told Kotaku. “Two days ago I had a call with Visa that lasted over an hour, and a follow-up call later on that lasted over 2.5 hours. Those calls, I spoke with a supervisor and they seemed very calm and understanding. Yesterday, the calls were different. The reps seemed angry and exhausted. They refused to let me speak to a supervisor and kept insisting that it is now protocol for them to disconnect the call on anyone complaining about this issue.”

[…]

Some industry trade groups have also weighed in. The International Game Developers Association (IGDA) released a statement stating that “censorship like this is materially harmful to game developers” and urging a dialogue between “platforms, payment processors, and industry leaders with developers and advocacy groups.” “We welcome collaboration and transparency,” it wrote. “This issue is not just about adult content. It is about developer rights, artistic freedom, and the sustainability of diverse creative work in games.”

For the time being, that dialogue appears to mostly be taking place at Visa’s and Mastercard’s call centers, at least when they allow it.

Source: Gamers Flood Credit Card Hotlines Demanding End To Censorship

Echelon Exercise Bikes Lose Features, Must Phone Home to Work at All After Firmware Update

[…] It seems like a simple concept that everyone should be able to agree to: if I buy a product from you that does x, y, and z, you don’t get to remove x, y, or z remotely after I’ve made that purchase. How we’ve gotten to a place where companies can simply remove, or paywall, product features without recourse for the customers they essentially bait-and-switched is beyond me.

But it keeps happening. The most recent example of this is with Echelon exercise bikes. Those bikes previously shipped to paying customers with all kinds of features for ride metrics and connections to third-party apps and services without anything further needed from the user. That all changed recently when a firmware update suddenly forced an internet connection and a subscription to a paid app to make any of that work.

As explained in a Tuesday blog post by Roberto Viola, who develops the “QZ (qdomyos-zwift)” app that connects Echelon machines to third-party fitness platforms, like Peloton, Strava, and Apple HealthKit, the firmware update forces Echelon machines to connect to Echelon’s servers in order to work properly. A user online reported that as a result of updating his machine, it is no longer syncing with apps like QZ, and he is unable to view his machine’s exercise metrics in the Echelon app without an Internet connection.

Affected Echelon machines reportedly only have full functionality, including the ability to share real-time metrics, if a user has the Echelon app active and if the machine is able to reach Echelon’s servers.

Want to know how fast you’re going on the bike you’re sitting upon? That requires an internet connection. Want to get a sense of how you performed on your ride on the bike? That requires an internet connection. And if Echelon were to go out of business? Then your bike just no longer works beyond the basic function of pedaling it.
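To make the failure mode concrete, here is a minimal sketch of the kind of server-reachability gate the update reportedly introduces. This is illustrative only, not Echelon's actual firmware; the hostname and function names are made up:

```python
# Illustrative sketch only -- NOT Echelon's actual firmware. It models the
# pattern described above: the sensor still measures speed locally, but the
# software refuses to surface metrics unless the vendor's server is
# reachable. The hostname is hypothetical.
import socket
from typing import Optional

VENDOR_HOST = "api.example-fitness.com"  # made-up endpoint


def server_reachable(host: str = VENDOR_HOST, port: int = 443) -> bool:
    """Phone-home check: can we open a TCP connection to the vendor?"""
    try:
        with socket.create_connection((host, port), timeout=3):
            return True
    except OSError:
        return False


def current_speed_kph(raw_sensor_kph: float, reachable: bool) -> Optional[float]:
    # The gate: locally measured data is withheld when the check fails,
    # which is why the bike goes "dumb" offline or if the vendor folds.
    return raw_sensor_kph if reachable else None
```

The design choice to notice: if the vendor ever shuts its servers down, `server_reachable()` returns False forever and every gated feature dies with it, even though the underlying sensor data never stopped being available locally.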

And the ability to use third-party apps is reportedly just, well, gone.

For some owners of Echelon equipment, QZ, which is currently rated as the No. 9 sports app on Apple’s App Store, has been central to their workouts. QZ connects the equipment to platforms like Zwift, which shows people virtual, scenic worlds while they’re exercising. It has also enabled new features for some machines, like automatic resistance adjustments. Because of this, Viola argued in his blog that QZ has “helped companies grow.”

“A large reason I got the [E]chelon was because of your app and I have put thousands of miles on the bike since 2021,” a Reddit user told the developer on the social media platform on Wednesday.

Instead of happily accepting that someone out there is making its product more attractive and valuable, Echelon is going for some combination of overt control and a grab for customer data. Getting more customers to exercise with its app means Echelon can gather more data for things like feature development and, of course, marketing.

What you won’t hear anywhere, at least that I can find, is any discussion of returns or refunds for customers who bought these bikes back when they did things they no longer do. That’s about as clear a bait-and-switch scenario as you’re likely to find.

Unfortunately, with the FTC’s Bureau of Consumer Protection being run by just another Federalist Society imp, it’s unlikely that anything material will be done to stop this sort of thing.

Source: Exercise Bike Company Yanks Features Away From Purchased Bikes Via Firmware Update | Techdirt

Visa and Mastercard are getting overwhelmed by censorship gamer fury

In the wake of storefronts like Steam and itch.io curbing the sale of adult games, irate fans have started an organized campaign against the payment processors that they believe are responsible for the crackdown. While the movement is still in its early stages, people are mobilizing with an eye toward overwhelming communication lines at companies like Visa and Mastercard in a way that will make the concern impossible to ignore.

On social media sites like Reddit and Bluesky, people are urging one another to get into contact with Visa and Mastercard through emails and phone calls. Visa and Mastercard have become the targets of interest because the affected storefronts both say that their decisions around adult games were motivated by the danger of losing the ability to use major payment processors while selling games. These payment processors have their own rules regarding usage, but they are vaguely defined. But losing infrastructure like this could impact audiences well beyond those who care about sex games, spokespeople for Valve and itch.io said.

In a now-deleted post on the Steam subreddit with over 17,000 upvotes, commenters say that customer service representatives for both payment processors seem to already be aware of the problem. Sometimes, the representatives will say that they’ve gotten multiple calls on the subject of adult game censorship, but that they can’t really do anything about it.

The folks applying pressure know that someone at a call center has limited power in a scenario like this one; typically, agents are equipped to handle standard customer issues like payment fraud or credit card loss. But the point isn’t to enact change through a specific phone call: It’s to cause enough disruption that the ruckus theoretically starts costing payment processors money.

“Emails can be ignored, but a very very long queue making it near impossible for other clients to get in will help a lot as well,” reads the top comment on the Reddit thread. In that same thread, people say that they’re hanging onto the call even if the operator says that they’ll experience multi-hour wait times presumably caused by similar calls gunking up the lines. Beyond the stubbornness factor, the tactic is motivated by the knowledge that most customer service systems will put people who opt for call-backs in a lower priority queue, as anyone who opts in likely doesn’t have an emergency going on.

Artwork from the erotic game Forbidden Fantasy, featuring a purple-haired elf character shushing the camera
Image: OppaiMan

“Do both,” one commenter suggests. “Get the call back, to gum up the call back queue. Then call in again and wait to gum up the live queue.”

People are also using email to voice their concerns directly to the executives at both Visa and Mastercard, payment processors that activist group Collective Shout called out by name in its open letter requesting that adult games get pulled. Emails are also getting sent to customer service. In light of the coordinated effort, many people are getting a pre-written response that reads:

Thank you for reaching out and sharing your perspective. As a global company, we follow the laws and regulations everywhere we do business. While we explicitly prohibit illegal activity on our network, we are equally committed to protecting legal commerce. If a transaction is legal, our policy is to process the transaction. We do not make moral judgments on legal purchases made by consumers. Visa does not moderate content sold by merchants, nor do we have visibility into the specific goods or services sold when we process a transaction. When a legally operating merchant faces an elevated risk of illegal activity, we require enhanced safeguards for the banks supporting those merchants. For more information on Visa’s policies, please visit our network integrity page on Visa.com. Thank you for writing.

On platforms like Bluesky, resources are being shared to help people know who to contact and how, including possible scripts for talking to representatives or sending emails. A website has been set up with the explicit purpose of arming concerned onlookers with the tools and knowledge necessary to do their part in the campaign.

Through it all, gamers are telling one another to remain cordial during any interactions with payment processors, especially when dealing with low-level workers who are just trying to do their jobs. When it comes to executives, a considerate tone is meant to help the people in power take the issue seriously.

The strategy is impressive in its depth and breadth of execution. While some charge in with an activist bent, others say that they’re pretending to be confused customers who want to know why they can’t use Visa or Mastercard to buy their favorite games.

Meanwhile, Collective Shout — the organization that originally complained to Steam, Visa, and Mastercard about adult games featuring non-consensual violence against women — has also recently put out a statement of its own alongside a timeline of events.

“We raised our objection to rape and incest games on Steam for months, and they ignored us for months,” reads a blog post from Collective Shout. “We approached payment processors because Steam did not respond to us.”

Collective Shout claims that it only petitioned itch.io to pull games with sexualized violence or torture against women, but allegedly, the storefront made its own decision to censor NSFW content sitewide. Currently, itch.io has deindexed games with adult themes, meaning those games no longer appear in its search results. The indie storefront is still in the middle of figuring out and outlining its rules for adult content on the website, but the net has been cast so wide that some games with LGBT themes are being impacted as well.

In another popular Reddit thread, users say that customer service representatives are shifting from confusion to reiterating that their concerns are being “heard.”

“I will be calling them again in a few to days to see if there is any progress on changing the situation,” says the original poster.

Perhaps a different comment in that thread summarizes the ordeal best: “There’s really only 2 things that can unite Gamers: hate campaigns and gooning.”

Source: Visa and Mastercard are getting overwhelmed by censorship gamer fury | Polygon

Nier: Automata Dev Warns That Letting Credit Card Companies Censor the Internet Is an Attack on Democracy

As a fight with credit card companies over adult games leads to renewed concerns about censorship on Steam and even on indie platforms like itch.io, a recent warning by Nier: Automata director Yoko Taro calling censorship a “security hole that endangers democracy itself” has become relevant again.

The comments came last November when the Manga Library Z online repository for digital downloads of out-of-print manga was forced to shut down. The group blamed international credit card companies, presumably Visa and Mastercard, who wanted the site to censor certain words from its copies of adult manga.

“Publishing and similar fields have always faced regulations that go beyond the law, but the fact that a payment processor, which is involved in the entire infrastructure of content distribution, can do such things at its own discretion seems to me to be dangerous on a whole new level,” Taro wrote in a thread at the time, according to a translation by Automaton.

He continued:

It implies that by controlling payment processing companies, you can even censor another country’s free speech. I feel like it’s not just a matter of censoring adult content or jeopardizing freedom of expression, but rather a security hole that endangers democracy itself.

Manga Library Z was eventually able to come back online thanks to a crowdfunding campaign earlier this year, but now video game developers behind adult games with controversial themes are facing similar issues on Steam and itch.io due to recent boycott campaigns. Some artists and fans have been organizing reverse boycotts calling for Visa, Mastercard, and others to end their “moral panic.” One such petition has nearly 100,000 signatures so far.

“Some of the games that have been caught up in the last day’s changes on Itch are games that up-and-coming creators have made about their own experiences in abusive relationships, or dealing with trauma, or coming out of the closet and finding their first romance as an LGBTQ person,” NYU Game Center chair Naomi Clark told 404 Media this week. She mentioned Jenny Jiao Hsia’s autobiographical Consume Me as one example of the type of work that could be censored under the platform’s shifting definitions of what’s acceptable.

[…]

Source: Nier: Automata Dev Warned About Credit Card Company Censorship

UK’s Stupid and Dangerous New Age Verification Requirement Thwarted in the Simplest Ways Imaginable

TL;DR – use a VPN or take a picture of yourself in Death Stranding

Earlier this week, the United Kingdom’s age assurance requirement for sites that publish pornographic material went into effect, which has resulted in everything from Pornhub to Reddit and Discord displaying an age verification panel when users attempt to visit. There’s just one little problem. As The Verge notes, all it takes to defeat the age-gating is a VPN, and those aren’t hard to come by these days.

Here’s the deal: Ofcom, the UK’s telecom regulator, requires online platforms to verify the age of their users if they are accessing a site that either publishes or allows users to publish pornographic material. Previously, a simple click of an “I am over 18” button would get you in. Now, platforms are mandated to use a verification method that is “strong” and “highly effective.” A few of those acceptable methods include verifying with a credit card, uploading a photo ID, or submitting to a “facial age estimation” in which you upload a selfie so a machine can determine if you look old enough to pleasure yourself responsibly.

Those options vary from annoying to creepily intrusive, but there’s a little hitch in the plan: Currently, most platforms are determining a user’s location based on IP address. If you have an IP that places you in the UK, you have to verify. But if you don’t, you’re free to browse without interruption. And all you need to change your IP address is a VPN.
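The IP-based gating described above can be sketched in a few lines. This is a hypothetical illustration — the prefix table stands in for a real GeoIP database lookup, and the address ranges are made up:

```python
# Minimal sketch of naive IP-based geo-gating. COUNTRY_BY_PREFIX is a
# stand-in for a real GeoIP database; these prefixes are illustrative
# only, not real address assignments.
COUNTRY_BY_PREFIX = {
    "81.2.69.": "GB",  # hypothetical UK range
    "8.8.8.": "US",    # hypothetical US range
}


def lookup_country(ip: str) -> str:
    """Map an IP address to a country code via crude prefix matching."""
    for prefix, country in COUNTRY_BY_PREFIX.items():
        if ip.startswith(prefix):
            return country
    return "UNKNOWN"


def requires_age_check(ip: str) -> bool:
    # The whole decision keys off the apparent source IP -- which is
    # exactly the thing a VPN replaces with its exit node's address.
    return lookup_country(ip) == "GB"
```

A UK user routing through, say, a US VPN exit node presents the exit node's IP to the site, so the lookup returns "US" and the gate never fires — which is the entire workaround.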

Ofcom seems aware of this very simple workaround. According to the BBC, the regulator has rules that make it illegal for platforms to host, share, or allow content that encourages people to use a VPN to bypass the age authentication page. It also encouraged parents to block or control VPN usage by their children to keep them from dodging the age checkers.

It seems that people are aware of this option. Google Trends shows that searches for the term “VPN” have skyrocketed in the UK since the age verification requirement went into effect.

[…]

But the thing about Ofcom’s implementation here is that it’s not just blocking kids from seeing harmful material—it’s exposing everyone to invasive, privacy-violating risks. When the methods for accomplishing the stated goal require people to reveal sensitive data, including their financial information, or give up pictures of their face to be scanned and processed by AI, it’s kinda hard to blame anyone for just wanting to avoid that entirely. Whether they’re horny teens trying to skirt the system or adults, getting a face scan before opening Pornhub kinda kills the mood.

Source: UK’s New Age Verification Requirement Thwarted in the Simplest Way Imaginable

An X user named Dany Sterkhov appears to be the first to discover the hack. On July 25, he posted that he had bypassed Discord’s age verification check using the photo mode in the video game Death Stranding.

[…]

The Verge and PCGamer have both tried Sterkhov’s hack themselves and confirmed it works.

Most of these companies rely on third-party platforms to handle age verification. These services typically give users the option to upload a government-issued photo ID or submit photos of themselves.

Discord uses a platform called k-ID for age verification. According to The Verge’s Tom Warren, all he had to do to pass the check was point his phone’s camera at his monitor to scan the face of Sam Bridges, the protagonist of Death Stranding, using the game’s photo mode. The system did ask him to open and close his mouth—something that is easy enough to do in the game.

Warren was also able to bypass Reddit’s age check, which is handled by Persona, using the same method. However, the trick didn’t work with Bluesky’s system, which uses Yoti for age verification.

[…]

ProtonVPN reported on X that it saw an over 1,400 percent increase in sign-ups in the U.K. after the age verification requirements took effect. VPNs let people browse the web as if they were in a different location, making it easier to bypass the U.K.’s age checks.

In the U.S., laws requiring similar age verification systems for porn sites have passed in nearly half the states. Nine states in the U.S. have also passed laws requiring parental consent or age verification for social media platforms.

Source: ‘Death Stranding’ Is Helping UK Users Bypass Age Verification Laws

The problem is that, besides being unenforceable, these schemes leave a lot of very personal data sitting in the age verifiers’ databases. Those databases are clear targets, and they will get hacked.