Wind Power Is Taking Off In China– All The Way To 2000 M AGL

The S2000 at a much lower altitude than 2000 m.

At 2000 m above ground level (AGL), winds are stronger and much, much more consistent than they are at the surface. Even if the Earth were a perfect sphere, there’d be a sluggish boundary layer at the surface, but since it’s got all these interesting bumps and bits and bobs, it’s not just sluggish but horribly turbulent, too. Getting above that, as much as possible, is why wind turbines are on big towers. Rather than build a really big tower, Beijing Lanyi Yunchuan Energy Technology Co. has gone for a more ambitious approach: an aerostat to take power from the steady winds found at high altitude. Ambitiously called the Stratosphere Airborne Wind Energy System (SAWES), the megawatt-scale prototype has recently begun feeding into the grid in Yibin, Sichuan Province.

The name might be a bit ambitious, since its 2000 m test flight is only a fraction of the way to the stratosphere, but Yibin isn’t a bad choice for testing: as it is well inland, the S2000 prototype won’t have to contend with typhoons or other ocean storms. The prototype is arguably as ambitious as the name: its 12 flying turbines have a peak capacity of three megawatts. True, there are larger turbines in wind farms right now, but at 60 m in length and 40 m in diameter, the S2000 has a lot of room to grow before hitting any kind of limit or even record for aerostats. We’re particularly interested in the double-hull construction – it would seem the ring of the outer gas bag would do a good job funneling and accelerating air into those turbines, but we’d love to see some wind tunnel testing or even CFD renderings of what’s going on in there.

A rear view shows the 12 turbines inside the double hull. It should guide air into the gap, but we wonder how much turbulence the trusses in there are making.

During its first test flight in January 2026, the system generated 385 kilowatt-hours of electricity over the course of 30 minutes. That means it averaged about 25% of its rated capacity for the test, which is a good safe start. Doubtless the engineers have a full suite of test flights planned to demonstrate the endurance and power production capabilities of this prototype. Longer flights at higher capacity may have already happened by the time you read this.
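The 25% figure checks out with a little arithmetic (the energy, duration, and peak rating come from the article; the calculation itself is just ours):

```python
# Sanity-check the S2000's capacity factor on its first grid-connected test.
energy_kwh = 385      # energy generated, per the article
duration_h = 0.5      # 30-minute test flight
peak_kw = 3000        # 3 MW peak capacity across the 12 turbines

avg_power_kw = energy_kwh / duration_h    # average output over the test
capacity_factor = avg_power_kw / peak_kw  # fraction of the peak rating

print(f"average power: {avg_power_kw:.0f} kW")        # 770 kW
print(f"capacity factor: {capacity_factor:.1%}")      # ~25.7%
```

For comparison, conventional onshore wind farms typically manage long-run capacity factors in the 25-45% range, so matching that on a first half-hour test is a reasonable start.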

Flying wind turbines aren’t a new idea by any means; a few years ago we featured this homemade kite generator, and the pros have been in on it too. Using helium instead represents an interesting design choice – on the plus side, it’s probably easier to control, and it obviously allows for large structures, but the downside is the added cost of the gas. It will be interesting to see how it develops.

We’re willing to bet it catches on faster than harvesting wind energy from trees.

All images from Beijing Lanyi Yunchuan Energy Technology Co., Ltd.

Source: Wind Power Is Taking Off In China– All The Way To 2000 M AGL | Hackaday

Microsoft starts to offer local versions of its Azure and 365 cloud products thanks to the EU

Quote

Azure Local can now run fully disconnected with no cloud connectivity, Microsoft confirmed at the London leg of its AI tour.

The latest change comes amid heightened trade and geopolitical tensions between the US administration and Europe, with more customers in the trading bloc seeking reassurances about digital sovereignty.

Like rival US hyperscalers, Microsoft has rolled out initiatives in Europe in a bid to address jittery locals worried about the possibility – no matter how remote – of service interruption or their data being accessed by American officials under the US CLOUD Act.

In March, Microsoft completed its EU Data Boundary service, then added more features in November. Yet for a growing number of organizations in Europe, only infrastructure under their direct control will do.

Azure Local (formerly Azure Stack HCI) is Microsoft’s answer to those concerns. Using specialized hardware, Azure Local lets customers run workloads on-premises. However, it still needed to phone home occasionally – its management via Azure Arc ran in the cloud, and pulling the plug for more than 30 days resulted in reduced functionality.

By making disconnected operations available in Azure Local, organizations can “now run mission-critical infrastructure with Azure governance and policy control, with no cloud connectivity, optimizing continuity for sovereign, classified or isolated environments,” Microsoft said this week.

In other words, no more calling back to the mothership.

Microsoft has also made Microsoft 365 Local available (think Exchange Server, SharePoint Server, and Skype for Business Server) and announced Foundry Local (only available to “qualified customers”).

“This brings the richness of Microsoft’s enterprise AI capabilities to on-premises systems, complete with local inferencing and APIs that operate completely within customer-controlled data boundaries,” Microsoft said.

Microsoft’s sovereignty claims may ring hollow for some after it admitted in France last year that it could not guarantee sovereignty if it were compelled to hand data to the US government. The ability to completely pull the plug is therefore intended to reassure customers, even if the software remains proprietary and supplied by a US tech giant.

[…]

“Sovereignty is increasingly a requirement, and we welcome any new services, tools, and software that can run in European Cloud Infrastructure Services Providers’ datacenters and on their own platforms. We look forward to testing these products against our forthcoming CISPE Sovereign Cloud Services Framework to see if they qualify for a Sovereign Badge or a Resilient Badge.”

[…]

Microsoft is not the only tech giant concerned about sovereignty. Amazon Web Services made its European Sovereign Cloud generally available earlier this year, and Google is selling customers a variety of solutions, including Google Cloud Airgapped, which runs on servers fully disconnected from the internet.

Whether these efforts satisfy customers will hinge on implementation and on how sovereignty is defined. Being able to disconnect completely will satisfy some, though others may still worry that the software remains under Microsoft’s control.

Enterprises in Europe looking to local tech providers to run their entire stack were given an example of how to do it last week. Plug-and-play it is not, but the rewards are obvious.

Source: Worried Europeans can now cut Azure’s phone cord completely • The Register

Prediction market Kalshi accuses two of insider trading: MrBeast editor and Republican candidate

Quote

An editor who works for YouTube’s biggest creator, MrBeast, has been suspended from the prediction market platform Kalshi and reported to federal regulators for insider trading, Kalshi officials said on Wednesday. It’s the first time the company has publicly revealed the results of an investigation into market manipulation on the popular app.

The MrBeast employee, who Kalshi identified as Artem Kaptur in regulatory filings, traded around $4,000 on markets related to the streamer, the company said.

But Kalshi investigators say Kaptur was using proximity to the streamer as a way of trying to make quick cash. Using confidential information to manipulate markets is prohibited by Kalshi’s rules and could violate federal law.

“We investigated and found that the trader was employed as an editor for the streamer’s show and likely had access to material non-public information connected to his trading,” said Robert DeNault, the company’s head of enforcement.

Kalshi said the company froze the account in question, so Kaptur was not able to withdraw any profits. He was fined $20,000 and suspended from the platform for two years. Kalshi also said the case was reported to regulators at the Commodity Futures Trading Commission, or CFTC, which oversees prediction markets like Kalshi.

[…]

Another trading case involved a former political candidate

Kalshi also unveiled a case against a former longshot Republican candidate in the California governor’s race, Kyle Langford, who posted on X in May that he bet on himself to win the statewide contest. He encouraged others to do the same.

While it appeared to be a social media stunt, it was also a violation of Kalshi’s rules and, regulators said, potentially a federal crime.

In a legal notice made public Wednesday, officials at Kalshi said that as a candidate, Langford was “a direct decision maker” for the market on the state’s governor’s race, prohibiting him from betting under internal guidelines against insider trading and market manipulation.

Kalshi banned Langford for five years from its platform and handed him a $2,200 fine.

“As a candidate in a race, you can (and probably should) follow and use Kalshi’s market forecast, but you should not trade on it,” Kalshi’s DeNault said.

[…]

Online prediction market platforms, such as Polymarket and Kalshi, have seen a surge in popularity during Trump’s second term. People can place bets on these platforms on wide-ranging issues such as what words people say at events, the outcome of elections or how much snow will fall in New York City.

The explosive growth of the industry is driven in part by the use of what many observers consider a legal loophole, which the Trump administration supports.

Instead of falling under the purview of state gambling laws, prediction markets are regulated in a more obscure way, as a type of “futures contract,” overseen by CFTC, which typically regulates bets on the future production of things like soybeans, corn and crude oil.

The Biden administration fought to stop prediction market apps from listing most types of contracts. It argued there was little public interest value in most of them, not to mention that they invite speculators to manipulate markets through insider trading.

[…]

Until recently, regulators had allowed a few dozen markets a year for futures trading. Now, there are more than 200,000 active prediction markets.

Prediction markets stirring increased insider trading fears

The burgeoning and controversial industry has run headlong into global affairs. In January, a trader made $400,000 in profit on Polymarket by placing a successful bet on the capture of the Venezuelan leader Nicolás Maduro before there was any public indication that would happen.

Earlier this month, Israeli authorities arrested several people and charged two on suspicion of using classified information to place bets about upcoming military operations in Iran on Polymarket.

Insider trading on Polymarket and Kalshi is prohibited by each platform’s rules, and is illegal under federal law, but experts say each company’s internal systems can only catch so much insider activity, which can take place by word of mouth or other means outside the prediction market apps.

Still, Kalshi says in the past year it has opened 200 investigations into insider trading, 12 of which are still ongoing.

[…]

Source: Kalshi accuses MrBeast editor of insider trading : NPR

The Two Key Villains of 2022’s Crypto Crash are Trying to Rewrite History

Quote

The crypto bubble that inflated through 2021 burst in 2022 with two defining failures.

In May, Terraform Labs’ algorithmic stablecoin UST lost its $1 peg, eventually leading to hyperinflation of the system’s underlying crypto collateral and wiping out an estimated $40 billion in crypto market value. The contagion triggered bankruptcies at a variety of crypto institutions, including Voyager Digital and BlockFi.

Months later, in November, crypto exchange giant FTX halted withdrawals and filed for bankruptcy. Customer funds had allegedly been diverted without consent to cover losses at sister trading firm Alameda Research, fund real estate, political donations, and other unapproved uses. The amount of money that was diverted is somewhat disputed, but what’s clear is that customers were unable to receive requested crypto withdrawals. Bitcoin bottomed below $20,000 amid the broader deleveraging, and reports later pointed to ties between the two crypto disasters.

Justice delivered partial accountability. Do Kwon, Terraform Labs co-founder, pleaded guilty to fraud and manipulation charges tied to misleading investors about UST’s stability. He received a 15-year prison sentence this past December, with victims testifying to the widespread destruction. Sam Bankman-Fried was convicted on seven counts, including wire fraud, securities fraud, and money laundering for the FTX misconduct. A judge sentenced him to 25 years in March 2024 and ordered $11 billion in forfeiture.

Both Bankman-Fried and lawyers associated with Terraform Labs are now working to recast their respective roles in the collapses.

Was FTX Actually Insolvent?

From prison, Bankman-Fried has posted on X claiming FTX was never technically insolvent. In a recent “10 Myths About Me & FTX” thread, he states the platform held more assets than liabilities, could have repaid customers in kind, and is now delivering 119-143% recoveries. He blames bankruptcy professionals for rushing a Chapter 11 filing, charging over $1 billion in fees, and dismantling the estate instead of allowing an orderly wind-down.

Most crypto industry insiders, where Bankman-Fried is viewed as the ultimate villain, dismiss this general argument. If assets were truly sufficient, withdrawals would not have been frozen. New York University Stern School of Business Adjunct Professor Austin Campbell noted that solvency for a crypto exchange means holding customer assets in the exact form and availability they expect, adding, “FTX did not have that. They were insolvent.” Galaxy Head of Firmwide Research Alex Thorn added that diverting deposits into illiquid bets against customers’ wishes amounts to theft, making the platform insolvent the moment redemptions failed.

The bankruptcy process may indeed have carried its own inefficiencies, with creditors flagging excessive legal fees that neared $1 billion and rushed asset sales. However, at the end of the day, misusing customer deposits without approval was still the original sin.

Bankman-Fried has also used his public posts to court a pardon from President Trump. The White House told Fortune this week that no pardon is in the works or planned.

Terraform Labs Blames Insider Traders Instead of Their Broken Stablecoin Model

In the matter of the other major collapse of 2022, Terraform Labs’ liquidation administrator is now suing trading firm Jane Street, alleging insider trading accelerated the UST depeg and LUNA disaster. However, while opportunistic or informed trading may have occurred as the run began, the fundamental issue was the broken stablecoin design. As the pseudonymous crypto advisor and strategist Hasu put it:

Let’s be extremely clear. UST failed because it was a ponzi scheme. It was a criminal enterprise that lured depositors with promise of high yield, paid from the deposits of new entrants. There is no possible universe where it didn’t go broke.

According to the new complaint, Jane Street allegedly obtained non-public information from Terraform insiders through private communication channels established by its employee and former Terraform member Bryce Pratt, who maintained contact with former colleagues, including a software engineer and the head of business development. A specific allegation involves May 7, 2022, when Terraform Labs withdrew 150 million UST from the Curve3pool without any public announcement; within 10 minutes, a wallet linked to Jane Street withdrew an additional 85 million UST from the same pool.

Bitcoin eventually recovered from the 2022 lows and reached new all-time highs near $125,000 in October 2025. But the rest of the crypto market has not followed suit as strongly as in past cycles, where altcoins have routinely outperformed bitcoin by wide margins during bull runs. For example, Ethereum, which was heavily marketed last cycle for DeFi dominance and its shift toward “ultrasound money,” currently trades far lower against bitcoin when compared to previous cycles, underscoring a growing divide between bitcoin and more speculative blockchain use cases.

A few crypto names have outperformed recently, but most exhibit heavy centralization in their associated tech stacks, reliance on centralized stablecoins, or both. Indeed, conversation around non-Bitcoin crypto increasingly centers on stablecoins, which in many ways operate more like centralized fintech products than open protocols. Earlier this week, it was revealed that Meta plans to implement stablecoin integration in their products later this year. Notably, the company previously attempted to create its own digital currency back in 2019 before regulators applied pressure and slowed things down.

Bitcoin has faced its own pressure recently, dropping roughly 50% from the October peak. The drop began with an October 10th deleveraging event driven more by smaller altcoins than bitcoin itself, echoing the post-Terra unwind, according to CNBC. Narratives questioning bitcoin’s “digital gold” status have also resurfaced as physical gold outperformed amid geopolitical strains, including tensions over Greenland. That said, Bitcoin encountered similar doubts after its March 2020 crash at the start of COVID before eventually experiencing another boom during the pandemic.

Source: The Two Key Villains of 2022’s Crypto Crash are Trying to Rewrite History

Same Poop, Different Results: At-Home Gut Health Tests Are Wildly Inconsistent

Quote

The bacteria that live inside our digestive tract undoubtedly play a vital part in our health. But buyer beware of companies that claim to have deciphered the gut microbiome. Research out today shows that no two at-home tests will tell you the same thing.

Government scientists sent standardized fecal samples to seven different gut health testing companies. The companies returned results that varied from one another, sometimes dramatically, while one company’s tests couldn’t conclusively decide if the same samples belonged to a healthy microbiome or not. The findings indicate that customers shouldn’t put too much stock in these tests, at least right now, the researchers say.

“Our results demonstrate the need for standards to ensure analytical validity and consumer confidence,” the authors wrote in their paper, published Thursday in Communications Biology.

Not quite there yet

Exciting as the field of gut health is, it’s very much in its infancy. We’re still not quite sure exactly what makes for a healthy mix of bacteria in our guts, much less how to reliably fix an unhealthy microbiome (it’s likely there are many different combinations of bacteria that could be “healthy”). And we’re still trying to untangle the complex interactions between our gut bacteria and various health conditions.

This uncertainty hasn’t stopped several companies from entering the direct-to-consumer industry, however. While some may be cautious in their advertising, others have claimed their tests can tell whether a person’s microbiome is healthy, and they might even sell products that will supposedly restore a dysfunctional one. Many scientists have already called for tighter regulation of these tests. Researchers at the National Institute of Standards and Technology, a division of the U.S. Department of Commerce, and others sought to gauge the reliability of these tests across different companies.

[…]

Source: Same Poop, Different Results: At-Home Gut Health Tests Are Wildly Inconsistent, Study Finds

Open Source Endowment aims to raise big pile of money

Quote

Open source projects, ever short of funding, have a potential new source of revenue in the form of the Open Source Endowment (OSE).

The organization describes itself as “the world’s first endowment fund for open source software.”

There are certainly other organizations that help fund open source software, such as Open Collective, Open Source Collective, and the Rust Foundation’s Maintainers Fund, not to mention organizations like the Software Freedom Conservancy, which provides legal and infrastructure support to open source projects. Open source developers may also be fortunate enough to receive contributions from individuals, companies (when not passing the buck), and government-sponsored initiatives like Germany’s Sovereign Tech Fund.

But OSE aspires specifically to build a big pile of cash – an endowment – that it will dole out to deserving open source projects.

It’s certainly needed. In 2023, Denis Pushkarev, maintainer of the widely used core-js library, vented his frustration with the fact that users of his software seldom offer financial support. “Free open source software is fundamentally broken,” he said.

The year before that, Christofer Dutz – creator of Apache PLC4X – lamented uncompensated use of his software. Earlier in 2022, Google talked up the need to support critical open source infrastructure, citing the log4j vulnerability.

But concerns about the sustainability of open source go back further still. Two years after the 2014 Heartbleed vulnerability – a dangerous flaw in OpenSSL – a Ford Foundation report noted that the OpenSSL project was critical internet infrastructure, yet it had just one full-time maintainer and earned less than $2,000 per year in donations.

As OSE points out, 95 percent of codebases rely on open source software, each of which has an average of 500 open source components. And yet 86 percent of open source contributors receive no payment for their work.

OSE founding chairman Konstantin Vinogradov, a venture capital investor, previously said he wanted to replicate the funding model that has sustained universities.

And he reiterated that aspiration in a Hacker News post announcing OSE.

Universities and the open source community, he argues, share reputation-based culture and functions, working together to create valuable ideas for the benefit of the public, educating each other, and commercializing only a portion of what’s produced.

“For universities, humanity has just two sustainable funding models: public spending or private endowments,” Vinogradov explained. “Government support won’t work for OSS at scale – it’s too globally decentralized. And yet nobody had built an OSS-focused endowment before. After understanding why, I started building one together with other OSS folks.”

Vinogradov said the OSE, a US 501(c)(3) tax-exempt charity, aims to make open source development more sustainable through a community-driven endowment. Donations will be invested and only investment income will be disbursed through grants – the principal funds will remain invested in the hope of growth.
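The mechanics Vinogradov describes can be sketched in a few lines. Everything below is illustrative: the 7% return and 4% payout are hypothetical, university-endowment-style figures, not OSE policy; only the roughly $700,000 starting size comes from the article.

```python
# Illustrative endowment mechanics: grants are paid out of investment
# returns while the principal stays invested and (hopefully) grows.
principal = 700_000.0   # roughly the fund's current size, per the article
return_rate = 0.07      # hypothetical annual investment return
payout_rate = 0.04      # hypothetical annual grant payout rate

for year in range(1, 4):
    grants = principal * payout_rate
    principal = principal * (1 + return_rate) - grants
    print(f"year {year}: ${grants:,.0f} in grants, principal ${principal:,.0f}")
```

As long as returns outpace the payout rate, the principal compounds and grant capacity grows a little each year, which is the appeal of the model over one-off donations.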

Presently, the fund stands at around $700,000, thanks to contributions from more than 60 founding donors, including the founders of ClickHouse, curl, Elastic, Gatsby, HashiCorp, n8n, Nginx, Pydantic, Supabase, and Vue.js.

Donations go directly to the fund, and those who give over $1,000 can become OSE Members, which includes certain rights to participate in OSE governance.

The group has detailed its grant selection process on the OSE website and in its GitHub repository.

According to Vinogradov, “OSE won’t give money for commercial product development – it is dedicated to supporting existing highly-used nonprofit and independent OSS.”

Source: Open Source Endowment aims to raise big pile of money • The Register

Common Corpus, an open training set for AI, goes global – and so should support for it – Walled Culture

Quote

As many of the AI stories on Walled Culture attest, one of the most contentious areas in the latest stage of AI development concerns the sourcing of training data. To create high-quality large language models (LLMs) massive quantities of training data are required. In the current genAI stampede, many companies are simply scraping everything they can off the Internet. Quite how that will work out in legal terms is not yet clear. Although a few court cases involving the use of copyright material for training have been decided, many have not, and the detailed contours of the legal landscape remain uncertain.

However, there is an alternative to this “grab it all” approach. It involves using materials that are either in the public domain or released under a “permissive” licence that allows LLMs to be trained on them without any problems. There’s plenty of such material online, but its scattered nature puts it at a serious disadvantage compared to downloading everything without worrying about licensing issues. To address that, the Common Corpus was created and released just over a year ago by the French startup Pleias. A press release from the AI Alliance explains the key characteristics of the Common Corpus:

Truly Open: contains only data that is permissively licensed and provenance is documented

Multilingual: mostly representing English and French data, but contains at least 1[billion] tokens for over 30 languages

Diverse: consisting of scientific articles, government and legal documents, code, and cultural heritage data, including books and newspapers

Extensively Curated: spelling and formatting has been corrected from digitized texts, harmful and toxic content has been removed, and content with low educational content has also been removed.

There are five main categories of material: OpenGovernment, OpenCulture, OpenScience, OpenWeb, and OpenSource:

OpenGovernment contains Finance Commons, a dataset of financial documents from a range of governmental and regulatory bodies. Finance Commons is a multimodal dataset, including both text and PDF corpora. OpenGovernment also contains Legal Commons, a dataset of legal and administrative texts. OpenCulture contains cultural heritage data like books and newspapers. Many of these texts come from the 18th and 19th centuries, or even earlier.

OpenScience data primarily comes from publicly available academic and scientific publications, which are most often released as PDFs. OpenWeb contains datasets from YouTube Commons, a dataset of transcripts from public domain YouTube videos, and websites like Stack Exchange. Finally, OpenSource comprises code collected from GitHub repositories which were permissively licensed.

The initial release contained over 2 trillion tokens – the usual way of measuring the volume of training material, where tokens can be whole words or parts of words. A significant recent update of the corpus has taken that to over 2.267 trillion tokens. Just as important as the greater size is the wider reach: there are major additions of material from China, Japan, Korea, Brazil, India, Africa and South-East Asia. Specifically, the latest release contains data for eight languages with more than 10 billion tokens (English, French, German, Spanish, Italian, Polish, Greek, Latin) and 33 languages with more than 1 billion tokens. Because of the way the dataset has been selected and curated, it is possible to train LLMs on fully open data, which leads to auditable models. Moreover, as the original press release explains:
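As a very rough illustration of what a token is: real LLM tokenizers (byte-pair encoding and the like) split text into learned subword chunks, and the four-characters-per-token figure below is only a common English rule of thumb, not anything Pleias specifically uses.

```python
# Naive illustration: a "token" is roughly a word or piece of a word,
# so a token count sits between the word count and the character count.
text = "Common Corpus now exceeds two trillion tokens"

word_count = len(text.split())    # whole words: 7
approx_tokens = len(text) / 4     # rule of thumb: ~4 characters per English token

print(word_count, round(approx_tokens))
```

By this crude estimate a 2-trillion-token corpus corresponds to well over a trillion English words, which gives a sense of the scale involved.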

By providing clear provenance and using permissibly licensed data, Common Corpus exceeds the requirements of even the strictest regulations on AI training data, such as the EU AI Act. Pleias has also taken extensive steps to ensure GDPR compliance, by developing custom procedures to enable personally identifiable information (PII) removal for multilingual data. This makes Common Corpus an ideal foundation for secure, enterprise-grade models. Models trained on Common Corpus will be resilient to an increasingly regulated industry.

Another advantage for many users is that material with high “toxicity scores” has already been removed, thus ensuring that any LLMs trained on the Common Corpus will have fewer problems in this regard.

The Common Corpus is a great demonstration of the power of openness and permissive copyright licensing, and how they bring benefits that other approaches can’t match. For example: “Common Corpus makes it possible to train models compatible with the Open Source Initiative’s definition of open-source AI, which includes openness of use, meaning use is permitted for ‘any purpose and without having to ask for permission’.” That fact, along with the multilingual nature of the Common Corpus, would make the latest version a great fit for any EU move to create “public AI” systems, something advocated on this blog a few months back. The French government is already backing the project, as are other organisations supporting openness:

The Corpus was built up with the support and concerted efforts of the AI Alliance, the French Ministry of Culture as part of the prefiguration of the service offering of the Alliance for Language technologies EDIC (ALT-EDIC).

This dataset was also made in partnership with Wikimedia Enterprise and Wikidata/Wikimedia Germany. We’re also thankful to our partner Libraries Without Borders for continuous assistance on extending low resource language support.

The corpus was stored and processed with the generous support of the AI Alliance, Jean Zay (Eviden, Idris), Tracto AI, Mozilla.

The unique advantages of the Common Corpus mean that more governments should be supporting it as an alternative to proprietary systems, which generally remain black boxes in terms of where their training data comes from. Publishers too would also be wise to fund it, since it offers a powerful resource explicitly designed to avoid some of the thorniest copyright issues plaguing the generative AI field today.

Source: Common Corpus, an open training set for AI, goes global – and so should support for it – Walled Culture

Loophole found that makes quantum cloning possible (kind of)

In quantum mechanics, the idea that quantum information can’t be duplicated is ironclad – or at least, it was. A surprising approach to backing up qubits, the basic units of quantum computers, appears to allow a sidestepping of this fundamental law of physics.

The no-cloning theorem was first discovered by researchers in the 1980s. It says that quantum states that describe all the information about a system can’t be copied. Attempting to measure the information to copy it would simply destroy the delicate quantum properties that you want to measure. This fact has proved important for quantum technologies like encryption, leading to simple protocols that prevent information from being copied and hacked.

Achim Kempf at the University of Waterloo in Canada and his colleagues have now shown that a quantum system can, in fact, be cloned, as long as the information about it is encrypted and enclosed with a special, one-off decryption key.

“You can make a lot of copies and generate redundancy in this way, but you have to encrypt the copies, and the decryption key can only be used once,” says Kempf. “This makes it compatible with a no-cloning theorem, because it says there can only ever be at most one clear, obvious, readable, non-encrypted copy of a qubit.”
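As a loose classical analogy only (the team’s actual protocol is quantum, and nothing below captures the physics), a one-time pad shows the flavor of “many encrypted copies, at most one readable plaintext”: each copy needs its own single-use key, and a pad must never be reused.

```python
import secrets

def encrypt(data: bytes) -> tuple[bytes, bytes]:
    """XOR one-time-pad encryption; returns (ciphertext, single-use key)."""
    key = secrets.token_bytes(len(data))
    return bytes(d ^ k for d, k in zip(data, key)), key

def decrypt(ciphertext: bytes, key: bytes) -> bytes:
    return bytes(c ^ k for c, k in zip(ciphertext, key))

state = b"qubit-stand-in"
# Keep several redundant copies, each encrypted under its own fresh pad...
copies = [encrypt(state) for _ in range(3)]
# ...any one of which can be restored with its matching key:
ciphertext, key = copies[0]
assert decrypt(ciphertext, key) == state
```

The crucial difference is that in the quantum setting the single-use property of the key is enforced by physics rather than by policy, which is what keeps the scheme compatible with the no-cloning theorem.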

[…]

Once they had proved this result theoretically, the team then showed that this protocol could work on a real IBM Heron 156-qubit quantum computing processor.

Because the technique is fairly resistant to noise and errors that are ubiquitous in today’s quantum computers, Kempf and his team found they could make hundreds of encrypted clones of single qubits, by repeating the process over and over again. “In fact, we ran out of real estate on the IBM processor. It holds only 156 qubits but we estimated that we can do more than 1000 encrypted clones before the [errors] make us stop.”

This modification to the no-cloning theorem could have uses for a quantum cloud storage or computing service, says Kempf. “If you send a file to Dropbox, it will save your data at least three times in three different computers that are geographically separated, so that if one is hit by fire, the other one by a flood, there’s a fair chance the third one survives,” says Kempf. “It used to be thought you can’t do that with quantum information, because you can’t clone it. But what we showed is that you can do it.”

[…]

Kempf agrees. “It’s not cloning. It’s encrypted cloning,” he says. “That’s just a refinement of the no-cloning theorem.”

Journal reference

Physical Review Letters DOI: 10.1103/y4y1-1ll6

arXiv: 2602.10695

Source: Loophole found that makes quantum cloning possible | New Scientist

‘Likely the Largest Breach in U.S. History’: What You Need to Know About the Conduent Fiasco

Link

At least 26 million people have had their personal data stolen from Conduent, a company that provides printing, payment, and document processing services for some of the largest health insurance providers in the country. Some are already calling it one of the largest data breaches in U.S. history, exposing addresses, social security numbers, and health information to ransomware hackers.

Conduent first discovered it was the victim of a “cyber incident” over a year ago on January 13, 2025, according to a notice posted online by the company. The breach itself happened from October 21, 2024, to January 13, 2025, and involved data held by Conduent because the company provides services to health plans.

The data included names, social security numbers, unspecified medical information, and health insurance information. The company emphasized in its notice that “not every data element was present for every individual,” meaning that some people may have just had their social security number stolen but not their health insurance info, or vice versa.

The full scale of the breach is still unclear. Texas Attorney General Ken Paxton wrote last week that over 4 million Texans had their data stolen, but Fox News reports that number has jumped to 15.4 million people. Texas has a total population of 31 million, meaning that roughly half the entire state was impacted.

[…]

Oregon reported on its consumer protection website that 10.5 million people were swept up in the breach, which brings the running total to about 26 million. But residents of other states have also received notices, including people in California, Delaware, Massachusetts, New Hampshire, and New Mexico. Some states have relatively small numbers, like Maine, where just 374 people had their data exposed, according to the state’s Attorney General.

Conduent, which is based in New Jersey, didn’t respond to emailed questions on Tuesday about the full scope of the hack and what victims can do about it.

[…]

Source: ‘Likely the Largest Breach in U.S. History’: What You Need to Know About the Conduent Fiasco

Nearby Glasses Warns You When a Glasshole is Nearby

The app, called Nearby Glasses, has one sole purpose: Look for smart glasses nearby and warn you.

Get It On Google Play

This app notifies you when smart glasses are nearby. It uses company identifiers in the Bluetooth data these devices send out. Therefore, there will likely be false positives (e.g. from VR headsets). Hence, please proceed with caution when approaching a person nearby wearing glasses. They might just be regular glasses, despite this app’s warning.

The app’s author Yves Jeanrenaud takes no liability whatsoever for this app or its functionality. Use at your own risk. By technical design, detecting Bluetooth LE devices might sometimes just not work as expected. I am not a trained developer; this is all written in my free time, with knowledge I taught myself.
False positives are likely. This means the app Nearby Glasses may notify you of smart glasses nearby when there is in fact a VR headset from the same manufacturer, or another product from that company’s lineup. It may also miss smart glasses that are nearby. Again: I am no pro developer.
However, this app is free and its source is available (though it’s not considered FOSS due to the non-commercial restriction); you may review the code, change it, and re-use it (under the license).
The app Nearby Glasses does not store any details about you or collect any information about you or your phone. There is no telemetry, no ads, and no other nuisance. If you install the app via the Play Store, Google may know something about you and collect some stats. But the app itself does not.
If you choose to store (export) the logfile, that is completely up to you, and where that data goes is your responsibility. The logs are recorded only locally and are not automatically shared with anyone. They contain little sensitive data; in fact, only the manufacturer ID codes of BLE devices encountered.

Use with extreme caution! As stated before: there is no guarantee that detected smart glasses are really nearby. It might be another device that looks technically similar (at the BLE advertising level) to smart glasses.
Please do not act rashly. Think before you act upon any messages (not only from this app).

Why?

  • Because I consider smart glasses an intolerable, consent-neglecting, horrible piece of tech that is already being used to make tons of equally, truly disgusting ‘content’. 1, 2
  • Some smart glasses feature a small LED signifying that a recording is in progress. But this is easily disabled, while manufacturers claim to prevent that and take no responsibility at all (as tech has tended to do for decades now). 3
  • Smart glasses have been used for instant facial recognition before 4 and reportedly will ship with it out of the box 5. This puts a lot of people in danger.
  • I hope this app is useful for someone.

How?

  • It’s a simple, rather heuristic approach. Because BLE uses randomised MAC addresses, and neither the OSSID nor the UUIDs of the service announcements are stable, you can’t just scan for the Bluetooth beacons. And, to make things even more dire, some manufacturers, like Meta, use proprietary Bluetooth services with non-persistent UUIDs, so we can only rely on the communicated device names and manufacturer codes for now.
  • The currently most viable approach comes from the Bluetooth SIG assigned numbers repo. Following this, the manufacturer’s name shows up as a number code in the advertising packet header (ADV) of BLE beacons.
  • This is what BLE advertising frames look like:
Frame 1: Advertising (ADV_IND)
Time:  0.591232 s
Address: C4:7C:8D:1E:2B:3F (Random Static)
RSSI: -58 dBm

Flags:
  02 01 06
    Flags: LE General Discoverable Mode, BR/EDR Not Supported

Manufacturer Specific Data:
  Length: 0x1A
  Type:   Manufacturer Specific Data (0xFF)
  Company ID: 0x058E (Meta Platforms Technologies, LLC)
  Data: 4D 45 54 41 5F 52 42 5F 47 4C 41 53 53

Service UUIDs:
  Complete List of 16-bit Service UUIDs
  0xFEAA
  • According to the Bluetooth SIG assigned numbers repo, we may use these company IDs:
    • 0x01AB for Meta Platforms, Inc. (formerly Facebook)
    • 0x058E for Meta Platforms Technologies, LLC
    • 0x0D53 for Luxottica Group S.p.A. (who manufactures the Meta Ray-Bans)
    • 0x03C2 for Snapchat, Inc., which makes the Snap Spectacles
    These company IDs are immutable and mandatory. Of course, Meta and other manufacturers also have other products that come with Bluetooth and therefore carry their company ID, e.g. VR headsets. Using these codes for the app’s scanning process is therefore prone to false positives. But if you can’t see someone wearing an Oculus Rift around you, and there are no buildings where one could hide, chances are good that it’s smart glasses instead.
  • During pairing, the smart glasses usually emit their product name, so we can scan for that, too. But it’s rare that we will see that in the field: people who intend to use smart glasses in bars, pubs, on the street, and elsewhere usually prepare beforehand.
  • When the app recognises a Bluetooth Low Energy (BLE) device with sufficient signal strength (see RSSI below), it will push an alert message. This should help you act accordingly.

[…]

Source: Github repo

AWS says 600+ FortiGate firewalls hit in AI-augmented attack

Cybercriminals armed with off-the-shelf generative AI tools compromised more than 600 internet-exposed FortiGate firewalls across 55 countries in just over a month, according to a new incident report from AWS.

The campaign, which ran from mid-January to mid-February, relied less on clever zero-days and more on the equivalent of trying every digital door handle – just at machine speed, with AI lending a hand behind the scenes.

AWS says the financially motivated Russian-speaking crew behind the campaign scanned for exposed FortiGate management interfaces, tried commonly reused or weak credentials, and then hoovered up configuration files once inside, giving them a roadmap of victim networks.

The cloud giant’s security team says the actor used multiple commercial AI tools to generate attack playbooks, scripts, and operational notes, effectively allowing a relatively low-skilled outfit to run a campaign that would previously have required more people or time. Investigators even found evidence of AI-generated code and planning artifacts on compromised infrastructure, suggesting the tools were embedded throughout the workflow rather than just used for the odd bit of scripting.

“The volume and variety of custom tooling would typically indicate a well-resourced development team,” said CJ Moses, CISO at Amazon. “Instead, a single actor or very small group generated this entire toolkit through AI-assisted development.”

Once the firewall was cracked, the attackers pulled configuration files containing administrator and VPN credentials, network topology details, and firewall rules. From there, they moved deeper into environments, going after Active Directory, dumping credentials, and probing for ways to move laterally. Backup systems, including Veeam servers, were also on the shopping list.

AWS says the tooling it observed was functional but rough around the edges, with simplistic parsing logic and the sort of redundant comments that suggest a machine wrote the first draft. That didn’t stop it from being effective enough for broad automation, though the miscreants reportedly tended to abandon targets that put up too much resistance and move on to softer ones, reinforcing the idea that volume rather than finesse was the winning strategy.

Geographically, the activity was opportunistic rather than tightly targeted, with victims spread across multiple regions, including parts of Europe, Asia, Africa, and Latin America. Clusters of activity suggested that some compromises may have enabled access to managed service providers or larger shared environments, amplifying downstream risk.

The report leans heavily on the idea that basic hygiene – keeping management interfaces off the public internet, enforcing multi-factor authentication, and not recycling passwords – would have shut down much of the activity before it got going.

The findings land just weeks after Google warned that criminals are increasingly wiring generative AI directly into their operations, including its own Gemini AI chatbot, for tasks ranging from reconnaissance and target profiling to phishing and malware development.

Source: AWS says 600+ FortiGate firewalls hit in AI-augmented attack • The Register

IDMerit age verification leak: More Than 1 Billion IDs And Photos Exposed

[…] Cybersecurity researchers have confirmed they discovered a massive “treasure trove” of unsecured data, with information on individuals from 26 countries (topped by the U.S.), which appears to be linked to an AI-powered identity verification service. Totalling almost a terabyte of data and 1 billion records, the exposed information included national IDs, full names, addresses, phone numbers, and email addresses.

Just when you think things couldn’t get any worse, those same researchers have now disclosed yet another AI-related data leak, this time impacting users of an Android app that deploys AI to give selfies “cinematic makeovers.” While not quite in the same league as the first, that will be cold comfort if your photos and videos were among the 2 million left exposed.

Unsecured AI Service Know Your Customer Data Exposed In 1 Billion Record Leak

There is a danger, given the sheer number of published reports concerning data leaks (including, most recently, 48 million Gmail passwords and usernames exposed as part of a 149-million-record database incident), that we become used to such incidents and shrug them off. When an exposed database contains 1 billion records, 203 million of them impacting the U.S., questions need to be asked and notice taken. The Cybernews research team has confirmed that the databases, a collection of them within a single exposed MongoDB instance, were discovered on November 11, and that the company concerned, an AI-powered digital identity verification provider called IDMerit, was contacted on November 12. The company plugged the leak the same day.

[…]

Source: New AI Data Leaks—More Than 1 Billion IDs And Photos Exposed

Age verification checks are now in force in the UK because of the Online Safety Act, but with the Discord fallout, it seems like one bad idea after another

Currently, I can’t check my Bluesky direct messages until I’ve allowed the Epic Games-owned KWS to look at either my bank card, my ID, or my wizened visage. As I’m based in the UK, it’s not just Bluesky I’ve got to worry about either, with similar verification processes now present on Reddit, Discord, and even my partner’s Xbox.

This is all due to the Online Safety Act, which came into effect in the UK last year. For many, these age checks are an annoyance at best—but they also represent something that will have ramifications far beyond the British Isles. The UK’s Act was designed in part to ensure children in the UK could not easily access “harmful content.” This is a broad term that includes but is not limited to pornography, content that promotes “self-harm, eating disorders, or suicide,” and “bullying”.

To comply with the act and differentiate children from the adults, many platforms have opted for age-gates like the one I’m encountering on Bluesky. Almost 70% of Brits surveyed shortly after the Online Safety Act came into effect said they supported it…though 64% didn’t think it would be all that effective. Indeed, I could log into a VPN to get past the UK-based Bluesky block—though unfortunately for me, I am stubborn, lazy, and cheap (apologies if you’ve been trying to get ahold of me).

Besides all that, I’m not especially keen to hand over my personal data to a third-party age verification vendor such as KWS for data privacy reasons. As recently as October, a Discord security breach may have leaked 70,000 age-verification ID photos. Discord’s primary age-verification partner, K-ID, was keen to clarify that it was not involved.

As Jacob has previously outlined, there are better ways to implement age checks. As it stands, though, I’m not naive enough to think the data I keep elsewhere is in hands that are any safer. However, not submitting to an age assurance check makes for one less point of failure from which my likeness or even my official documents can leak out.

Discord first announced it would be using Brits as age assurance guinea pigs back in April 2025, but it turns out that may have all been prologue. Just in case you’ve been napping under a cool mossy rock for the last while, the social platform caused quite a stir this month when it announced it would be rolling out age-verifying facial scans and ID checks globally this March. The case can be made that it is ‘complying in advance,’ as the UK’s approach to online safety potentially serves as a preview for PC gamers further afield.


On the one hand, yeah, I’d rather children growing up today didn’t see all the things I saw thanks to having unfettered internet access throughout the early oughts.

Why not? I survived rotten.com and goatse – but then again, the internet didn’t have much in the way of fake news, hate speech or echo chambers…

I’d also rather young’uns now didn’t have to experience all the harassment I experienced at the hands of my own peers, newly empowered by that unfettered internet access.

On the other hand, the internet answered a lot of questions I was absolutely not going to ask my parents; when I see a vague term like “harmful content” I do have to wonder what genuinely educational resources on the wider internet—say, regarding art history or personal health—might end up age-gated because someone somewhere has decided they’re tantamount to ‘pornography.’

I’m only just the other side of 30, but Section 28 was still in effect for some of my school years. For those who don’t know, Section 28 was a law that prevented schools in England, Scotland, and Wales from doing anything that could be interpreted as “intentionally [promoting] homosexuality or [publishing] material with the intention of promoting homosexuality”. So, until the law was repealed in the early 2000s, a lot of schools simply pretended LGBTQIA+ folks didn’t exist. The internet, for all of its faults, helped to fill that deafening silence for me.

A screenshot of a 3D model being used to pass the Discord age verification system

(Image credit: PromptPirate on GitHub)

Even so, I remember there being content blocks back in my day, too, and I know I found more than a few ways around those. Indeed, if we take just Discord today, our James has found not one but two different ways to fool its face scans—though the platform may already be formulating a counter to these workarounds.

Shortly after issuing assurances that not all users will even have to undergo an age check, a since-edited support article revealed that some UK users “may be part of an experiment where your information will be processed by an age-assurance vendor, Persona.” Amid reports of folks easily fooling its primary third-party vendor’s age verification checks, Discord may have been seeking to diversify its defences.

Persona’s investors include Peter Thiel, co-founder of ICE’s premier surveillance provider, Palantir. Though Persona and Palantir are two totally separate companies that do not share either data or operations, that’s still a pretty grimy connection. Not least of all because earlier this week, the US Department of Homeland Security reportedly subpoenaed a number of major online platforms—including Discord, Reddit, Google, and Meta—in order to obtain the personal details of accountholders who had been critical of ICE or identified the locations of its agents. We don’t yet know if Discord complied, though we have reached out for comment.


There is an even worse wrinkle in the Discord-Persona ‘experiment’: while Discord had previously said that data like age verification face scans would only be stored and processed on users’ own devices, those who ended up part of the Persona experiment may have their information “temporarily stored for up to 7 days, then deleted.”

Indeed, some security researchers are already claiming to have “found a Persona frontend exposed to the open internet on a US government-authorized server.”

All of that said, Persona is not part of Discord’s long-term strategy, with the platform telling Kotaku earlier this week that its dealings with the vendor were part of a “limited test” that has since concluded. That leaves K-ID’s on-device processing in effect, but even that doesn’t necessarily end the privacy nightmare. Data breaches usually leave platforms scrambling for user goodwill, but Discord seems all too happy to keep walking into rakes.

One could jump ship and shop around for a free Discord alternative as I recently did, but all of the platforms I tested will likely have to implement some sort of age assurance check if they haven’t already in order to continue serving users based in the UK in the future. That doesn’t mean I’ll be letting them scan my face any time soon; I may have to deploy Norman Reedus and his funky foetus before long as third-party age verification vendors have done little to earn my trust or a gander at my actual face.

Source: Age verification checks are now in force in the UK because of the Online Safety Act, but with the Discord fallout, it seems like one bad idea after another | PC Gamer

How shaming unethical brands makes companies improve their behavior

This article is riddled with huge assumptions about causality and about the amplification social media can offer, completely unhampered by any research. But the actual research that is interspersed in the article is interesting.

[…]Discovering that an ordinary purchase may be tied to exploitation or environmental damage creates a jolt of personal responsibility. In our research, we found that when environmental consequences are clearly linked to people’s own buying choices, many are willing to switch products—especially when credible alternatives exist.

But guilt is private. It nudges personal behavior. It does not automatically reshape systems. The shift happens when private discomfort becomes public voice.

Consumers are often also the first to make hidden environmental harms visible. They post evidence on social media. They question corporate claims. They compare sustainability promises with independent reporting. They organize petitions, boycotts and review campaigns. By shining a spotlight on the truth, the scrutiny shifts from shoppers to brands.

That shift matters because modern brands depend on trust. Reputation is an asset. When sustainability claims are publicly challenged, credibility is at risk. Research in organisational behaviour shows that firms respond quickly to threats to legitimacy. Reputational damage affects customer loyalty, investor confidence and regulatory attention.

[…]

When the gap between what companies say and what they do becomes visible, maintaining that gap becomes harder.

Our research explores how that visibility can be strengthened. The findings were clear. When environmental and social consequences are personalized and traceable, sustainability feels less distant. People see both their own role and the role of particular firms. That dual awareness encourages two responses: behavioral change driven by guilt and corporate accountability driven by shame.

Shame works because it is social. Brands care about how they are seen. When the negative environmental and social effects of supply chains can be publicly connected to named products, corporate narratives become contestable in real time.

[…]

Source: How shaming unethical brands makes companies improve their behavior

3D-printing platform produces working electric motor in hours

A broken motor in an automated machine can bring production on a busy factory floor to a halt. If engineers can’t find a replacement part, they may have to order one from a distributor hundreds of miles away, leading to costly production delays.

It would be easier, faster, and cheaper to make a new motor onsite, but fabricating electric machines typically requires specialized equipment and complicated processes, which restricts production to a few manufacturing centers.

In an effort to democratize the manufacturing of complex devices, MIT researchers have developed a multimaterial 3D-printing platform that could be used to fully print electric machines in a single step.

They designed their system to process multiple functional materials, including electrically conductive materials and magnetic materials, using four extrusion tools that can handle varied forms of printable material. The printer switches between extruders, which deposit material by squeezing it through a nozzle as it fabricates a device one layer at a time.
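As a rough illustration of why multi-extruder planning matters, here is a toy sketch that groups a layer’s segments by material to cut down on extruder switches. This is not MIT’s actual planner; the material names and layer layout are made up for the example:

```python
from itertools import groupby

# A toy layer plan: each segment names the material it needs.
# The material set here is an assumption for illustration only.
layer = ["structural", "structural", "conductive", "structural", "magnetic", "conductive"]

def schedule(segments):
    """Reorder a layer's segments so that all segments sharing a material are
    printed consecutively, minimizing extruder switches within the layer.
    (A real slicer must also respect geometric constraints; this toy ignores them.)"""
    order = []
    for material in dict.fromkeys(segments):  # first-seen order, deduplicated
        order += [s for s in segments if s == material]
    return order

def tool_changes(segments):
    """Count extruder switches: one per change of material along the schedule."""
    return sum(1 for _ in groupby(segments)) - 1

naive = tool_changes(layer)              # print in model order: 4 switches
planned = tool_changes(schedule(layer))  # grouped by material: 2 switches
```

Grouping deposition by material halves the tool changes in this toy layer, which is the kind of saving that makes single-step multimaterial printing practical.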

The researchers used this system to produce a fully 3D-printed electric linear motor in a matter of hours using five materials. They only needed to perform one post-processing step for the motor to be fully functional.

The assembled device performed as well or better than similar motors that require more complex fabrication methods or additional post-processing steps.

In the long run, this 3D printing platform could be used to rapidly fabricate customizable electronic components for robots, vehicles, or medical equipment with much less waste.

[…]

Source: 3D-printing platform rapidly produces complex electric machines | MIT News | Massachusetts Institute of Technology

Elon’s Falcon 9 dumps huge amounts of lithium over the EU during burn-up

The SpaceX Falcon 9 rocket that burned up over Europe last year left a massive lithium plume in its wake, say a group of scientists. They warn the disaster is likely a sign of things to come as Earth’s atmosphere continues to become a heavily trafficked superhighway to space.

In a paper published Thursday, an international group of scientists reports what they say is the first measurement of upper-atmosphere pollution resulting from the re-entry of space debris, as well as the first time ground-based light detection and ranging (lidar) has been shown to be able to detect space debris ablation.

The measurements stem from a SpaceX Falcon 9 upper stage that sprang an oxygen leak about a year ago, sending it into an uncontrolled re-entry; it then broke up and rained debris down on Poland. The rocket not only littered farm fields, but also injected lithium into the Mesosphere and Lower Thermosphere (MLT), where ground-based sensors detected a tenfold increase in lithium at an altitude of 96 km about 20 hours after the rocket re-entered the atmosphere, according to the paper.

Lithium was selected for the study because of its considerable presence in spacecraft, both in lithium-ion batteries and lithium-aluminum alloy used in the construction of spacecraft. A single Falcon 9 upper stage, like the one that broke up over Poland and released the lithium plume, is estimated to contain 30 kg of lithium just in the alloy used in tank walls. 

By contrast, around 80 grams of lithium enter the atmosphere per day from cosmic dust particles, the researchers noted. 
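Putting the two figures above side by side, a quick back-of-the-envelope check shows that one stage’s tank-wall lithium alone equals roughly a year of the natural influx:

```python
stage_lithium_g = 30_000       # ~30 kg of lithium in one Falcon 9 upper stage's tank alloy
natural_influx_g_per_day = 80  # lithium entering the atmosphere daily from cosmic dust

days_equivalent = stage_lithium_g / natural_influx_g_per_day
assert days_equivalent == 375  # one re-entry ≈ a year's worth of natural lithium influx
```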

“This finding supports growing concerns that space traffic may pollute the upper atmosphere in ways not yet fully understood,” the paper notes, adding that the continued re-entry of spacecraft and satellites is of particular concern given how the composition of spacecraft is different from natural meteoroids.

“Satellites and rocket stages introduce engineered materials such as aluminium alloys, composite structures, and rare earth elements from onboard electronics, substances rarely found in natural extraterrestrial matter,” the paper explained. “The consequences of increasing pollution from re-entering space debris on radiative transfer, ozone chemistry, and aerosol microphysics remain largely unknown.”

The effect of spacecraft and satellite re-entry on Earth’s atmosphere has been a growing concern for astrophysicists like Harvard sky-watcher Jonathan McDowell, who has voiced concerns to The Register similar to those the European scientists raise in their paper.

[…]

Source: Euro boffins track lithium plume from Falcon 9 burn-up • The Register

Discord’s First Age-Verification ‘Experiment’ Alarms Hackers: Supplier “Persona” not only leaky, but also uses IDs for various purposes not age related

Last week, Discord users reported seeing prompts to submit personal information to Persona, a third-party age-verification service. As Discord commits to universal age-verification, the new measures have come under intense scrutiny after previous security failures. Now a trio of hacktivists say they’ve successfully breached Persona, getting a closer look at how the company uses submitted biometrics. They say their findings raise alarms beyond the possibility of leaks.

According to The Rage, Persona’s front-end security left a lot to be desired. Worse, however, were investigative findings that suggested Persona’s surveillance of the users whose data it collected was way more sprawling than originally believed.

“It was initially meant to be a passive recon investigation,” writes vmfunc, a cybersecurity researcher and one of the hackers, “that quickly turned into a rabbit hole deep dive into how commercial AI and federal government operations work together to violate our privacy every waking second.”

On top of finding it surprisingly easy to access data gathered by Persona, the research showed that faces and biometrics were not just being scanned for age verification, but flagged for suspicious behavior and bounced off watchlists as well. To some, particularly those who don’t worry about their face being deemed “suspicious,” this may not sound like an Orwellian level of intrusion, until you remember Persona’s full network.

Persona received $150 million in 2021 from the Founders Fund, a long-running tech investor group headed by Peter Thiel. Thiel’s main business, on top of palling around in Jeffrey Epstein’s emails and waiting for the antichrist, is Palantir, an intentionally ominously-named data brokering service that is currently peddling user information to support ICE raids. The findings of vmfunc and co.’s research don’t directly tether Persona and Discord’s operations to Palantir or Thiel, but it wouldn’t be conspiratorial to point out that all this data seems to be funnelling in similar directions.

Trust but verify

Persona has confirmed the breach, with CEO Rick Song corresponding with, and even thanking, the hackers for flagging the security exploit. This has not, however, tempered the hacktivists’ concerns about how user information is ultimately being used.

“Transparently, we are actively working on a couple of potential contracts which would be publicly visible if we move forward,” writes Christie Kim, chief operating officer at Persona, in an email regarding the security breach and speculation around Discord. “However, these engagements are strictly for workforce account security of government employees and do not include ICE or any agency within the Department of Homeland Security.”

After the alarm was initially raised about Persona, Discord claimed its work with the Thiel-backed firm was only temporary, and that it didn’t have new contracts with it moving forward. It also promised user info was being wiped from servers within seven days of being gathered.

Source: Discord’s First Age-Verification ‘Experiment’ Alarms Hackers

Man accidentally gains control of 7,000 DJI robot vacuums – with live camera feeds, microphone audio, maps, and status data

A software engineer’s earnest effort to steer his new DJI robot vacuum with a video game controller inadvertently granted him a sneak peek into thousands of people’s homes.

While building his own remote-control app, Sammy Azdoufal reportedly used an AI coding assistant to help reverse-engineer how the robot communicated with DJI’s remote cloud servers. But he soon discovered that the same credentials that allowed him to see and control his own device also provided access to live camera feeds, microphone audio, maps, and status data from nearly 7,000 other vacuums across 24 countries. The backend security bug effectively exposed an army of internet-connected robots that, in the wrong hands, could have turned into surveillance tools, all without their owners ever knowing.

The DJI Romo. Image: DJI

Luckily, Azdoufal chose not to exploit that. Instead, he shared his findings with The Verge, which quickly contacted DJI to report the flaw. While DJI tells Popular Science the issue has been “resolved,” the dramatic episode underscores warnings from cybersecurity experts, who have long warned that internet-connected robots and other smart home devices present attractive targets for hackers.

[…]

Source: Man accidentally gains control of 7,000 robot vacuums | Popular Science

The Stop Killing Games campaign will set up NGOs in the EU and US

The Stop Killing Games campaign is evolving into more than just a movement. In a YouTube video, the campaign’s creator, Ross Scott, explained that organizers are planning to establish two non-governmental organizations, one for the European Union and another for the US. According to Scott, these NGOs would allow for “long-term counter lobbying” when publishers end support for certain video games.

“Let me start off by saying I think we’re going to win this, namely the problem of publishers destroying video games that you’ve already paid for,” Scott said in the video. According to Scott, the NGOs will work on getting the original Stop Killing Games petition codified into EU law, while also pursuing more watchdog actions, like setting up a system to report publishers for revoking access to purchased video games.

The Stop Killing Games campaign started as a reaction to Ubisoft’s delisting of The Crew from players’ libraries. The controversial decision stirred up concerns about how publishers have the ultimate say on delisting video games. After crossing a million signatures last year, the movement’s leadership has been busy exploring the next steps.

According to Scott, the campaign leadership will meet with the European Commission soon, and is also working on a 500-page legal paper detailing some of the industry’s current controversial practices. In the meantime, the ongoing efforts appear to have prompted a change of heart at Ubisoft: the publisher has since updated The Crew 2 with an offline mode.

Source: The Stop Killing Games campaign will set up NGOs in the EU and US

IDMerit data breach: 1 billion records of personal data exposed in ID verification leak – which no-one except everyone saw coming.

Because IDMerit is an AI-powered KYC (Know Your Customer) provider, the data it collects is incredibly sensitive. The unsecured 1-terabyte database didn’t just leak passwords—it leaked the core personal identifiers used for your financial and digital life. The following structured data was left open for anyone to download:

  • Full names
  • Addresses
  • Post codes
  • Dates of birth
  • National IDs
  • Phone numbers
  • Genders
  • Email addresses
  • Telco metadata
  • Breach status and social profile annotations

The last data point – breach status and social profile annotations – could refer to a database identifier indicating whether the data originated from a data breach or a leaked database. However, at this point, the true meaning of the data point is unclear. The team noted that this specific data point was present only in some regions.

“At this scale, downstream risks include account takeovers, targeted phishing, credit fraud, SIM swaps, and long-tail privacy harms. Industry-wide, the case underlines how third-party identity vendors have become critical infrastructure and can become single points of catastrophic failure,” our team explained.

Who is IDMerit and How Did This Happen?

Our team believes the exposed database belongs to IDMerit, an AI-powered digital identity verification solutions provider. The company serves the fintech and financial services sectors, helping businesses with real-time verification tools. KYC (Know Your Customer) checks are a global norm requiring users to verify their identities when setting up various accounts.

Our researchers noticed the exposed instance on November 11th, 2025 and immediately contacted the company, which promptly secured the database. While there is no current evidence of malicious misuse, automated crawlers set up by threat actors constantly prowl the web for exposed instances, downloading them almost instantly once they appear.

Global data leak spans multiple countries

What’s most striking about the IDMerit data leak is its scale and global geography, with three billion records spanning over 20 countries. Several databases appeared to contain overlapping slices for the same country. However, our team believes most of the records were unique.

The country with the most exposed records was the United States, having over 203 million records leaked. The US was followed by Mexico (124M) and the Philippines (72M). Behind the first three, we see a trio of European nations: Germany (61M), Italy (53M), and France (53M).

[…]

Source: IDMerit data breach: 1 billion records of personal data exposed in KYC data leak | Cybernews

Scary stories that saw this coming a long, long time ago:

https://www.linkielist.com/?s=age+verification&submit=Search

Country That Censors (Criticism of the Prez via Lawfare, Books, Reporters in the White House, etc.) Is Working on a Site to Help Europeans Bypass Content Bans on Hate Speech

The U.S. State Department is reportedly working on an online portal that would allow people in Europe and other regions to access content banned by their governments. The move comes at a time when conservative figures like Elon Musk and J.D. Vance have railed against European attempts to clamp down on hate speech, terrorist propaganda, and revenge porn.

Reuters reported Wednesday, citing unnamed sources, that the initiative is intended to fight censorship and could include a virtual private network (VPN) feature.

The portal would reportedly be hosted at Freedom.gov. The site currently displays a landing page featuring a small animation of Paul Revere on horseback above the words “Freedom is Coming.” Smaller text below reads, “Information is power. Reclaim your human right to free expression. Get Ready.”

[…]

Reuters reported that the portal was expected to launch at the conference, but was delayed.

“We don’t comment on draft laws, and that’s what it is,” European Commission Spokesperson Thomas Regnier said when asked about the portal during a press briefing today. “Let me say that the Commission does not block access to websites. It’s up to national authorities to do this kind of thing. If a website breaches EU law or international law, talking about sites which promote hate speech, for example, or have terrorist content, obviously that does not belong in Europe. That’s why we have a regulation on digital services, the DSA, which protects freedom of expression.”

[…]

Ironically, The Guardian reported today that DOGE cuts to the State Department and U.S. Agency for Global Media’s Internet Freedom program have effectively gutted the program.

The initiative funded grassroots tools to help people bypass government internet controls worldwide. It distributed over $500 million over the past decade but issued no funding in 2025, according to The Guardian.

Source: The US Is Working on a Site to Help Europeans Bypass Content Bans on Hate Speech: Report

MS demonstrates laser writing in glass for dense, fast, efficient 10k+ year archival data storage

Long-term preservation of digital information is vital for safeguarding the knowledge of humanity for future generations. Existing archival storage solutions, such as magnetic tapes and hard disk drives, suffer from limited media lifespans that render them unsuitable for long-term data retention [1–3]. Optical storage approaches, particularly laser writing in robust media such as glass, have emerged as promising alternatives with the potential for increased longevity. Previous work [4–16] has predominantly optimized individual aspects such as data density but has not demonstrated an end-to-end system, including writing, storing and retrieving information. Here we report an optical archival storage technology based on femtosecond laser direct writing in glass that addresses the practical demands of archival storage, which we call Silica. We achieve a data density of 1.59 Gbit mm⁻³ in 301 layers for a capacity of 4.8 TB in a 120 mm square, 2 mm thick piece of glass. The demonstrated write regimes enable a write throughput of 25.6 Mbit s⁻¹ per beam, limited by the laser repetition rate, with an energy efficiency of 10.1 nJ per bit. Moreover, we extend the storage ability to borosilicate glass, offering a lower-cost medium and reduced writing and reading complexity. Accelerated ageing tests on written voxels in borosilicate suggest data lifetimes exceeding 10,000 years.
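For a sense of scale, the abstract's throughput and efficiency figures can be cross-checked with quick arithmetic (a rough sketch using only numbers stated in the abstract):

```python
# Back-of-the-envelope cross-check of the Silica abstract's figures.
capacity_bits = 4.8e12 * 8        # 4.8 TB stated capacity, in bits
write_rate = 25.6e6               # stated write throughput per beam, bits/s
energy_per_bit = 10.1e-9          # stated energy efficiency, joules/bit

seconds_per_beam = capacity_bits / write_rate
days_per_beam = seconds_per_beam / 86_400
write_energy_kj = capacity_bits * energy_per_bit / 1e3

print(f"{days_per_beam:.1f} days per beam, {write_energy_kj:.0f} kJ to fill")
# prints: 17.4 days per beam, 388 kJ to fill
```

Filling a platter with a single beam would take over two weeks, so practical write speeds imply many parallel beams; the total write energy of roughly 388 kJ (about 0.1 kWh for 4.8 TB) is strikingly modest.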

[…]

Source: Laser writing in glass for dense, fast and efficient archival data storage | Nature

Copilot summarises emails it has been specifically told not to read

Microsoft has some sort of apology (at the bottom) saying that Copilot permissions did not extend beyond the user’s permissions, but that merrily skips over the fact that Copilot permissions are not equal to user permissions. This is a governance issue: data ingested by Copilot is used as training data, and MS cannot guarantee that it will not be moved to a US server, where the data can be (and is!) read by the US government and given to competitors.

Microsoft 365 Copilot Chat has been summarizing emails labeled “confidential” even when data loss prevention policies were configured to prevent it.

Though there are data sensitivity labels and data loss prevention policies in place for email, Copilot has been ignoring those and talking about secret stuff in the Copilot Chat tab. It’s just this sort of scenario that has led 72 percent of S&P 500 companies to cite AI as a material risk in regulatory filings.

Redmond, earlier this month, acknowledged the problem in a notice to Office admins that’s tracked as CW1226324, as reposted by the UK’s National Health Service support portal. Customers are said to have reported the problem on January 21, 2026.

“Users’ email messages with a confidential label applied are being incorrectly processed by Microsoft 365 Copilot chat,” the notice says. “The Microsoft 365 Copilot ‘work tab’ Chat is summarizing email messages even though these email messages have a sensitivity label applied and a DLP policy is configured.”

Microsoft explains that sensitivity labels can be applied manually or automatically to files as a way to comply with organizational information security policies. These labels may function differently in different applications, the company says.

The software giant’s documentation makes clear that these labels do not function in a consistent way.

“Although content with the configured sensitivity label will be excluded from Microsoft 365 Copilot in the named Office apps, the content remains available to Microsoft 365 Copilot for other scenarios,” the documentation explains. “For example, in Teams, and in Microsoft 365 Copilot Chat.”

DLP, implemented through applications like Microsoft Purview, is supposed to provide policy support to prevent data loss.

“DLP monitors and protects against oversharing in enterprise apps and on devices,” Microsoft explains. “It targets Microsoft 365 locations, like Exchange and SharePoint, and locations you add, like on-premises file shares, endpoint devices, and non-Microsoft cloud apps.”

In theory, DLP policies should be able to affect Microsoft 365 Copilot and Copilot Chat. But that hasn’t been happening in this instance.

The root cause is said to be “a code issue [that] is allowing items in the sent items and draft folders to be picked up by Copilot even though confidential labels are set in place.”
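The failure mode described is a label filter enforced on the main retrieval path but skipped on a secondary one (here, Sent Items and Drafts). A minimal sketch of the bug class, with hypothetical data structures rather than Microsoft's actual Copilot or Purview code:

```python
# Sketch of the bug class: a sensitivity-label filter one code path skips.
# Hypothetical structures -- not Microsoft's actual code.

INBOX = [{"body": "q3 roadmap", "label": "confidential"},
         {"body": "lunch menu", "label": None}]
SENT = [{"body": "merger terms", "label": "confidential"}]

def allowed(msg):
    """DLP intent: confidential-labeled items never reach the assistant."""
    return msg["label"] != "confidential"

def build_context_buggy():
    ctx = [m["body"] for m in INBOX if allowed(m)]
    ctx += [m["body"] for m in SENT]        # bug: filter not applied here
    return ctx

def build_context_fixed():
    return [m["body"] for m in INBOX + SENT if allowed(m)]
```

The buggy path leaks "merger terms" into the chat context even though the label and the policy both exist; the fix is enforcing the same predicate on every source folder.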

In a statement provided to The Register after this story was filed, a Microsoft spokesperson said, “We identified and addressed an issue where Microsoft 365 Copilot Chat could return content from emails labeled confidential authored by a user and stored within their Draft and Sent Items in Outlook desktop. This did not provide anyone access to information they weren’t already authorized to see. While our access controls and data protection policies remained intact, this behavior did not meet our intended Copilot experience, which is designed to exclude protected content from Copilot access. A configuration update has been deployed worldwide for enterprise customers.” ®

Source: Copilot Chat bug bypasses DLP on ‘Confidential’ email • The Register

Survey of over 12,000 EU firms shows AI adoption increases labour productivity levels by 4% on average, with no evidence of reduced employment in the short run for medium + large firms

Artificial intelligence promises to reshape economies worldwide, but firm-level evidence on its effects in Europe remains scarce. This column uses survey data to examine how AI adoption affects productivity and employment across more than 12,000 European firms. The authors find that AI adoption increases labour productivity levels by 4% on average in the EU, with no evidence of reduced employment in the short run. The productivity benefits, however, are unevenly distributed. Medium and large firms, as well as firms that have the capacity to integrate AI through investments in intangible assets and human capital, experience substantially stronger productivity gains.

[…]

we find that on average, AI adoption levels are similar in the EU and the US. Notably, important heterogeneity emerges beneath the surface. Financially developed EU countries – such as Sweden and the Netherlands – match US adoption rates, with around 36% of firms using big data analytics and AI in 2024. In contrast, firms in less financially developed EU economies, such as Romania and Bulgaria, lag substantially behind, with adoption rates around 28% in 2024. Figure 1 illustrates this divide, showing how the gap has persisted and even widened in recent years.

Adoption also varies dramatically by firm size. Among large firms (more than 250 employees), 45% have deployed AI, compared with only 24% of small firms (10 to 49 employees). This echoes classic patterns in technology diffusion (Comin and Hobijn 2010): larger firms possess the resources, technical expertise, and economies of scale needed to absorb integration costs. AI-adopting firms are also systematically different – they invest more, are more innovative, and face tighter constraints in finding skilled workers. These patterns suggest that simply observing which firms adopt AI and comparing their performance could yield misleading results, as adoption itself is endogenous to firm characteristics.

Isolating AI’s causal effect

To credibly identify the causal effect of AI on productivity, we develop a novel instrumental variable strategy, inspired by Rajan and Zingales’ (1998) seminal work on financial dependence and growth. Their key insight was that industry characteristics measured in one economy – where they are arguably less affected by local distortions – can serve as an exogenous source of variation when applied to other countries.

We extend this logic to the firm level. For each EU firm in our sample, we identify comparable US firms – matched on sector, size, investment intensity, innovation activity, financing structure and management practices. We then assign the AI adoption rate of these matched US firms as a proxy for the EU firm’s exogenous exposure to AI. Because US firms operate under different institutional, regulatory and policy environments, their adoption patterns capture technological drivers that are plausibly independent of EU-specific factors. Rigorous propensity-score balancing tests confirm that our matched US and EU firms are virtually identical across key observable characteristics, validating the identification strategy. Our analysis draws on survey data from EIBIS combined with balance sheet data from Moody’s Orbis.
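The matched-US-firm instrument boils down to standard two-stage least squares. A toy sketch on synthetic data (the variable names and data-generating process are invented for illustration; only the estimator mirrors the paper's approach):

```python
# Toy two-stage least squares (2SLS) mirroring the matched-US-firm
# instrument idea. Every number here is made up for illustration.
import numpy as np

rng = np.random.default_rng(0)
n = 20_000
z = rng.integers(0, 2, n).astype(float)         # instrument: matched US firms' adoption
u = rng.normal(size=n)                          # unobserved firm quality (confounder)
d = 1.0 * z + 0.8 * u + rng.normal(0, 0.5, n)   # AI adoption intensity
y = 0.04 * d + 0.6 * u + rng.normal(0, 0.5, n)  # log productivity; true effect = 4%

def slope(x, target):
    """OLS slope of target on x (with intercept)."""
    X = np.column_stack([np.ones_like(x), x])
    return np.linalg.lstsq(X, target, rcond=None)[0][1]

beta_ols = slope(d, y)   # naive comparison: inflated by selection on u

# First stage: predict adoption from the instrument alone; second stage:
# use only that instrument-driven variation to explain productivity.
coef = np.linalg.lstsq(np.column_stack([np.ones(n), z]), d, rcond=None)[0]
d_hat = coef[0] + coef[1] * z
beta_iv = slope(d_hat, y)   # lands near the true 0.04
```

With the confounder present, naive OLS badly overstates the effect, while the instrumented estimate recovers something close to the true 4%: exactly the selection problem the column describes.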

Productivity gains without job losses

Our results reveal three key findings. First, AI adoption causally increases labour productivity levels by 4% on average in the EU. This effect is statistically robust and economically meaningful

[…]

Second, and crucially, we find no evidence that AI reduces employment in the short run. While naïve comparisons suggest AI-adopting firms employ more workers, this relationship disappears once we account for selection effects through our instrumental variable approach. The absence of negative employment effects, combined with significant productivity gains, points to a specific mechanism: capital deepening. AI augments worker output – enabling employees to complete tasks faster and make better decisions – without displacing labour

[…]

Third, AI’s productivity benefits are far from evenly distributed. Breaking down our results by firm size reveals that medium and large companies experience substantially stronger productivity gains than their smaller counterparts (see Figure 2). This differential effect reflects the role of scale in absorbing AI integration costs and accessing complementary assets – data infrastructure, technical talent, and organisational capacity to redesign workflows. The finding raises concerns about widening productivity gaps between firms and regions, particularly given Europe’s industrial structure, which is dominated by small and medium-sized enterprises.

[…]

Source: How AI is affecting productivity and jobs in Europe | CEPR

Leaked Email Suggests Ring Plans To Expand ‘Search Party’ Surveillance Beyond Dogs, surprising? Not really.

Ring’s AI-powered “Search Party” feature, which links neighborhood cameras into a networked surveillance system to find lost dogs, was never intended to stop at pets, according to an internal email from founder Jamie Siminoff obtained by 404 Media.

Siminoff told employees in early October, shortly after the feature launched, that Search Party was introduced “first for finding dogs” and that the technology would eventually help “zero out crime in neighborhoods.” The on-by-default feature faced intense backlash after Ring promoted it during a Super Bowl ad. Ring has since also rolled out “Familiar Faces,” a facial recognition tool that identifies friends and family on a user’s camera, and “Fire Watch,” an AI-based fire alert system.

A Ring spokesperson told the publication Search Party does not process human biometrics or track people.

Source: Leaked Email Suggests Ring Plans To Expand ‘Search Party’ Surveillance Beyond Dogs | Slashdot