Wind Power Is Taking Off In China– All The Way To 2000 M AGL

The S2000 at a much lower altitude than 2000 m.

At 2000 m above ground level (AGL), winds are stronger and much, much more consistent than they are at the surface. Even if the Earth were a perfect sphere, there’d be a sluggish boundary layer at the surface, but since it’s got all these interesting bumps and bits and bobs, it’s not just sluggish but horribly turbulent, too. Getting above that, as much as possible, is why wind turbines sit on big towers. Rather than build a really big tower, Beijing Lanyi Yunchuan Energy Technology Co. has gone for a more ambitious approach: an aerostat that takes power from the steady winds found at high altitude. Ambitiously called the Stratosphere Airborne Wind Energy System (SAWES), the megawatt-scale prototype has recently begun feeding into the grid in Yibin, Sichuan Province.

The name might be a bit ambitious, since its 2000 m test flight is only one tenth of the way to the stratosphere, but Yibin isn’t a bad choice for testing: as it is well inland, the S2000 prototype won’t have to contend with typhoons or other ocean storms. The prototype is arguably as ambitious as the name: its 12 flying turbines have a peak capacity of three megawatts. True, there are larger turbines in wind farms right now, but at 60 m in length and 40 m in diameter, the S2000 has a lot of room to grow before hitting any kind of limit, or even the record, for aerostats. We’re particularly interested in the double-hull construction – it would seem the ring of the outer gas bag would do a good job funneling and accelerating air into those turbines, but we’d love to see some wind tunnel testing or even CFD renderings of what’s going on in there.

A rear view shows the 12 turbines inside the double hull. It should guide air into the gap, but we wonder how much turbulence the trusses in there are making.

During its first test flight in January 2026, the system generated 385 kilowatt-hours of electricity over the course of 30 minutes. That means it averaged about 25% of peak capacity for the test, which is a good, safe start. Doubtless the engineers have a full suite of test flights planned to demonstrate the endurance and power production capabilities of this prototype. Longer flights at higher capacity may have already happened by the time you read this.
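
The back-of-the-envelope math here is simple enough to sketch, using only the figures quoted in the article (385 kWh over 30 minutes against a 3 MW peak rating):

```python
# Capacity-factor check for the S2000 test flight, using the
# figures quoted in the article.

energy_kwh = 385.0         # energy generated during the test
duration_h = 0.5           # 30-minute flight
peak_capacity_kw = 3000.0  # 3 MW rated peak

avg_power_kw = energy_kwh / duration_h  # energy / time = average power
capacity_factor = avg_power_kw / peak_capacity_kw

print(f"average power: {avg_power_kw:.0f} kW")       # 770 kW
print(f"capacity factor: {capacity_factor:.0%}")     # 26%
```

So the "about 25%" in the text is a slight round-down of roughly 26%.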

Flying wind turbines are by no means a new idea; a few years ago we featured this homemade kite generator, and the pros have been in on it too. Using helium instead represents an interesting design choice – on the plus side, it’s probably easier to control, and it obviously allows for large structures, but the downside is the added cost of the gas. It will be interesting to see how the idea develops.

We’re willing to bet it catches on faster than harvesting wind energy from trees.

All images from Beijing Lanyi Yunchuan Energy Technology Co., Ltd.

Source: Wind Power Is Taking Off In China– All The Way To 2000 M AGL | Hackaday

Microsoft starts to offer local versions of its Azure and 365 cloud products, thanks to the EU

Quote

Azure Local can now run fully disconnected with no cloud connectivity, Microsoft confirmed at the London leg of its AI tour.

The latest change comes amid heightened trade and geopolitical tensions between the US administration and Europe, with more customers in the trading bloc seeking reassurances about digital sovereignty.

Like rival US hyperscalers, Microsoft has rolled out initiatives in Europe in a bid to address jittery locals worried about the possibility – no matter how remote – of service interruption or their data being accessed by American officials under the US CLOUD Act.

In March, Microsoft completed its EU Data Boundary service, then added more features in November. Yet for a growing number of organizations in Europe, only infrastructure under their direct control will do.

Azure Local (formerly Azure Stack HCI) is Microsoft’s answer to those concerns. Using specialized hardware, Azure Local lets customers run workloads on-premises. However, it still needed to phone home occasionally – its management via Azure Arc ran in the cloud, and pulling the plug for more than 30 days resulted in reduced functionality.

By making disconnected operations available in Azure Local, organizations can “now run mission-critical infrastructure with Azure governance and policy control, with no cloud connectivity, optimizing continuity for sovereign, classified or isolated environments,” Microsoft said this week.

In other words, no more calling back to the mothership.

Microsoft has also made Microsoft 365 Local available (think Exchange Server, SharePoint Server, and Skype for Business Server) and announced Foundry Local (only available to “qualified customers”).

“This brings the richness of Microsoft’s enterprise AI capabilities to on-premises systems, complete with local inferencing and APIs that operate completely within customer-controlled data boundaries,” Microsoft said.

Microsoft’s sovereignty claims may ring hollow for some after it admitted in France last year that it could not guarantee sovereignty if it were compelled to hand data to the US government. The ability to completely pull the plug is therefore intended to reassure customers, even if the software remains proprietary and supplied by a US tech giant.

[…]

“Sovereignty is increasingly a requirement, and we welcome any new services, tools, and software that can run in European Cloud Infrastructure Services Providers’ datacenters and on their own platforms. We look forward to testing these products against our forthcoming CISPE Sovereign Cloud Services Framework to see if they qualify for a Sovereign Badge or a Resilient Badge.”

[…]

Microsoft is not the only tech giant concerned about sovereignty. Amazon Web Services made its European Sovereign Cloud generally available earlier this year, and Google is selling customers a variety of solutions, including Google Cloud Airgapped, which runs on servers fully disconnected from the internet.

Whether these efforts satisfy customers will hinge on implementation and on how sovereignty is defined. Being able to disconnect completely will satisfy some, though others may still worry that the software remains under Microsoft’s control.

Enterprises in Europe looking to local tech providers to run their entire stack were given an example of how to do it last week. Plug-and-play it is not, but the rewards are obvious.

Source: Worried Europeans can now cut Azure’s phone cord completely • The Register

Prediction market Kalshi accuses two of insider trading: a MrBeast editor and a Republican candidate

Quote

An editor who works for YouTube’s biggest creator, MrBeast, has been suspended from the prediction market platform Kalshi and reported to federal regulators for insider trading, Kalshi officials said on Wednesday. It’s the first time the company has publicly revealed the results of an investigation into market manipulation on the popular app.

The MrBeast employee, who Kalshi identified as Artem Kaptur in regulatory filings, traded around $4,000 on markets related to the streamer, the company said.

But Kalshi investigators say Kaptur was using proximity to the streamer as a way of trying to make quick cash. Using confidential information to manipulate markets is prohibited by Kalshi’s rules and could violate federal law.

“We investigated and found that the trader was employed as an editor for the streamer’s show and likely had access to material non-public information connected to his trading,” said Robert DeNault, the company’s head of enforcement.

Kalshi said the company froze the account in question, so Kaptur was not able to withdraw any profits. He was fined $20,000 and suspended from the platform for two years. Kalshi also said the case was reported to regulators at the Commodity Futures Trading Commission, or CFTC, which oversees prediction markets like Kalshi.

[…]

Another trading case involved a former political candidate

Kalshi also unveiled a case against a former longshot Republican candidate in the California governor’s race, Kyle Langford, who posted on X in May that he bet on himself to win the statewide contest. He encouraged others to do the same.

While it appeared to be a social media stunt, it was also a violation of Kalshi’s rules, and regulators said potentially a federal crime.

In a legal notice made public Wednesday, officials at Kalshi said that as a candidate, Langford was “a direct decision maker” for the market on the state’s governor’s race, prohibiting him from betting under internal guidelines against insider trading and market manipulation.

Kalshi banned Langford for five years from its platform and handed him a $2,200 fine.

“As a candidate in a race, you can (and probably should) follow and use Kalshi’s market forecast, but you should not trade on it,” Kalshi’s DeNault said.

[…]

Online prediction market platforms, such as Polymarket and Kalshi, have seen a surge in popularity during Trump’s second term. People can place bets on these platforms on wide-ranging issues such as what words people say at events, the outcome of elections or how much snow will fall in New York City.

The explosive growth of the industry is driven in part by the use of what many observers consider a legal loophole, which the Trump administration supports.

Instead of falling under the purview of state gambling laws, prediction markets are regulated in a more obscure way, as a type of “futures contract,” overseen by CFTC, which typically regulates bets on the future production of things like soybeans, corn and crude oil.

The Biden administration fought to stop prediction market apps from listing most types of contracts. It argued there was little public interest value in most of them, not to mention that they invite speculators to manipulate markets through insider trading.

[…]

Until recently, regulators had allowed a few dozen markets a year for futures trading. Now, there are more than 200,000 active prediction markets.

Prediction markets stirring increased insider trading fears

The burgeoning and controversial industry has run headlong into global affairs. In January, a trader made $400,000 in profit on Polymarket by placing a successful bet on the capture of the Venezuelan leader Nicolás Maduro before there was any public indication that would happen.

Earlier this month, Israeli authorities arrested several people and charged two on suspicion of using classified information to place bets about upcoming military operations in Iran on Polymarket.

Insider trading on Polymarket and Kalshi is prohibited by each platform’s rules, and is illegal under federal law, but experts say each company’s internal systems can only catch so much insider activity, which can take place by word of mouth or other means outside the prediction market apps.

Still, Kalshi says in the past year it has opened 200 investigations into insider trading, 12 of which are still ongoing.

[…]

Source: Kalshi accuses MrBeast editor of insider trading : NPR

The Two Key Villains of 2022’s Crypto Crash are Trying to Rewrite History

Quote

The crypto bubble that inflated through 2021 burst in 2022 with two defining failures.

In May, Terraform Labs’ algorithmic stablecoin UST lost its $1 peg, eventually leading to hyperinflation of the system’s underlying crypto collateral and wiping out an estimated $40 billion in crypto market value. The contagion triggered bankruptcies at a variety of crypto institutions, including Voyager Digital and BlockFi.

Months later, in November, crypto exchange giant FTX halted withdrawals and filed for bankruptcy. Customer funds had allegedly been diverted without consent to cover losses at sister trading firm Alameda Research, fund real estate, political donations, and other unapproved uses. The amount of money that was diverted is somewhat disputed, but what’s clear is that customers were unable to receive requested crypto withdrawals. Bitcoin bottomed below $20,000 amid the broader deleveraging, and reports later pointed to ties between the two crypto disasters.

Justice delivered partial accountability. Do Kwon, Terraform Labs co-founder, pleaded guilty to fraud and manipulation charges tied to misleading investors about UST’s stability. He received a 15-year prison sentence this past December, with victims testifying to the widespread destruction. Sam Bankman-Fried was convicted on seven counts, including wire fraud, securities fraud, and money laundering for the FTX misconduct. A judge sentenced him to 25 years in March 2024 and ordered $11 billion in forfeiture.

Both Bankman-Fried and lawyers associated with Terraform Labs are now working to recast their respective roles in the collapses.

Was FTX Actually Insolvent?

From prison, Bankman-Fried has posted on X claiming FTX was never technically insolvent. In a recent “10 Myths About Me & FTX” thread, he states the platform held more assets than liabilities, could have repaid customers in kind, and is now delivering 119-143% recoveries. He blames bankruptcy professionals for rushing a Chapter 11 filing, charging over $1 billion in fees, and dismantling the estate instead of allowing an orderly wind-down.

Most crypto industry insiders, among whom Bankman-Fried is viewed as the ultimate villain, dismiss this general argument. If assets were truly sufficient, withdrawals would not have been frozen. New York University Stern School of Business Adjunct Professor Austin Campbell noted that solvency for a crypto exchange means holding customer assets in the exact form and availability they expect, adding, “FTX did not have that. They were insolvent.” Galaxy Head of Firmwide Research Alex Thorn added that diverting deposits into illiquid bets against customers’ wishes amounts to theft, making the platform insolvent the moment redemptions failed.
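
Campbell’s “in-kind” solvency test can be sketched in a few lines: an exchange passes only if, for every asset, it holds at least as much as it owes customers in that same asset, regardless of the nominal value of whatever else it holds. The numbers below are purely hypothetical, for illustration:

```python
# A minimal sketch of the "in-kind" solvency test described above.
# All figures are hypothetical.

def in_kind_solvent(holdings: dict, liabilities: dict) -> bool:
    """True only if every customer liability is backed one-for-one
    in the same asset the customer deposited."""
    return all(holdings.get(asset, 0) >= owed
               for asset, owed in liabilities.items())

liabilities = {"BTC": 100.0, "USD": 5_000_000.0}

# Deposits held in kind: solvent under this definition.
print(in_kind_solvent({"BTC": 105.0, "USD": 5_100_000.0}, liabilities))  # True

# Same rough nominal value, but the BTC was diverted into other bets:
# insolvent the moment customers ask for their coins back.
print(in_kind_solvent({"BTC": 10.0, "USD": 9_000_000.0}, liabilities))   # False
```

On this definition, “more assets than liabilities” in aggregate dollar terms is beside the point, which is the crux of the insiders’ rebuttal.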

The bankruptcy process may indeed have carried its own inefficiencies, with creditors flagging excessive legal fees that neared $1 billion and rushed asset sales. However, at the end of the day, misusing customer deposits without approval was still the original sin.

Bankman-Fried has also used his public posts to court a pardon from President Trump. The White House told Fortune this week that no pardon is in the works or planned.

Terraform Labs Blames Insider Traders Instead of Their Broken Stablecoin Model

In the matter of the other major collapse of 2022, Terraform Labs’ liquidation administrator is now suing trading firm Jane Street, alleging insider trading accelerated the UST depeg and LUNA disaster. However, while opportunistic or informed trading may have occurred as the run began, the fundamental issue was the broken stablecoin design. As the pseudonymous crypto advisor and strategist Hasu put it:

Let’s be extremely clear. UST failed because it was a ponzi scheme. It was a criminal enterprise that lured depositors with promise of high yield, paid from the deposits of new entrants. There is no possible universe where it didn’t go broke.

According to the new complaint, Jane Street allegedly obtained non-public information from Terraform insiders through private communication channels established by its employee and former Terraform member Bryce Pratt, who maintained contact with former colleagues, including a software engineer and the head of business development. A specific allegation involves May 7, 2022, when Terraform Labs withdrew 150 million UST from the Curve3pool without any public announcement; within 10 minutes, a wallet linked to Jane Street withdrew an additional 85 million UST from the same pool.

Bitcoin eventually recovered from the 2022 lows and reached new all-time highs near $125,000 in October 2025. But the rest of the crypto market has not followed suit as strongly as in past cycles, where altcoins have routinely outperformed bitcoin by wide margins during bull runs. For example, Ethereum, which was heavily marketed last cycle for DeFi dominance and its shift toward “ultrasound money,” currently trades far lower against bitcoin when compared to previous cycles, underscoring a growing divide between bitcoin and more speculative blockchain use cases.

A few crypto names have outperformed recently, but most exhibit heavy centralization in their associated tech stacks, reliance on centralized stablecoins, or both. Indeed, conversation around non-Bitcoin crypto increasingly centers on stablecoins, which in many ways operate more like centralized fintech products than open protocols. Earlier this week, it was revealed that Meta plans to implement stablecoin integration in their products later this year. Notably, the company previously attempted to create its own digital currency back in 2019 before regulators applied pressure and slowed things down.

Bitcoin has faced its own pressure recently, dropping roughly 50% from the October peak. The drop began with an October 10th deleveraging event driven more by smaller altcoins than bitcoin itself, echoing the post-Terra unwind, according to CNBC. Narratives questioning bitcoin’s “digital gold” status have also resurfaced as physical gold outperformed amid geopolitical strains, including tensions over Greenland. That said, Bitcoin encountered similar doubts after its March 2020 crash at the start of COVID before eventually experiencing another boom during the pandemic.

Source: The Two Key Villains of 2022’s Crypto Crash are Trying to Rewrite History

Same Poop, Different Results: At-Home Gut Health Tests Are Wildly Inconsistent

Quote

The bacteria that live inside our digestive tract undoubtedly play a vital part in our health. But buyer beware of companies that claim to have deciphered the gut microbiome. Research out today shows that no two at-home tests will tell you the same thing.

Government scientists sent standardized fecal samples to seven different gut health testing companies. The companies returned results that varied from one another, sometimes dramatically, while one company’s tests couldn’t conclusively decide if the same samples belonged to a healthy microbiome or not. The findings indicate that customers shouldn’t put too much stock in these tests, at least right now, the researchers say.

“Our results demonstrate the need for standards to ensure analytical validity and consumer confidence,” the authors wrote in their paper, published Thursday in Communications Biology.

Not quite there yet

Exciting as the field of gut health is, it’s very much in its infancy. We’re still not quite sure exactly what makes for a healthy mix of bacteria in our guts, much less how to reliably fix an unhealthy microbiome (it’s likely there are many different combinations of bacteria that could be “healthy”). And we’re still trying to untangle the complex interactions between our gut bacteria and various health conditions.

This uncertainty hasn’t stopped several companies from entering the direct-to-consumer industry, however. While some may be cautious in their advertising, others have claimed their tests can tell whether a person’s microbiome is healthy, and they might even sell products that will supposedly restore a dysfunctional one. Many scientists have already called for tighter regulation of these tests. Researchers at the National Institute of Standards and Technology, a division of the U.S. Department of Commerce, and others sought to gauge the reliability of these tests across different companies.

[…]

Source: Same Poop, Different Results: At-Home Gut Health Tests Are Wildly Inconsistent, Study Finds

Open Source Endowment aims to raise big pile of money

Quote

Open source projects, ever short of funding, have a potential new source of revenue in the form of the Open Source Endowment (OSE).

The organization describes itself as “the world’s first endowment fund for open source software.”

There are certainly other organizations that help fund open source software, such as Open Collective, Open Source Collective, and the Rust Foundation’s Maintainers Fund, not to mention organizations like the Software Freedom Conservancy, which provides legal and infrastructure support to open source projects. Open source developers may also be fortunate enough to receive contributions from individuals, companies (when not passing the buck), and government-sponsored initiatives like Germany’s Sovereign Tech Fund.

But OSE aspires specifically to build a big pile of cash – an endowment – that it will dole out to deserving open source projects.

It’s certainly needed. In 2023, Denis Pushkarev, maintainer of the widely used core-js library, vented his frustration with the fact that users of his software seldom offer financial support. “Free open source software is fundamentally broken,” he said.

The year before that, Christofer Dutz – creator of Apache PLC4X – lamented uncompensated use of his software. Earlier in 2022, Google talked up the need to support critical open source infrastructure, citing the log4j vulnerability.

But concerns about the sustainability of open source go back further still. Two years after the 2014 Heartbleed vulnerability – a dangerous flaw in OpenSSL – a Ford Foundation report noted that the OpenSSL project is critical internet infrastructure yet had just one full-time maintainer and earned less than $2,000 per year in donations.

As OSE points out, 95 percent of codebases rely on open source software, each of which has an average of 500 open source components. And yet 86 percent of open source contributors receive no payment for their work.

OSE founding chairman Konstantin Vinogradov, a venture capital investor, previously said he wanted to replicate the funding model that has sustained universities.

And he reiterated that aspiration in a Hacker News post announcing OSE.

Universities and the open source community, he argues, share reputation-based culture and functions, working together to create valuable ideas for the benefit of the public, educating each other, and commercializing only a portion of what’s produced.

“For universities, humanity has just two sustainable funding models: public spending or private endowments,” Vinogradov explained. “Government support won’t work for OSS at scale – it’s too globally decentralized. And yet nobody had built an OSS-focused endowment before. After understanding why, I started building one together with other OSS folks.”

Vinogradov said the OSE, a US 501(c)(3) tax-exempt charity, aims to make open source development more sustainable through a community-driven endowment. Donations will be invested and only investment income will be disbursed through grants – the principal funds will remain invested in the hope of growth.
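
The endowment mechanics described here are easy to illustrate: the principal stays invested, and only a slice of the investment return is disbursed as grants. The 7% return and 4% payout rates below are assumptions chosen for illustration, not figures from OSE:

```python
# Illustrative sketch of the endowment model: principal stays invested,
# grants are paid only out of returns. Return and payout rates are
# assumed values, not OSE's actual figures.

def simulate_endowment(principal: float, years: int,
                       annual_return: float = 0.07,
                       payout_rate: float = 0.04):
    """Yield (year, grants_paid, fund_value) for each year."""
    for year in range(1, years + 1):
        grants = principal * payout_rate               # disbursed as grants
        principal *= 1 + annual_return - payout_rate   # the rest compounds
        yield year, grants, principal

for year, grants, fund in simulate_endowment(700_000, 3):
    print(f"year {year}: grants ${grants:,.0f}, fund ${fund:,.0f}")
```

Starting from the fund’s current ~$700,000, a 4% payout would disburse about $28,000 in the first year while the fund itself keeps growing, which is why building the principal up front matters so much.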

Presently, the fund stands at around $700,000, thanks to contributions from more than 60 founding donors, including the founders of ClickHouse, curl, Elastic, Gatsby, HashiCorp, n8n, Nginx, Pydantic, Supabase, and Vue.js.

Donations go directly to the fund, and those who give over $1,000 can become OSE Members, which includes certain rights to participate in OSE governance.

The group has detailed its grant selection process on the OSE website and in its GitHub repository.

According to Vinogradov, “OSE won’t give money for commercial product development – it is dedicated to supporting existing highly-used nonprofit and independent OSS.”

Source: Open Source Endowment aims to raise big pile of money • The Register

Common Corpus, an open training set for AI, goes global – and so should support for it – Walled Culture

Quote

As many of the AI stories on Walled Culture attest, one of the most contentious areas in the latest stage of AI development concerns the sourcing of training data. To create high-quality large language models (LLMs) massive quantities of training data are required. In the current genAI stampede, many companies are simply scraping everything they can off the Internet. Quite how that will work out in legal terms is not yet clear. Although a few court cases involving the use of copyright material for training have been decided, many have not, and the detailed contours of the legal landscape remain uncertain.

However, there is an alternative to this “grab it all” approach. It involves using materials that are either in the public domain or released under a “permissive” licence that allows LLMs to be trained on them without any problems. There’s plenty of such material online, but its scattered nature puts it at a serious disadvantage compared to downloading everything without worrying about licensing issues. To address that, the Common Corpus was created and released just over a year ago by the French startup Pleias. A press release from the AI Alliance explains the key characteristics of the Common Corpus:

Truly Open: contains only data that is permissively licensed and provenance is documented

Multilingual: mostly representing English and French data, but contains at least 1[billion] tokens for over 30 languages

Diverse: consisting of scientific articles, government and legal documents, code, and cultural heritage data, including books and newspapers

Extensively Curated: spelling and formatting have been corrected in digitized texts, harmful and toxic content has been removed, and content of low educational value has also been removed.

There are five main categories of material: OpenGovernment, OpenCulture, OpenScience, OpenWeb, and OpenSource:

OpenGovernment contains Finance Commons, a dataset of financial documents from a range of governmental and regulatory bodies. Finance Commons is a multimodal dataset, including both text and PDF corpora. OpenGovernment also contains Legal Commons, a dataset of legal and administrative texts. OpenCulture contains cultural heritage data like books and newspapers. Many of these texts come from the 18th and 19th centuries, or even earlier.

OpenScience data primarily comes from publicly available academic and scientific publications, which are most often released as PDFs. OpenWeb contains datasets from YouTube Commons, a dataset of transcripts from public domain YouTube videos, and websites like Stack Exchange. Finally, OpenSource comprises code collected from GitHub repositories which were permissibly licensed.

The initial release contained over 2 trillion tokens – the usual way of measuring the volume of training material, where tokens can be whole words or parts of words. A significant recent update of the corpus has taken that to over 2.267 trillion tokens. Just as important as the greater size is the wider reach: there are major additions of material from China, Japan, Korea, Brazil, India, Africa and South-East Asia. Specifically, the latest release contains data for eight languages with more than 10 billion tokens (English, French, German, Spanish, Italian, Polish, Greek, Latin) and 33 languages with more than 1 billion tokens. Because of the way the dataset has been selected and curated, it is possible to train LLMs on fully open data, which leads to auditable models. Moreover, as the original press release explains:

By providing clear provenance and using permissibly licensed data, Common Corpus exceeds the requirements of even the strictest regulations on AI training data, such as the EU AI Act. Pleias has also taken extensive steps to ensure GDPR compliance, by developing custom procedures to enable personally identifiable information (PII) removal for multilingual data. This makes Common Corpus an ideal foundation for secure, enterprise-grade models. Models trained on Common Corpus will be resilient to an increasingly regulated industry.

Another advantage for many users is that material with high “toxicity scores” has already been removed, thus ensuring that any LLMs trained on the Common Corpus will have fewer problems in this regard.

The Common Corpus is a great demonstration of the power of openness and permissive copyright licensing, and how they bring benefits that other approaches can’t match. For example: “Common Corpus makes it possible to train models compatible with the Open Source Initiative’s definition of open-source AI, which includes openness of use, meaning use is permitted for ‘any purpose and without having to ask for permission’. ” That fact, along with the multilingual nature of the Common Corpus, would make the latest version a great fit for any EU move to create “public AI” systems, something advocated on this blog a few months back. The French government is already backing the project, as are other organisations supporting openness:

The Corpus was built up with the support and concerted efforts of the AI Alliance, the French Ministry of Culture as part of the prefiguration of the service offering of the Alliance for Language technologies EDIC (ALT-EDIC).

This dataset was also made in partnership with Wikimedia Enterprise and Wikidata/Wikimedia Germany. We’re also thankful to our partner Libraries Without Borders for continuous assistance on extending low resource language support.

The corpus was stored and processed with the generous support of the AI Alliance, Jean Zay (Eviden, Idris), Tracto AI, Mozilla.

The unique advantages of the Common Corpus mean that more governments should be supporting it as an alternative to proprietary systems, which generally remain black boxes in terms of where their training data comes from. Publishers, too, would be wise to fund it, since it offers a powerful resource explicitly designed to avoid some of the thorniest copyright issues plaguing the generative AI field today.

Source: Common Corpus, an open training set for AI, goes global – and so should support for it – Walled Culture