Amsterdam, Rome, Paris, 23 October 2025 – Airbus (stock exchange symbol: AIR), Leonardo (Borsa Italiana: LDO) and Thales (Euronext Paris: HO) have signed a Memorandum of Understanding (“MoU”) aimed at combining their respective space activities into a new company.
By joining forces, Airbus, Leonardo and Thales aim to strengthen Europe’s strategic autonomy in space, a major sector that underpins critical infrastructure and services related to telecommunications, global navigation, earth observation, science, exploration and national security. This new company also intends to serve as the trusted partner for developing and implementing national sovereign space programmes.
This new company will pool, build and develop a comprehensive portfolio of complementary technologies and end-to-end solutions, from space infrastructure to services (excluding space launchers). It will accelerate innovation in this strategic market, in order to create a unified, integrated and resilient European space player, with the critical mass to compete globally and grow on the export markets.
[…]
Airbus will contribute its Space Systems and Space Digital businesses, coming from Airbus Defence and Space.
Leonardo will contribute its Space Division, including its shares in Telespazio and Thales Alenia Space.
Thales will mainly contribute its shares in Thales Alenia Space, Telespazio, and Thales SESO.
The combined entity will employ around 25,000 people across Europe. With an annual turnover of about €6.5 billion (end of 2024, pro forma) and an order backlog representing more than three years of projected sales, this new company will form a robust, innovative and competitive entity worldwide.
Ownership of the new company will be shared among the parent companies, with Airbus, Leonardo and Thales owning respectively 35%, 32.5% and 32.5% stakes. It will operate under joint control, with a balanced governance structure among shareholders.
Apple could face claims estimated at around £1.5 billion after it lost a collective case in the UK arguing that its closed systems for apps resulted in overcharging businesses and consumers.
The ruling from a Competition Appeal Tribunal responded to the case brought on behalf of 36 million UK iPhone and iPad users, both consumers and enterprise customers.
Apple said it disagreed with the ruling [PDF] and planned to appeal.
The court found Apple had imposed overcharges through its iOS app distribution services, and that its in-app payment service charged developers a headline commission rate of 30 percent.
In a unanimous judgment, the court found Apple overcharged developers as a result of its behavior in the iOS app distribution services market and the iOS in-app payment services market. There was also an overcharge resulting from the extent to which developers passed on the costs to iPhone and iPad users.
The court found those represented in the case, led by academic Dr Rachael Kent, could be eligible for 8 percent interest on damages awarded.
Speaking to the BBC, Kent said the decision was a “landmark victory, not only for App Store users, but for anyone who has ever felt powerless against a global tech giant.”
In a statement, Apple said the ruling’s view of its software marketplace was mistaken. It argued the App Store was good for UK businesses and consumers because it offered a space for developers to sell their work and somewhere users could choose from millions of software products.
“This ruling overlooks how the App Store helps developers succeed and gives consumers a safe, trusted place to discover apps and securely make payments. The App Store faces vigorous competition from many other platforms – often with far fewer privacy and security protections,” the tech giant said.
Which is quite funny for Apple to say, because it fights tooth and nail to ensure that there is no competition for the App Store. Even when the EU tells Apple it must enable alternate app stores or payment providers, it rolls around the floor like a child in a tantrum hoping to avoid the inevitable:
The feds on Thursday charged alleged mafia associates and current and former National Basketball Association players and coaches with running rigged poker games and illegal sports betting.
Starting around 2019, a group of alleged mafia associates began operating a high-stakes poker con at several locations around Manhattan, according to an indictment filed by the US Attorney for the Eastern District of New York. The card cheating scheme relied on X-ray tables, rigged card shufflers, and glasses capable of reading hidden card markings.
Authorities say they arrested 31 individuals across 11 states, including members and associates of the Bonanno, Gambino, and Genovese organized crime families of La Cosa Nostra.
Chauncey Billups, the head coach of the Portland Trail Blazers, and former Cleveland Cavaliers player and assistant coach Damon Jones were also arrested.
Billups’ attorney Chris Heywood told ESPN in a statement that his client did not do what the government claims and that Billups intends to fight the charges.
“For years, these individuals allegedly hosted illegal poker games where they used sophisticated technology and enlisted current and former NBA players to cheat people out of millions of dollars,” said NYPD Commissioner Jessica S. Tisch in a statement.
“This complex scheme was so far reaching that it included members from four of the organized crime families, and when people refused to pay because they were cheated, these defendants did what organized crime has always done: they used threats, intimidation, and violence.”
As described in the indictment, the victimized card players believed they were participating in fair but illegal poker games against other players. However, the games were rigged, resulting in a loss of at least $7 million since the scheme’s inception. The NBA celebrities supposedly served as “Face Cards” to attract players.
“The defendants and their co-conspirators, who constituted the remaining participants purportedly playing in the poker games, worked together on cheating teams … that used advanced wireless technologies to read the cards dealt in each poker hand and relay that information to the defendants and co-conspirators participating in the illegal poker games,” the indictment claims.
The cheating scheme allegedly employed compromised shuffling machines that could read the cards in the deck and transmit this information to an off-site relayer who messaged the details back to a player at the table, referred to as the “Quarterback” or “Driver.” This individual then used prearranged signals to communicate with co-conspirators at the table, all to win poker games against unsuspecting victims.
The defendants also allegedly employed “a chip tray analyzer (essentially, a poker chip tray that also secretly read all cards using hidden cameras), an X-ray table that could read cards face down on the table, and special contact lenses or eyeglasses that could read pre-marked cards.”
[…]
Online poker games have long presented a risk of cheating and player collusion, but this incident reaffirms that in-person games, where collusion has always been a possibility, can also be subverted through technology.
“I think the sophistication in the cheating technologies is far greater than the sophistication in detection, and it’s not very common for people to even have expensive detection technology,” said Rubin. “You’re not, as a player, equipped to compete in a way with the people that have the resources to cheat like that.”
Major Las Vegas casinos like the MGM Grand or Caesars Palace, Rubin said, put a lot of money and effort into protecting games at their facilities and have an interest in preventing cheating scandals from tarnishing their brands. “You’re probably safe playing in big, brand name casinos,” he said. “But at the end of the day, you know, it’s poker and if somebody wants to try hard enough and spends money to do it, they may find a way to cheat.”
[…]
The second of the two indictments alleged that six defendants, including Miami Heat guard Terry Rozier and former NBA assistant coach and player Damon Jones (named in the first indictment), colluded to share inside information and to alter in-game behavior to influence the outcome of bets on NBA games.
New NATO member Sweden is boosting support to Ukraine, with a letter of intent signed this week on the sale of up to 150 Gripen fighter jets. Shortly after joining NATO in March 2024 and bringing an end to two centuries of military non-alignment, Sweden approved a €989 million military support package that included Archer self-propelled artillery systems and long-range drones.
Its latest contribution to the war effort is Glimt, an innovative project launched by the Swedish Defence Research Agency (FOI) earlier this year. Glimt is an open platform that relies on the theory of “crowd forecasting”: a method of making predictions based on surveying a large and diverse group of people and taking an average. “Glimt” is a Swedish word for “a glimpse” or “a sudden insight”. The theory posits that the average of all collected predictions produces correct results with “uncanny accuracy”, according to the Glimt website. Such “collective intelligence” is used today for everything from election results to extreme weather events, Glimt said.
[…]
Group forecasting allows for a broad collection of information while avoiding the cognitive bias that often characterises intelligence services. Each forecaster collects and analyses the available information differently to reach the most probable scenario and can add a short comment to explain their reasoning. The platform also encourages discussion between members so they can compare arguments and alter their positions.
Available in Swedish, French and English, the platform currently has 20,000 registered users; each question attracts an average of 500 forecasters. Their predictions are then fed into statistical algorithms that cross-reference the data, in particular the track record of each user’s previous answers. The most reliable users have a stronger influence on the results, which reinforces the reliability of the collective intelligence.
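The reliability weighting described above can be sketched as a weighted average of individual probability forecasts. This is a minimal illustration of the general idea, not Glimt’s actual algorithm; the function, the scoring scheme, and the field names are assumptions.

```python
def aggregate_forecasts(forecasts, reliability):
    """Combine probability forecasts into one collective prediction,
    weighting each forecaster by a reliability score (e.g. derived
    from their accuracy on previously resolved questions).

    forecasts:   dict of forecaster id -> predicted probability (0..1)
    reliability: dict of forecaster id -> weight (0..1)
    """
    total_weight = sum(reliability[f] for f in forecasts)
    if total_weight == 0:
        # No track record yet: fall back to a plain average
        return sum(forecasts.values()) / len(forecasts)
    return sum(p * reliability[f] for f, p in forecasts.items()) / total_weight

# Three forecasters; the most reliable one pulls the result toward 0.8
collective = aggregate_forecasts(
    {"a": 0.8, "b": 0.5, "c": 0.2},
    {"a": 0.9, "b": 0.5, "c": 0.1},
)
print(round(collective, 2))  # 0.66
```

The fallback to a plain average matters early on, before any questions have resolved and every forecaster’s weight is still zero.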
When the microcomputer first landed in homes some forty years ago, it came with a simple freedom—you could run whatever software you could get your hands on. Floppy disk from a friend? Pop it in. Shareware demo downloaded from a BBS? Go ahead! Dodgy code you wrote yourself at 2 AM? Absolutely. The computer you bought was yours. It would run whatever you told it to run, and ask no questions.
Today, that freedom is dying. What’s worse, it’s happening so gradually that most people haven’t noticed we’re already halfway into the coffin.
There are always security risks when running code from untrusted sources. The stakes are higher these days when our computers are the gateways to our personal and financial lives.
The latest broadside in the war against platform freedom has been fired. Google recently announced new restrictions on APK installations. Starting in 2026, Google will tighten the screws on sideloading, making it increasingly difficult to install applications that haven’t been blessed by the Play Store’s approval process. It’s being sold as a security measure, but it will make it far more difficult for users to run apps outside the official ecosystem. There is a security argument to be made, of course, because suspect code can cause all kinds of havoc on a device loaded with a user’s personal data. At the same time, security concerns have a funny way of aligning perfectly with ulterior corporate motives.
It’s a change of tack for Google, which has always taken the more permissive approach to its smartphone platform. Contrast it with Apple, which has sold the iPhone as a fully locked-down device since day one. Google’s position was that if you own your phone, you can do what you want with it. Now, it seems the company is changing its mind ever so slightly. There will still be workarounds, like signing up as an Android developer and handing all your personal ID to Google, but it’s a loss for freedom whichever way you look at it.
Beginnings
Sony put a great deal of engineering into the PlayStation to ensure it would only read Sony-approved discs. Modchips sprung up as a way to get around that problem, albeit primarily so owners could play cheaper pirated games. Credit: Libreleah, CC BY-SA 4.0
The walled garden concept didn’t start with smartphones. Indeed, video game consoles were a bit of a trailblazer in this space, with manufacturers taking this approach decades ago. The moment gaming became genuinely profitable, console manufacturers realized they could control their entire ecosystem. Proprietary formats, region systems, and lockout chips were all valid ways to ensure companies could levy hefty licensing fees from developers. They locked down their hardware tighter than a bank vault, and they did it for one simple reason—money. As long as the manufacturer could ensure the console wouldn’t run unapproved games, developers would have to give them a kickback for every unit sold.
By and large, the market accepted this. Consoles were single-purpose entertainment machines. Nobody expected to run their own software on a Nintendo, after all. The deal was simple—you bought a console from whichever company, and it would only play whatever they said was okay. The vast majority of consumers didn’t care about the specifics. As long as the console in question had a decent library, few would complain.
Nintendo created the 10NES copy protection system to ensure its systems would only play games approved by the company itself, in an attempt to exert quality control after the 1983 North American video game crash. Credit: Evan-Amos, public domain
There was always an underground—adapters to work around region locks, and bootleg games that relied on various hacks—with varying popularity over the years. Often, it was high prices that drove this innovation—think of the many PlayStation mod chips sold to play games off burnt CDs to avoid paying retail.
At the time, this approach largely stayed within the console gaming world. It didn’t spread to actual computers because computers were tools. You didn’t buy a PC to consume content someone else curated for you. You bought it to do whatever you wanted—write a novel, make a spreadsheet, play games, create music, or waste time on weird hobby projects. The openness wasn’t a bug, or even something anybody really thought about. It was just how computers were. And it wasn’t just a PC thing—every computer on the market let you run what you wanted, and the nascent tablets and PDAs of the 1990s operated in just the same way.
Then came the iPhone, and with it, the App Store. Apple took the locked-down model and applied it to a computer you carry in your pocket. The promise was that you’d only get apps that were approved by Apple, with the implicit guarantee of a certain level of quality and functionality.
Apple is credited with pioneering the modern smartphone, and in turn, the walled garden that is the App Store. Credit: Apple
It was a bold move, and one that raised eyebrows among developers and technology commentators. But it worked. Consumers loved having access to a library of clean and functional apps, built right into the device. Meanwhile, they didn’t really care that they couldn’t run whatever kooky app some random on the Internet had dreamed up.
Apple sold the walled garden as a feature. It wasn’t ashamed or hiding the fact—it was proud of it. It promised apps with no viruses and no risks; a place where everything was curated and safe. The iPhone’s locked-down nature wasn’t a restriction; it was a selling point.
But it also meant Apple controlled everything. Every app paid Apple’s tax, and every update needed Apple’s permission. You couldn’t run software Apple didn’t approve, full stop. You might have paid for the device in your pocket, but you had no right to run what you wanted on it. Someone in Cupertino had the final say over that, not you.
When Android arrived on the scene, it offered the complete opposite concept to Apple’s control. It was open source, and based on Linux. You could load your own apps, install your own ROMs and even get root access to your device if you wanted. For a certain kind of user, that was appealing. Android would still offer an application catalogue of its own, curated by Google, but there was nothing stopping you just downloading other apps off the web, or running your own code.
Sadly, over the years, Android has been steadily walking back that openness. The justifications are always reasonable on their face. Security updates need to be mandatory because users are terrible at remembering to update. Sideloaded apps need to come with warnings because users will absolutely install malware if you let them just click a button. Root access is too dangerous because it puts the security of the whole system and other apps at risk. But inch by inch, it gets harder to run what you want on the device you paid for.
Windows Watches and Waits
The walled garden has since become a contagion, with platforms outside the smartphone space considering the tantalizing possibilities of locking down. Microsoft has been testing the waters with the Microsoft Store for years now, with mixed results. Windows 10 tried to push it, and Windows 11 is trying harder. The store apps are supposedly more secure, sandboxed, easier to manage, and straightforward to install with the click of a button.
Microsoft has tried multiple times to sell versions of Windows that are locked to exclusively run apps from the Microsoft Store. Thus far, these attempts have been commercial failures.
Microsoft hasn’t pulled the trigger on fully locking down Windows. It’s flirted with the idea, but has seen little success. Windows RT and Windows 10 S were both locked to only run software signed by Microsoft—each found few takers. Desktop Windows remains stubbornly open, capable of running whatever executable you throw at it, even if it throws up a few more dialog boxes and question marks with every installer you run these days.
How long can this last? One hopes a great while yet. A great many users still expect a computer—a proper one, like a laptop or desktop—to run whatever mad thing they tell it to. However, there is a growing userbase whose first experience of computing was in these locked-down tablet and smartphone environments. They aren’t so demanding about little things like proper filesystem access or the ability to run unsigned code. They might not blink if that goes away.
For now, desktop computing has the benefit of decades of tradition built in to it. Professional software, development tools, and specialized applications all depend on the ability to install whatever you need. Locking that down would break too many workflows for too many important customers. Masses of scientific users would flee to Linux the moment their obscure datalogger software couldn’t afford an official license to run on Windows. Industrial users would baulk at having to rely on a clumsy Microsoft application store when bringing up new production lines.
Apple had the benefit that it was launching a new platform with the iPhone; one for which there were minimal expectations. In comparison, Microsoft would be climbing an almighty mountain to make the same move on the PC, where the culture is already so established. Apple could theoretically make moves in that direction with OS X and people would perhaps be less surprised, but it would still be the company making a major shift in customer expectations of the product.
Here’s what bothers me most: we’re losing the idea that you can just try things with computers. That you can experiment. That you can learn by doing. That you can take a risk on some weird little program someone made in their spare time. All that goes away with the walled garden. Your neighbour can’t just whip up some fun gadget and share it with you without signing up for an SDK and paying developer fees. Your obscure game community can’t just write mods and share content because everything’s locked down. So much creativity gets squashed before it even hits the drawing board because it’s just not feasible to do it.
It’s hard to know how to fight this battle. So much ground has been lost already, and big companies are reluctant to listen to the esoteric wishes of the hackers and makers that actually care about the freedom to squirt whatever through their own CPUs. Ultimately, though, you can still vote with your wallet. Don’t let Personal Computing become Consumer Computing, where you’re only allowed to run code that paid the corporate toll. Make sure the computers you’re paying for are doing what you want, not just what the executives approved of for their own gain. It’s your computer, it should run what you want it to!
[…] “Dietary modifications could be a new, natural and cost-effective approach to achieve better sleep,” […]
Previous studies have shown that getting too little sleep can drive people toward unhealthier eating patterns, often higher in fat and sugar. Yet, despite how sleep influences well-being and productivity, scientists have known far less about the reverse — how diet affects sleep itself.
While earlier research linked greater fruit and vegetable intake with people reporting better sleep, this study was the first to show a same-day relationship between diet and objectively measured sleep quality.
[…]
The scientists analyzed a measure called “sleep fragmentation,” which captures how often a person wakes up or shifts between lighter and deeper stages of sleep during the night.
What the Researchers Found
The results showed that daily eating habits were strongly connected to how well participants slept that night. Those who ate more fruits and vegetables — and consumed more complex carbohydrates such as whole grains — experienced longer periods of deep, undisturbed sleep.
According to the team’s analysis, people who met the CDC recommendation of five cups of fruits and vegetables per day could see an average 16 percent improvement in sleep quality compared with those who ate none.
“16 percent is a highly significant difference,” Tasali said. “It’s remarkable that such a meaningful change could be observed within less than 24 hours.”
[…]
Story Source:
Materials provided by University of Chicago Medical Center. Note: Content may be edited for style and length.
Journal Reference:
Hedda L. Boege, Katherine D. Wilson, Jennifer M. Kilkus, Waveley Qiu, Bin Cheng, Kristen E. Wroblewski, Becky Tucker, Esra Tasali, Marie-Pierre St-Onge. Higher daytime intake of fruits and vegetables predicts less disrupted nighttime sleep in younger adults. Sleep Health, 2025; 11 (5): 590 DOI: 10.1016/j.sleh.2025.05.003
Researchers at Trinity College Dublin have uncovered what they call a “universal thermal performance curve” (UTPC), a pattern that appears to apply to every living species on Earth. This curve describes how organisms respond to changes in temperature, and it seems to hold true across the entire spectrum of life. According to the scientists, the UTPC effectively “shackles evolution” because no species appears capable of escaping its influence on how temperature affects biological performance.
[…]
Rising Heat and Falling Performance
The study revealed a consistent trend in how organisms respond to warmth:
Performance increases gradually as temperature rises until reaching a peak (the optimum point).
Beyond this optimum, performance drops sharply.
When temperatures climb too high, overheating can cause physiological breakdown or death.
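As a rough illustration of that asymmetric shape, the toy function below rises gently toward an optimum temperature and collapses quickly above it. This is only a sketch for intuition; the function, parameter names, and values are assumptions, not the fitted UTPC model from the paper.

```python
import math

def thermal_performance(temp_c, t_opt=25.0, rise_width=10.0, crash_width=2.0):
    """Toy thermal performance curve: a gradual rise below the optimum
    temperature and a much sharper decline above it. Illustrative only;
    not the UTPC model fitted in the study."""
    if temp_c <= t_opt:
        # Gentle Gaussian-shaped rise toward the optimum
        return math.exp(-((temp_c - t_opt) / rise_width) ** 2)
    # Much narrower Gaussian above the optimum: performance crashes fast
    return math.exp(-((temp_c - t_opt) / crash_width) ** 2)
```

With these illustrative parameters, performance 5 °C below the optimum is still close to 78 percent of the peak, while 5 °C above it performance has all but collapsed, which mirrors the paper’s point that the viable range above the optimum is always the narrow side.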
These findings, published in the journal PNAS, suggest that species may face greater limits than previously thought when adapting to global climate change. As most regions continue to warm, the window of viable performance for many species could shrink.
One Curve, Many Temperatures
Andrew Jackson, Professor in Zoology in Trinity’s School of Natural Sciences, and co-author, said: “Across thousands of species and almost all groups of life including bacteria, plants, reptiles, fish and insects, the shape of the curve that describes how performance changes with temperature is very similar. However, different species have very different optimal temperatures, ranging from 5°C to 100°C, and their performance can vary a lot depending on the measure of performance being observed and the species in question.”
“That has led to countless variations on models being proposed to explain these differences. What we have shown here is that all the different curves are in fact the same exact curve, just stretched and shifted over different temperatures. And what’s more, we have shown that the optimal temperature and the critical maximum temperature at which death occurs are inextricably linked.”
“Whatever the species, it simply must have a smaller temperature range at which life is viable once temperatures shift above the optimum.”
[…]
Searching for the Exceptions
“The next step is to use this model as something of a benchmark to see if there are any species or systems we can find that may, subtly, break away from this pattern. If we find any, we will be excited to ask why and how they do it — especially given forecasts of how our climate is likely to keep warming in the next decades.”
Story Source:
Materials provided by Trinity College Dublin. Note: Content may be edited for style and length.
Journal Reference:
Jean-François Arnoldi, Andrew L. Jackson, Ignacio Peralta-Maraver, Nicholas L. Payne. A universal thermal performance curve arises in biology and ecology. Proceedings of the National Academy of Sciences, 2025; 122 (43) DOI: 10.1073/pnas.2513099122
Networking researcher Christoff Visser has found that Apple devices cause Wi-Fi networks to “jitter” due to traffic generated by the Apple Wireless Direct Link (AWDL) tech that powers the peer-to-peer AirDrop filesharing tool.
Visser presented his findings on Tuesday at the RIPE 91 conference, the biannual internetworking event organized by RIPE NCC, the regional internet registry for Europe, the Middle East and parts of Central Asia. In his talk, titled “Apple Wireless Direct Link: Apple’s Network Magic or Misery,” Visser explained that while using a new iPad he often encountered what he described as “very strange rhythmic stuttering” as he streamed audio to the device.
He used the Moonlight streaming test tool to investigate and found 20 millisecond latency, but with a 25 millisecond variance he felt was oddly high for the uncontested environment that is a local network. He next used Steam’s network testing tool, and found latency regularly bounced between three and 90 milliseconds. PING commands produced similar results, as did tests on different devices.
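Visser’s measurements are straightforward to reproduce on your own network: collect round-trip times from a tool like `ping` and look at the variance and spread rather than just the mean. The sketch below assumes you have already parsed the RTT samples into a list; the sample values are made up to mimic the periodic spikes of a radio hopping away to another channel and back.

```python
import statistics

def summarize_rtts(rtts_ms):
    """Summarize round-trip-time samples (in milliseconds): the mean tells
    you average latency, while the standard deviation and min-max spread
    reveal the jitter that shows up as rhythmic stuttering."""
    return {
        "mean": statistics.mean(rtts_ms),
        "stdev": statistics.pstdev(rtts_ms),
        "spread": max(rtts_ms) - min(rtts_ms),
    }

# RTTs that spike periodically, as when the radio hops to another
# channel every few seconds and back (illustrative values)
samples = [3, 4, 3, 5, 90, 4, 3, 88, 4, 3]
print(summarize_rtts(samples))
```

A healthy local network shows a spread of a few milliseconds; a spread approaching 90 ms, as Visser saw, is the signature of something periodically pulling the radio away.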
At this point, Visser felt confident his hardware and applications were not the reason for his streams stuttering.
Visser, who works at Japan’s IIJ Research Lab, dug into the situation and found AWDL constantly listens for requests to use AirDrop, and prefers to use certain “social” Wi-Fi channels – channel 6 for 2.4 GHz networks, and channels 44 and 149 for 5 GHz Wi-Fi.
As a networking engineer, Visser chose to use empty channels.
“It’s a big mistake,” he told the conference. “What ends up happening is that if you are not in one of these social channels, you get this periodic Wi-Fi channel swapping where it goes to the social channel, listens in [if] anybody wants to talk to it and swaps back to create very rhythmic stuttering.”
Visser suggested one way to avoid the issue is not to use AWDL but acknowledged that doing so means users of Apple devices will have to do without AirDrop and other Cupertino tricks like using an iPad as an external monitor for a Mac or mirroring an iPhone screen.
He doesn’t think cutting users off from those services is practical.
“There’s approximately over 1.5 billion other iPhone users in the world and are you really going to tell your users in your network ‘Don’t use the features on these Apple devices’. It’s not really a solution.
“The other option is to do the Apple way of networking, so for the best experience you use the same Wi-Fi channels as everybody else, or you will suffer from jitter at some point.”
He ended his talk by expressing his concerns about Apple’s ecosystem.
“There’s a lot of convenience, as I described,” he said. “The question is really: Is this convenience worth disruption?”
His answer was “For most things sure, it doesn’t matter too much.”
But he feels it will matter to more people in future.
“Cloud gaming and remote gaming is growing bigger and bigger and they are trying to push high fidelity, bigger bit rate, if you are trying to do 4k HDR at 120 FPS, yes you are going to start to feel these delays and packet loss more and more.”
“It makes me uncomfortable because it really promotes bad network practices like not using the best channels to actually improve your end user experience,” he added.
He therefore grudgingly recommended using the Wi-Fi channels Apple uses, and expressed his hope that any folks from ISPs in the audience can learn from his experience so that if their customers experience network jitters they now have an explanation.
Data-center developers are running into a severe power bottleneck as they rush to build bigger facilities to capitalize on generative AI’s potential. Normally, they would power these centers by connecting to the grid or building a power plant onsite. However, they face major delays in either securing gas turbines or in obtaining energy from the grid.
At the Data Center World Power show in San Antonio in October, natural-gas power provider ProEnergy revealed an alternative—repurposed aviation engines. According to Landon Tessmer, vice president of commercial operations at ProEnergy, some data centers are using his company’s PE6000 gas turbines to provide the power needed during the data center’s construction and during its first few years of operation. When grid power is available, these machines either revert to a backup role, supplement the grid, or are sold to the local utility.
“We have sold 21 gas turbines for two data-center projects amounting to more than 1 gigawatt,” says Tessmer. “Both projects are expected to provide bridging power for five to seven years, which is when they expect to have grid interconnection and no longer need permanent behind-the-meter generation.”
[…]
It is a common and long-established practice for gas-turbine original equipment manufacturers (OEMs) like GE Vernova and Siemens Energy to convert a successful aircraft engine for stationary electric-power generation applications. Known as aeroderivative gas turbines[…] “It takes a lot to industrialize an aviation engine and make it generate power,” […] To make it suitable for power generation, it needed an expanded turbine section to convert engine thrust into shaft power, a series of struts and supports to mount it on a concrete deck or steel frame, and new controls. Further modifications typically include the development of fuel nozzles that let the machine run on natural gas rather than aviation fuel, and a combustor that minimizes the emission of nitrogen oxides, a major pollutant.
[…]
ProEnergy buys and overhauls used CF6-80C2 engine cores—the central part of the engine where combustion occurs—and matches them with newly manufactured aeroderivative parts made either by ProEnergy or its partners. After assembly and testing, these refurbished engines are ready for a second life in electric-power generation, where they provide 48 megawatts, enough to power a small-to-medium data center (or a town of perhaps 20,000 to 40,000 households). According to Tessmer, approximately 1,000 of these aircraft engines are expected to be retired over the next decade, so there’s no shortage of them. A large data center may have demand that exceeds 100 MW, and some of the latest data centers being designed for AI are more than 1 GW.
[…]
ProEnergy sells two-turbine blocks in a standard configuration consisting of gas turbines, generators, and a host of other gear: systems that cool the air entering the turbine on hot days to boost performance, selective catalytic reduction systems to cut emissions, and various electrical systems.
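The capacity figures quoted above are easy to sanity-check. A minimal sketch; the household estimate simply assumes roughly 1.2 to 2.4 kW of average demand per household, which is the assumption that reproduces the article's 20,000-to-40,000 range:

```python
MW_PER_TURBINE = 48  # output of one refurbished CF6-80C2-based unit

# One standard ProEnergy block pairs two turbines.
block_mw = 2 * MW_PER_TURBINE

# The two data-center projects mentioned bought 21 turbines in total.
fleet_mw = 21 * MW_PER_TURBINE

print(f"One two-turbine block: {block_mw} MW")
print(f"21-turbine fleet: {fleet_mw} MW")  # just over 1 GW, as stated

# Households served per turbine, assuming 1.2-2.4 kW average demand each
# (an assumed figure, not from the article).
lo = int(MW_PER_TURBINE * 1000 / 2.4)
hi = int(MW_PER_TURBINE * 1000 / 1.2)
print(f"Households per turbine: {lo:,} to {hi:,}")
```

Twenty-one turbines at 48 MW each is 1,008 MW, consistent with Tessmer's "more than 1 gigawatt" for the two projects.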
[…] The Milky Way is anything but static. It rotates and it wobbles, and new observations from the European Space Agency’s Gaia space telescope now reveal another motion: a giant wave moving outward from the galaxy’s centre.
For roughly a century, astronomers have known that stars orbit the galactic centre, and Gaia has mapped their speeds and paths. Since the 1950s, researchers have recognized that the Milky Way’s disc is warped. In 2020, Gaia showed that this disc also wobbles over time, similar to a spinning top.
It is now clear that a vast ripple influences stellar motions across distances of tens of thousands of light-years from the Sun. Like waves spreading from a stone dropped into a pond, this stellar ripple spans a large stretch of the Milky Way’s outer disc.
The European Space Agency’s (ESA) Gaia space telescope has revealed that our Milky Way galaxy has a giant wave rippling outwards from its center. In the left image, we look at our galaxy from ‘above’. On the right, we see across a vertical slice of the galaxy and look at the wave side-on. In this perspective, the Sun is located between the line of sight and the bulge of the galaxy. This perspective also reveals that the ‘left’ side of the galaxy curves upward and the other side curves downward (this is the warp of the disc). The newly discovered wave is indicated in red and blue: in red areas, the stars lie above, and in blue areas the stars lie below the warped disc of the galaxy. Credit: ESA/Gaia/DPAC, S. Payne-Wardenaar, E. Poggio et al (2025)
The unexpected galactic ripple is illustrated in this figure above. Here, the positions of thousands of bright stars are shown in red and blue, overlaid on Gaia’s maps of the Milky Way.
[…]
The Scale of the Wave
From these maps, we can see that the wave stretches over a huge portion of the galactic disc, affecting stars at least 30,000 to 65,000 light-years from the centre of the galaxy (for comparison, the Milky Way is around 100,000 light-years across).
The great wave could also be related to a smaller-scale rippling motion seen 500 light-years from the Sun and extending over 9000 light-years, the so-called Radcliffe Wave.
“However, the Radcliffe Wave is a much smaller filament, and located in a different portion of the galaxy’s disc compared to the wave studied in our work (much closer to the Sun than the great wave). The two waves may or may not be related. That’s why we would like to do more research,” Eloisa Poggio adds.
“The upcoming fourth data release from Gaia will include even better positions and motions for Milky Way stars, including variable stars like Cepheids. This will help scientists to make even better maps, and thereby advance our understanding of these characteristic features in our home galaxy,” says Johannes Sahlmann, ESA’s Gaia Project Scientist.
Reference: “The great wave – Evidence of a large-scale vertical corrugation propagating outwards in the Galactic disc” by E. Poggio, S. Khanna, R. Drimmel, E. Zari, E. D’Onghia, M. G. Lattanzi, P. A. Palicio, A. Recio-Blanco and L. Thulasidharan, 14 July 2025, Astronomy & Astrophysics. DOI: 10.1051/0004-6361/202451668
Geographic atrophy due to age-related macular degeneration (AMD) is the leading cause of irreversible blindness and affects more than 5 million persons worldwide. No therapies to restore vision in such persons currently exist. The photovoltaic retina implant microarray (PRIMA) system combines a subretinal photovoltaic implant and glasses that project near-infrared light to the implant in order to restore sight to areas of central retinal atrophy.
Methods
We conducted an open-label, multicenter, prospective, single-group, baseline-controlled clinical study in which the vision of participants with geographic atrophy and a visual acuity of at least 1.2 logMAR (logarithm of the minimum angle of resolution) was assessed with PRIMA glasses and without PRIMA glasses at 6 and 12 months. The primary end points were a clinically meaningful improvement in visual acuity (defined as ≥0.2 logMAR) from baseline to month 12 after implantation and the number and severity of serious adverse events related to the procedure or device through month 12.
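For readers unfamiliar with logMAR, the entry criterion and the end point translate into more familiar units via two standard conversions: the Snellen denominator on a 20-foot chart is 20 × 10^logMAR, and each 0.1 logMAR is one line on an ETDRS chart. A quick sketch:

```python
def logmar_to_snellen(logmar: float) -> float:
    """Snellen denominator for a 20-foot chart: 20 * 10**logMAR."""
    return 20 * 10 ** logmar

# Entry criterion: acuity of 1.2 logMAR or worse.
print(f"1.2 logMAR ~ 20/{logmar_to_snellen(1.2):.0f}")  # about 20/317

# A clinically meaningful gain of 0.2 logMAR is two ETDRS lines,
# i.e. a 10**0.2 ~ 1.58x finer angle of resolution.
print(f"0.2 logMAR gain = {10 ** 0.2:.2f}x improvement")
```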
Results
A total of 38 participants received a PRIMA implant, of whom 32 were assessed at 12 months. Of the 6 participants who were not assessed, 3 had died, 1 had withdrawn, and 2 were unavailable for testing. Among the 32 participants who completed 12 months of follow-up, the PRIMA system led to a clinically meaningful improvement in visual acuity from baseline in 26 (81%; 95% confidence interval, 64 to 93; P<0.001). Using multiple imputation to account for the 6 participants with missing data, we estimated that 80% (95% CI, 66 to 94; P<0.001) of all participants would have had a clinically meaningful improvement at 12 months. A total of 26 serious adverse events occurred in 19 participants. Twenty-one of these events (81%) occurred within 2 months after surgery, of which 20 (95%) resolved within 2 months after onset. The mean natural peripheral visual acuity after implantation was equivalent to that at baseline.
Conclusions
In this study involving 38 participants with geographic atrophy due to AMD, the PRIMA system restored central vision and led to a significant improvement in visual acuity from baseline to month 12. (Funded by Science Corporation and the Moorfields National Institute for Health and Care Research Biomedical Research Centre; PRIMAvera ClinicalTrials.gov number, NCT04676854.)
Amazon Web Services (AWS) is currently experiencing a major outage that has taken down online services, including Amazon, Alexa, Snapchat, Fortnite, and more. The AWS status checker is reporting that multiple services are “impacted” by operational issues, and that the company is “investigating increased error rates and latencies for multiple AWS services in the US-EAST-1 Region” — though outages are also impacting services in other regions globally.
Users on Reddit are reporting that the Alexa smart assistant is down and unable to respond to queries or complete requests, and in my own experience, I found that routines like pre-set alarms are not functioning. The AWS issue also appears to be impacting platforms running on its cloud network, including Perplexity, Airtable, Canva, and the McDonalds app. The cause of the outage hasn’t been confirmed, and it’s unclear when regular service will be restored.
“Perplexity is down right now,” Perplexity CEO Aravind Srinivas said on X. “The root cause is an AWS issue. We’re working on resolving it.”
The AWS dashboard first reported issues affecting the US-EAST-1 Region at 3:11AM ET. “We are actively engaged and working to both mitigate the issue and understand root cause. We will provide an update in 45 minutes, or sooner if we have additional information to share,” Amazon said in an update published at 3:51AM ET.
The service provides cloud computing and API services to major websites, popular apps, and platforms across the world, meaning users have been experiencing issues across a huge swath of the internet as the UK starts its working week.
[…]
We will be keeping an updated list of the websites, apps, games, and more that are impacted. It includes:
Windows Recovery Environment (RE), as the name suggests, is a built-in set of tools inside Windows that allows you to troubleshoot your computer, including booting into the BIOS or starting the computer in safe mode. It’s a crucial piece of software that has now, unfortunately, been rendered useless (for many) as part of the latest Windows update. A new bug discovered in Windows 11’s October build, KB5066835, makes your USB keyboard and mouse stop working entirely, so you cannot interact with the recovery UI at all.
This problem has already been recognized and highlighted by Microsoft, which confirmed that a fix is on its way. Any plugged-in peripherals will continue to work just fine inside the actual operating system, but as soon as you go into Windows RE, your USB keyboard and mouse will become unresponsive. It’s important to note that if your PC fails to start up for any reason, it defaults to the recovery environment to, you know, recover and diagnose whatever might’ve been preventing it from booting normally.
Note that those hanging onto old PS/2-connector-equipped keyboards and mice seem to be unaffected by this latest Windows software gaffe.
If you twist something — say, spin a top or rotate a robot’s arm — and want it to return to its exact starting point, intuition says you’d need to undo every twist one by one. But mathematicians Jean-Pierre Eckmann from the University of Geneva and Tsvi Tlusty from the Ulsan National Institute of Science and Technology (UNIST) have found a surprising shortcut. As they describe in a new study, nearly any sequence of rotations can be perfectly undone by scaling its size and repeating it twice.
Like a mathematical Ctrl+Z, this trick sends nearly any rotating object back to where it began.
“It is actually a property of almost any object that rotates, like a spin or a qubit or a gyroscope or a robotic arm,” Tlusty told New Scientist. “If [objects] go through a highly convoluted path in space, just by scaling all the rotation angles by the same factor and repeating this complicated trajectory twice, they just return to the origin.”
A Hidden Symmetry of Motion
A random walk on SO(3) shown as a trajectory in a ball of radius π, where a rotation R(n,ω) is mapped to the point r = nω and antipodal boundary points are identified, nπ ≡ −nπ (making the ball the real projective space RP3). The walk traverses from the center (small red sphere) to the blue end. Crossings of antipodal points are indicated by dotted lines. Credit: Physical Review Letters.
Mathematicians represent rotations using a space called SO(3) — a three-dimensional map where every point corresponds to a unique orientation. At the very center lies the identity rotation: the object’s original state. Normally, retracing a complex path through this space wouldn’t bring you back to that center. But Eckmann and Tlusty found that scaling all rotation angles by a single factor before repeating the motion twice acts like a geometric reset.
So for example:
If your first rotation sequence tilted the object 75 degrees this way, 20 degrees that way, and so on, you could scale all those angles by a single, suitably chosen factor (say, 0.3) and then run that shortened version two times in a row.
After those two runs, the object returns perfectly to its starting position — as if nothing had ever happened.
In their proof, the researchers blended a 19th-century tool for combining rotations (Rodrigues’ rotation formula) with Hermann Minkowski’s theorem from number theory. Together, these revealed that “almost every walk in SO(3) or SU(2), even a very complicated one, will preferentially return to the origin simply by traversing the walk twice in a row and uniformly scaling all rotation angles.”
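The existence of such a scaling factor is easy to verify numerically in a simple case. The sketch below is an illustration, not the authors' construction: it composes two rotations about perpendicular axes via Rodrigues' formula, and uses the fact that a rotation R satisfies R·R = I exactly when it is a half-turn, i.e. when trace(R) = −1, to search for the scaling λ at which the doubled walk returns to the identity:

```python
import numpy as np

def axis_angle(axis, angle):
    """Rotation matrix from Rodrigues' formula."""
    n = np.asarray(axis, dtype=float)
    n /= np.linalg.norm(n)
    K = np.array([[0, -n[2], n[1]], [n[2], 0, -n[0]], [-n[1], n[0], 0]])
    return np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)

# A two-step "walk": rotate by 4 rad about z, then 2 rad about x.
steps = [((0, 0, 1), 4.0), ((1, 0, 0), 2.0)]

def composed(lam):
    """Compose the walk with every angle scaled by lam."""
    R = np.eye(3)
    for axis, angle in steps:
        R = axis_angle(axis, lam * angle) @ R
    return R

# R @ R = I iff R is a half-turn, i.e. trace(R) = -1.
# Scan lam for the point where the trace reaches -1.
lams = np.linspace(0.0, 1.0, 20001)
traces = np.array([np.trace(composed(l)) for l in lams])
lam_star = lams[np.argmin(np.abs(traces + 1))]

R = composed(lam_star)
residual = np.linalg.norm(R @ R - np.eye(3))
print(f"lam* = {lam_star:.4f}, |R.R - I| = {residual:.2e}")
# For these two angles lam* works out to pi/4: the scaled walk,
# run twice, undoes itself.
```

For this pair of perpendicular rotations the answer can be checked by hand with quaternions: the scalar part of the composed rotation is cos(2λ)·cos(λ), which vanishes at λ = π/4, giving a half-turn.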
Why This Matters
Why should you care, though? Well, rotations are everywhere: in gyroscopes, MRI machines, and quantum computers. Any technique that can reliably “reset” them could have broad uses. In magnetic resonance imaging (MRI), for example, atomic nuclei constantly spin in magnetic fields. Small errors in those spins can blur the resulting images. The new insight could help engineers design sequences that cleanly undo unwanted rotations.
Quantum devices, built around spinning qubits, might also benefit. Since qubits evolve through quantum rotations described by SU(2), a universal reset rule could help stabilize computations. “No matter how tangled the history of rotations,” Tlusty said in the UNIST press release, “there exists a simple recipe: rescale the driving force and apply it twice.”
And in robotics, the principle might enable machines that can roll or pivot endlessly without drifting off course. “Imagine if we had a robot that could morph between any solid body shape, it could then follow any desired path simply through morphing of shape,” said Josie Hughes of the Swiss Federal Institute of Technology Lausanne in an interview with New Scientist.
As Eckmann put it, the discovery shows “how rich mathematics can be even in a field as well-trod as the study of rotations.” It’s a rare kind of elegance: a universal law that hides in plain sight, waiting for someone to give the world a gentle twist — and then do it again.
Microsoft’s October Windows 11 update has managed the impressive feat of breaking localhost, leaving developers unable to access web applications running on their own machines.
The problem first surfaced on Microsoft’s own support forums and quickly spread to Stack Overflow and Server Fault after the October 2025 cumulative update (KB5066835) landed, which appears to have severed Windows’ ability to talk to itself.
Developers describe HTTP/2 protocol errors and failed connections affecting everything from ASP.NET builds to Visual Studio debugging sessions.
The bug, introduced in build 26100.6899, has been traced to HTTP.sys, the Windows kernel component that handles local HTTP traffic. Developers have found that uninstalling KB5066835, and in some cases its sibling KB5065789, restores localhost functionality.
Others have discovered a temporary workaround that involves manually disabling HTTP/2 in the registry, which works but feels a bit like using a sledgehammer to swat a fly.
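Reports of that workaround describe switching HTTP/2 off at the HTTP.sys level. A sketch of what that looks like as a .reg file, assuming the documented HTTP.sys parameters `EnableHttp2Tls` and `EnableHttp2Cleartext` (back up the registry first, and re-enable once a fix ships):

```reg
Windows Registry Editor Version 5.00

; Disable HTTP/2 in HTTP.sys for TLS and cleartext connections.
; A reboot is required for HTTP.sys to pick up the change.
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\HTTP\Parameters]
"EnableHttp2Tls"=dword:00000000
"EnableHttp2Cleartext"=dword:00000000
```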
At the time of writing, Microsoft had yet to acknowledge the issue. Users report mixed results when trying to reinstall the patch or roll forward to newer builds. The problem appears to vanish on clean installs of Windows 11 24H2, suggesting that the error stems from a conflict in how the update interacts with existing system configurations, rather than being a universal bug.
In the meantime, moderators on Stack Overflow have already locked multiple posts and Server Fault threads are filled with frustrated devs trying to get their local servers running again.
All this comes as Microsoft pushed its final update for Windows 10 this week, officially ending support for the decade-old OS and urging users to move to Windows 11.
The transition hasn’t exactly been buttery smooth. Microsoft’s Windows 11 media creation tool also stopped working the day before, potentially affecting users trying to upgrade, and the same patch cycle saw end-of-support deadlines for Office 2019 and multiple server products.
All this means that, within the same week, Microsoft’s installer broke, its new OS borked local development, and Redmond’s multimillion-dollar upgrade push instead highlighted how fragile its ecosystem still is.
It’s almost enough to make you nostalgic for Clippy. We said almost. ®
Updated at 9.54 UTC on October 17, 2025, to add:
More than twenty-four hours after we asked Microsoft to comment, a spokesperson for the company sent a statement confirming the problems.
“We are actively working on mitigations and recommend customers follow our guidance available here.”
Alan’s Factory Outlet surveyed 1,000 American drivers to explore a messy but relatable problem: bird droppings on cars. By combining survey responses with research on bird behavior and parking habits, this report uncovered which vehicles are hit the hardest, which colors attract the most mess, and how much money drivers spend cleaning up. The findings reveal not only surprising insights but also the importance of having protection like carports and garages.
Key Takeaways
Ram, Jeep, and Chevrolet are the top three vehicles most frequently targeted by bird droppings.
Brown, red, and black cars attract the most bird poop, according to drivers.
Over 1 in 2 Americans (58%) say their car has been pooped on more than once in the same day.
29% of Americans feel like birds have “targeted” their vehicle.
Nearly 1 in 4 Americans (24%) spend over $500 each year on car washes and repairs due to bird droppings.
1 in 5 Americans (21%) would invest in a car cover or garage to avoid bird mess, and they’d pay an average of $50/month for better protection.
Car Brands and Colors Birds Target Most
Car owners often debate whether certain makes or colors are more vulnerable to bird mess, and the data from our survey suggests they may be right.
Ram, Jeep, and Chevrolet topped the list of vehicles most likely to be splattered. Other frequently targeted brands included Nissan, Dodge, and Kia, while Tesla, Audi, and Subaru also made the top ten. This spread shows that both domestic and imported brands are at risk. Color also played a noticeable role. Brown, red, and black cars drew the most unwanted attention from above, while lighter colors like white and silver/gray ranked lower.
For many drivers, bird droppings are a regular headache. Over half of Americans (58%) said their car had been pooped on more than once in the same day, and nearly a third (29%) felt like birds had personally “targeted” them. Lexus (47%), Tesla (39%), and Dodge (35%) drivers felt the most targeted by birds.
More than 1 in 10 drivers (11%) even reported paint damage caused by droppings. These experiences often lead to frequent car washes. Over half of drivers (57%) have paid for a car wash specifically to clean off bird droppings, and 39% said they have to wash their cars multiple times a month because of it.
The costs add up quickly. Nearly 1 in 4 drivers (24%) spent more than $500 annually on car washes and repairs related to bird mess. Tesla and BMW owners were among the most impacted, with two-thirds of each brand spending over $500 per year.
Parking Habits and Prevention Attempts
Parking choices made a big difference in how often cars were hit.
Nearly one-third of Americans (29%) had changed their usual parking spot to steer clear of bird droppings, while 55% admitted their current setup provided little to no protection. Many went out of their way for a cleaner car: 38% said they would walk up to a block just to avoid parking under “poop zones.” Drivers of Toyota (17%), Honda (15%), and Chevrolet (7%) vehicles were the most likely to make these adjustments.
Bird droppings even disrupted daily life for some. More than 1 in 20 Americans (6%) had canceled or delayed plans because their car was too dirty, and over 1 in 10 (14%) had gotten droppings on themselves while getting in or out of their vehicle.
To prevent the mess, about 1 in 5 Americans (21%) said they would invest in a car cover or garage addition, with many willing to spend around $50 per month for added protection. Covered options such as carports also offered a practical solution for drivers looking to avoid these costly and frustrating cleanups.
[…] a technique called EtherHiding, hiding malware inside blockchain smart contracts to sneak past detection and ultimately swipe victims’ crypto and credentials, according to Google’s Threat Intelligence team.
A Pyongyang goon squad that GTIG tracks as UNC5342 has been using this method since February in its Contagious Interview campaign, we’re told.
The criminals pose as recruiters, posting fake profiles on social media along the lines of Lazarus Group’s Operation Dream Job, which tricked job seekers into clicking on malicious links. But in this case, the Norks target software developers, especially those working in cryptocurrency and tech, trick them into downloading malware disguised as a coding test, and ultimately steal sensitive information and cryptocurrency, while gaining long-term access to corporate networks.
Hiding on the blockchain
To do this, they use EtherHiding, which involves embedding malicious code into a smart contract on a public blockchain, turning the blockchain into a decentralized and stealthy command-and-control server.
Because it’s decentralized, there isn’t a central server for law enforcement to take down, and the blockchain makes it difficult to trace the identity of whoever deployed the smart contract. This also allows attackers to retrieve malicious payloads using read-only calls with no visible transaction history on the blockchain.
“In essence, EtherHiding represents a shift toward next-generation bulletproof hosting, where the inherent features of blockchain technology are repurposed for malicious ends,” Google’s threat hunters Blas Kojusner, Robert Wallace, and Joseph Dobson said in a Thursday report.
[…]
“EtherHiding presents new challenges as traditional campaigns have usually been halted by blocking known domains and IPs,” the security researchers wrote. “Malware authors may leverage the blockchain to perform further malware propagation stages since smart contracts operate autonomously and cannot be shut down.”
The good news: there are steps administrators can take to prevent EtherHiding attacks, with the first – and most direct – being to block malicious downloads. This typically involves setting policy to block certain types of files including .exe, .msi, .bat, and .dll.
Admins can also set policy to block access to known malicious websites and URLs of blockchain nodes, and enforce safe browsing via policies that use real-time threat intelligence to warn users of phishing sites and suspicious downloads.
SpaceX may be guilty of violating regulatory standards by using a classified network of satellites to transmit data to Earth on radio frequencies reserved for uplinking signals, according to a citizen scientist who tracks satellites in Earth orbit.
Scott Tilley, an amateur satellite tracker in Canada, accidentally detected space-to-Earth emissions on a radio frequency band reserved for transmitting data from Earth to space, NPR first reported. The signals were traced to SpaceX’s Starshield, an encrypted version of the Starlink satellites used for national security efforts.
Using an unauthorized frequency to downlink data to Earth violates radio regulations set by the International Telecommunication Union (ITU) and could potentially interfere with other satellites’ ability to receive signals from Earth, according to a report by Tilley.
[…]
Although there’s little information shared about Starshield, Tilley was able to detect signals from 170 satellites in the 2025 to 2110 MHz range. This specific band of the radio spectrum is reserved for uplinking data from Earth to orbiting satellites and therefore should not carry any signals going the other way.
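The arithmetic of the violation is simple enough to express in a few lines. A toy sketch (the frequencies are illustrative, not Tilley's actual measurements) that flags any space-to-Earth emission whose center frequency falls inside the 2025–2110 MHz Earth-to-space allocation:

```python
UPLINK_BAND_MHZ = (2025.0, 2110.0)  # reserved for Earth-to-space signals

def violates_uplink_band(center_mhz: float, direction: str) -> bool:
    """A downlink ('space-to-Earth') emission inside the uplink band
    runs contrary to the ITU allocation for this range."""
    lo, hi = UPLINK_BAND_MHZ
    return direction == "space-to-Earth" and lo <= center_mhz <= hi

# Illustrative observations, not real measurements:
observations = [
    (2088.5, "space-to-Earth"),  # inside the band: a violation
    (2200.0, "space-to-Earth"),  # a common S-band downlink range: fine
    (2050.0, "Earth-to-space"),  # uplink in the uplink band: fine
]
flagged = [f for f, d in observations if violates_uplink_band(f, d)]
print(flagged)  # [2088.5]
```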
“Nearby satellites could receive radio-frequency interference and could perhaps not respond properly to commands—or ignore commands—from Earth,”
[…]
Because the ITU doesn’t impose fines for regulatory violations, SpaceX will likely face no consequences for using an unauthorized frequency band or for potentially interfering with other satellite signals. The company is known for pushing regulatory boundaries to further its position as a leader in the industry.
Amazon’s surveillance camera maker Ring announced a partnership on Thursday with Flock, a maker of AI-powered surveillance cameras that share footage with law enforcement.
Now agencies that use Flock can request that Ring doorbell users share footage to help with “evidence collection and investigative work.”
Flock cameras work by scanning the license plates and other identifying information about cars they see. Flock’s government and police customers can also make natural language searches of their video footage to find people who match specific descriptions. However, AI-powered technology used by law enforcement has been proven to exacerbate racial biases.
On the same day that Ring announced this partnership, 404 Media reported that ICE, the Secret Service, and the Navy had access to Flock’s network of cameras. By partnering with Ring, Flock could potentially access footage from millions more cameras.
Ring has long had a poor track record with keeping customers’ videos safe and secure. In 2023, the FTC ordered the company to pay $5.8 million over claims that employees and contractors had unrestricted access to customers’ videos for years.
For more on Flock cameras and how unsecured and dangerous these things are (and also how to join a network of people monitoring this pervasive surveillance) click here.
Hackers stole the personal information of over 17.6 million people after breaching the systems of financial services company Prosper.
Prosper operates as a peer-to-peer lending marketplace that has helped over 2 million customers secure more than $30 billion in loans since its founding in 2005.
As the company disclosed one month ago on a dedicated page, the breach was detected on September 2, but Prosper has yet to find evidence that the attackers gained access to customer accounts and funds.
However, the attackers stole data belonging to Prosper customers and loan applicants. The company hasn’t shared what information was exposed beyond Social Security numbers because it’s still investigating what data was affected.
[…]
“We have evidence that confidential, proprietary, and personal information, including Social Security Numbers, was obtained, including through unauthorized queries made on Company databases that store customer information and applicant data.
[…]
While Prosper didn’t share how many customers were affected by this data breach, data breach notification service Have I Been Pwned revealed the extent of the incident on Thursday, reporting that it affected 17.6 million unique email addresses.
The stolen information also includes customers’ names, government-issued IDs, employment status, credit status, income levels, dates of birth, physical addresses, IP addresses, and browser user agent details.
There is also no mention of how easy it was to perform these “unauthorised queries” on the database, or of why there is a gap between the 2 million customers and the 17.6 million records.
Part of the magic in the hugely popular Grand Theft Auto (GTA) video games is how well they pack pop-culture parodies into their virtual worlds. Like, between normal songs, the in-game radio stations have talk shows and ads that sound like they could be real until you pay attention. A gaming and tech enthusiast in Germany has taken that meta aspect to another level, building a Raspberry Pi-based device that lets him use the in-game radio in his car in real life.
This little 12-volt-socket-powered dongle has a surprisingly polished appearance with a tiny display for each game radio station and a handy knob to cycle between them. The audio and icon files are stored within the device.
@ZeugUndKram/YouTube
A Raspberry Pi is just a tiny computer with no screen, body, or peripherals. Tech hobbyists like them because they’re small and inexpensive, but powerful enough to do computer processing.
The GTA radio stations have themes just like real ones—there’s a pop channel, a country channel, an angry-screaming-pundit channel, and many more. But the DJ interludes and commercials are the funny part—they mostly sound like normal radio chatter, then veer into wacky/raunchy/unsubtle culture-mocking.
As for listening to the game radio stations in a real car, the cheapest and fastest way to do it would probably be to simply cue up a YouTube video about the game station you want to hear (there are a bunch on YT) and beam it to your car through Bluetooth like Spotify or Netflix or whatever app you normally listen to.
However, the custom-made solution we found today is far cooler. As outlined on the YouTube channel Zeug und Kram (which means “stuff and junk” in German), the setup here is essentially a 12-volt charger and Bluetooth radio transmitter mated to a Raspberry Pi with a tiny circular screen on top, all neatly integrated together in a rather elegant 3D-printed housing.
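The station-cycling logic at the heart of such a build is simple. A minimal sketch, with station names taken from the games but file names hypothetical, and audio playback and rotary-encoder wiring omitted:

```python
from dataclasses import dataclass

@dataclass
class Station:
    name: str
    audio_file: str  # hypothetical path on the Pi's SD card
    icon_file: str   # icon shown on the tiny circular display

STATIONS = [
    Station("Non-Stop-Pop FM", "nonstop_pop.mp3", "nonstop_pop.png"),
    Station("Blaine County Radio", "blaine_county.mp3", "blaine_county.png"),
    Station("Rebel Radio", "rebel_radio.mp3", "rebel_radio.png"),
]

class Tuner:
    """Tracks the selected station; the knob would call next()/prev()."""
    def __init__(self, stations):
        self.stations = stations
        self.index = 0

    def next(self):
        self.index = (self.index + 1) % len(self.stations)
        return self.stations[self.index]

    def prev(self):
        self.index = (self.index - 1) % len(self.stations)
        return self.stations[self.index]

tuner = Tuner(STATIONS)
print(tuner.next().name)   # Blaine County Radio
print(tuner.prev().name)   # Non-Stop-Pop FM
```

In the real device, `next()`/`prev()` would additionally swap the audio stream feeding the Bluetooth transmitter and redraw the station icon.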
The video we’ll embed below explains how it came together. It’s also outlined on Instructables if you want to try and replicate the project yourself. Objectively speaking, it’s not particularly useful per se, but it’s a great execution of a creative idea.
If you don’t speak German, YouTube does a good job of auto-translating with closed captions (hit the gear button to find that menu).
Microsoft’s Threat Intelligence team has sounded the alarm over a new financially motivated cybercrime spree that is raiding US university payroll systems.
In a blog post, Redmond said a cybercrime crew it tracks as Storm-2657 has been targeting university employees since March 2025, hijacking salaries by breaking into HR software such as Workday.
The attack is as audacious as it is simple: compromise HR and email accounts, quietly change payroll settings, and redirect pay packets into attacker-controlled bank accounts. Microsoft has dubbed the operation “payroll pirate,” a nod to the way crooks plunder staff wages without touching the employer’s systems directly.
Storm-2657’s campaign begins with phishing emails designed to harvest multifactor authentication (MFA) codes using adversary-in-the-middle (AiTM) techniques. Once in, the attackers breach Exchange Online accounts and insert inbox rules to hide or delete HR messages. From there, they use stolen credentials and SSO integrations to access Workday and tweak direct deposit information, ensuring that future payments go straight to them.
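One practical detection idea that follows from this pattern is auditing newly created inbox rules for ones that silently delete or divert payroll-related mail. A simplified illustration, not Microsoft's detection logic; the rule fields and keyword list are hypothetical:

```python
# Keywords that payroll-theft inbox rules are typically built to suppress.
SUSPICIOUS_KEYWORDS = {"payroll", "direct deposit", "bank account", "workday"}
SUSPICIOUS_ACTIONS = {"delete", "move_to_junk", "mark_read_and_archive"}

def is_suspicious_rule(rule: dict) -> bool:
    """Flag rules that hide mail mentioning payroll-change activity."""
    words = {w.lower() for w in rule.get("keywords", [])}
    return bool(words & SUSPICIOUS_KEYWORDS) and rule["action"] in SUSPICIOUS_ACTIONS

rules = [
    {"name": "tidy newsletters", "keywords": ["digest"], "action": "move_to_junk"},
    {"name": "rule-1", "keywords": ["Direct Deposit"], "action": "delete"},
]
print([r["name"] for r in rules if is_suspicious_rule(r)])  # ['rule-1']
```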
Microsoft stresses that the attacks don’t exploit a flaw in Workday itself. The weak points are poor MFA hygiene and sloppy configurations, with Redmond warning that organizations still relying on legacy or easily phished MFA are sitting ducks.
“Since March 2025, we’ve observed 11 successfully compromised accounts at three universities that were used to send phishing emails to nearly 6,000 email accounts across 25 universities,” Microsoft explained. It says these lures were crafted with academic precision: fake HR updates, reports of faculty misconduct, or notes about illness clusters, often linked through shared Google Docs to bypass filtering and appear routine.
In one instance, a phishing message urging recipients to “check their illness exposure status” was sent to 500 people within a single university, and only about 10 percent flagged it as suspicious, according to Microsoft.
An Austrian digital privacy group has claimed victory over Microsoft after the country’s data protection regulator ruled the software giant “illegally” tracked students via its 365 Education platform and used their data.
noyb said the ruling [PDF] by the Austrian Data Protection Authority also confirmed that Microsoft had tried to shift responsibility for access requests to local schools, and the software and cloud giant would have to explain how it used user data.
The ruling could have far-reaching effects for Microsoft and its obligations to inform Microsoft 365 users across Europe about what it is doing with their data, noyb argues.
The complaint dates back to the COVID-19 pandemic, when schools rapidly shifted to online learning, using the likes of 365 Education.
The privacy group said: “Microsoft shifted all responsibility to comply with privacy laws onto schools and national authorities – that have little to no actual control over the use of student data.”
When the complainant filed an access request to see what information was being processed, “this led to massive finger pointing: Microsoft simply referred the complainant to its local school.”
But the school and education authorities could only provide minimal information. The school, for example, could not access information that rested with Microsoft. “No one felt able to comply with GDPR rights.”
This prompted a complaint against the school, national and local education authorities, and Microsoft.
The ruling, machine translated, said: “It is determined that Microsoft, as a controller, violated the complainant’s right of access (Art. 15 GDPR) by failing to provide complete information about the data processed when using Microsoft Education 365.”
Microsoft was ordered to provide complete information about the data transmitted, and to provide clear explanations of terms such as “internal reporting,” “business modelling” and “improvement of core functionality.” It must also disclose if information was transferred to third parties.
Climate change has pushed warm-water coral reefs past a point of no return, marking the first time a major climate tipping point has been crossed, according to a report released on Sunday by an international team in advance of the United Nations Climate Change Conference COP30 in Brazil this November.
Tipping points include global ice loss, Amazon rainforest loss, and the possible collapse of vital ocean currents. Once crossed, they will trigger self-perpetuating and irreversible changes that will lead to new and unpredictable climate conditions. But the new report also emphasizes progress on positive tipping points, such as the rapid rollout of green technologies.
[…]
The world is entering a “new reality” as global temperatures will inevitably overshoot the goal of staying within 1.5°C of pre-industrial averages set by the Paris Climate Agreement in 2015, warns the Global Tipping Points Report 2025, the second iteration of a collaboration focused on key thresholds in Earth’s climate system.
[…]
“The marine heat wave hit 80 percent of the world’s warm-water coral reefs with the worst bleaching event on record,” said Smith. “Their response confirms that we can no longer talk about tipping points as a future risk. The widespread dieback of warm-water coral reefs is already underway, and it’s impacting hundreds of millions of people who depend on the reef for fishing, for tourism, and for coastal protection from rising seas and storm surges.”
The report singled out Caribbean corals as a useful case study given that these ecosystems face a host of pressures, including extreme weather, overfishing, and inadequate sewage and pollution management. These coral diebacks are a disaster not only for the biodiverse inhabitants of the reefs, but also for the many communities who depend on them for food, income, coastal protection, and as a part of cultural identity.
Vodafone fell over in the UK this afternoon, with Register readers reporting that mobile coverage, fixed-line internet, and even the company’s own status page went down.
The outage began on Monday at 14.25 BST and peaked around 30 minutes later, when monitoring website Downdetector.co.uk logged almost 140,000 reports from customers unable to use the service. One Register reader, Steve Maxted, noted that “Vodafone is down. Hard! Everything. Landline internet, mobile internet, website… It’s not just DNS, as ping also fails.”
Ah, yes, that old standby – it isn’t DNS – it can’t be DNS – until it is. However, something more serious appears to have affected the telco. The Register contacted Vodafone for more details, but the company has yet to respond.
Another reader told us: “One of our multi-network roaming SIM providers just warned us that ‘we are currently aware of an ongoing issue with the Vodafone UK Network. This seems to be affecting a large number of consumer devices across the country.'”
Our reader’s phone registered a strong signal, but data appeared to be broken, and while an inbound call worked, “trying an outbound call caused my Pixel 7 to lock up completely and do a very slow reboot – first time I’ve seen that.”
Less than ideal. Readers also reported that broadband was affected by the outage, which is odd since we would have expected cellular and internet connectivity to be largely separate. Hopefully, there are no single points of failure lurking within Vodafone UK’s infrastructure.
Vodafone and Three recently announced a deal whereby customers of one could use the other’s network. At the time of writing, Three does not appear to have any issues, so it would have been a good time for a network switcheroo. However, as one reader observed, the problems did not seem to be with the signal strength but rather with something else within the system.
A spokesperson at Vodafone told us:
“This afternoon, for a short time, the Vodafone network had an issue affecting broadband, 4G and 5G services. 2G voice calls and SMS messaging were unaffected and the network is now recovering. We apologise for any inconvenience this caused our customers.”