[…] In a recent Science Advances paper, researchers report the creation of the smallest pixel ever, using optical antennas that convert radiation into focused energy bits. The pixel measures just 300 by 300 nanometers—around 17 times smaller than a conventional OLED pixel, but with a similar brightness.
To put the size into context, a display with an area of just one square millimeter could fit a resolution of 1920 x 1080 pixels using the new technology. The tiny pixel also glows on its own, making it potentially revolutionary for the next generation of smart, portable devices.
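For a rough sense of the arithmetic behind that claim, here is a quick back-of-the-envelope check using only the figures quoted above (a sketch, not the paper's own calculation):

```python
# Back-of-the-envelope check of the claim above (all figures taken from the article).
pixel_pitch_nm = 300                         # reported pixel size: 300 nm per side
pixels_per_mm = 1_000_000 // pixel_pitch_nm  # 1 mm = 1,000,000 nm -> ~3,333 pixels along each millimetre
pixels_per_mm2 = pixels_per_mm ** 2          # ~11.1 million pixels in one square millimetre

full_hd = 1920 * 1080                        # ~2.07 million pixels
print(f"{pixels_per_mm2:,} pixels fit in 1 mm^2")
print(f"A Full HD array uses {full_hd / pixels_per_mm2:.0%} of that square millimetre")
```

Roughly 11 million of the new pixels fit in a square millimetre, so a Full HD array would occupy only about a fifth of it, comfortably supporting the claim.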
[…]
the team identified a way to effectively block these unwanted structures, called filaments, from potentially destroying the pixel. Specifically, they fabricated a thin, insulating layer with a tiny circular opening at its center and layered it over a gold optical antenna.
The arrangement proved surprisingly effective in preventing filaments from forming. The optical antenna additionally helped focus electromagnetic energy and amplify the brightness, according to the paper. As a result, “even the first nanopixels were stable for two weeks under ambient conditions,” said Bert Hecht, study senior author and a physicist at the University of Würzburg, in the release.
That said, the system is still a prototype, with about 1% efficiency. However, the researchers noted that because this work eliminates one of the biggest challenges of scaling down pixels, the next steps should be slightly easier.
“With this technology, displays and projectors could become so small in the future that they can be integrated almost invisibly into devices worn on the body—from eyeglass frames to contact lenses,” the researchers added.
[…] Humanity has failed to limit global heating to 1.5C and must change course immediately, the secretary general of the UN has warned.
In his only interview before next month’s Cop30 climate summit, António Guterres acknowledged it is now “inevitable” that humanity will overshoot the target in the Paris climate agreement, with “devastating consequences” for the world.
He urged the leaders who will gather in the Brazilian rainforest city of Belém to realise that the longer they delay cutting emissions, the greater the danger of passing catastrophic “tipping points” in the Amazon, the Arctic and the oceans.
“Let’s recognise our failure,” he told the Guardian and Amazon-based news organisation Sumaúma. “The truth is that we have failed to avoid an overshooting above 1.5C in the next few years. And that going above 1.5C has devastating consequences. Some of these devastating consequences are tipping points, be it in the Amazon, be it in Greenland, or western Antarctica or the coral reefs.
He said the priority at Cop30 was to shift direction: “It is absolutely indispensable to change course in order to make sure that the overshoot is as short as possible and as low in intensity as possible to avoid tipping points like the Amazon. We don’t want to see the Amazon as a savannah. But that is a real risk if we don’t change course and if we don’t make a dramatic decrease of emissions as soon as possible.”
The planet’s past 10 years have been the hottest in recorded history. Despite growing scientific alarm at the speed of global temperature increases caused by the burning of fossil fuels – oil, coal and gas – the secretary general said government commitments have come up short.
Fewer than a third of the world’s nations (62 out of 197) have sent in their climate action plans, known as nationally determined contributions (NDCs) under the Paris agreement. The US under Donald Trump has abandoned the process. Europe has promised but so far failed to deliver. China, the world’s biggest emitter, has been accused of undercommitting.
António Guterres giving his speech at Cop29 in Baku, Azerbaijan, in November 2024. Photograph: Anatoly Maltsev/EPA
Guterres said the lack of NDC ambition means the Paris goal of 1.5C will be breached, at least temporarily: “From those [NDCs] received until now, there is an expectation of a reduction of emissions of 10%. We would need 60% [to stay within 1.5C]. So overshooting is now inevitable.”
He did not give up on the target though, and said it may still be possible to temporarily overshoot and then bring temperatures down in time to return to 1.5C by the end of the century, but this would require a change of direction at and beyond Cop30.
China’s new influencer law, which took effect on October 25, requires anyone creating content on sensitive topics, such as medicine, law, education, or finance, to hold official qualifications in those fields.
The Cyberspace Administration of China (CAC) says the goal is to fight misinformation and protect the public from false or harmful advice. But the move has also raised concerns about censorship and freedom of expression.
Under the new rules, influencers who talk about regulated topics must show proof of their expertise, such as a degree, professional license, or certificate. Platforms like Douyin (China’s version of TikTok), Bilibili, and Weibo must verify creators’ credentials and make sure their content includes proper citations and disclaimers.
For example, influencers must clearly state when information comes from studies or when a video includes AI-generated material. Platforms are also required to educate users about their responsibilities when sharing content online.
The CAC has gone even further by banning advertising for medical products, supplements, and health foods to prevent hidden promotions disguised as “educational” videos.
However, critics warn that the law could harm creativity and limit freedom of speech. By controlling who can talk about certain topics, they argue, China might not only block misinformation but also restrict independent voices and critical debate.
Many worry that “expertise” will be defined too narrowly, giving authorities more power to silence people who question official narratives or offer alternative views.
Others, however, welcomed the move, saying that the new law would allow for well-informed content on important and sensitive topics. Many argued that only professionals in a given field should be able to discuss such topics publicly, to prevent misinformation.
The rise of influencer culture has changed how people get information. Influencers are valued for being relatable and authentic, and being able to connect with audiences in ways traditional experts cannot. However, when these creators share misleading or inaccurate information, the effects can be serious, supporters of the new law argue.
Unfortunately, having people do "research" by watching one YouTube video and then telling others that vaccines don't work, or that 5G space bats cause Covid and that people want to inject chips into you, has proven to be an absolute disaster, one which prolonged a global pandemic and killed a lot of people.
These people should be jailed and it is a crying shame that a country like China is taking the lead in this, and not the EU.
[…]The programming non-profit’s deputy executive director Loren Crary said in a blog post today that the National Science Foundation (NSF) had offered $1.5 million to address structural vulnerabilities in Python and the Python Package Index (PyPI), but the Foundation quickly became dispirited with the terms of the grant it would have to follow.
“These terms included affirming the statement that we ‘do not, and will not during the term of this financial assistance award, operate any programs that advance or promote DEI [diversity, equity, and inclusion], or discriminatory equity ideology in violation of Federal anti-discrimination laws,'” Crary noted. “This restriction would apply not only to the security work directly funded by the grant, but to any and all activity of the PSF as a whole.”
To make matters worse, the terms included a provision that if the PSF was found to have violated that anti-DEI diktat, the NSF reserved the right to claw back any previously disbursed funds, Crary explained.
“This would create a situation where money we’d already spent could be taken back, which would be an enormous, open-ended financial risk,” the PSF director added.
The PSF’s mission statement enshrines a commitment to supporting and growing “a diverse and international community of Python programmers,” and the Foundation ultimately decided it wasn’t willing to compromise on that position, even for what would have been a solid financial boost for the organization.
“The PSF is a relatively small organization, operating with an annual budget of around $5 million per year, with a staff of just 14,” Crary added, noting that the $1.5 million would have been the largest grant the Foundation had ever received – but it wasn’t worth it if the conditions were undermining the PSF’s mission.
The PSF board voted unanimously to withdraw its grant application.
The non-profit would’ve used the funding to help prevent supply chain attacks; create a new automated, proactive review process for new PyPI packages; and make the project’s work easily transferable to other open-source package managers. […]
[…] Software provider AppZen said fake AI receipts accounted for about 14 per cent of fraudulent documents submitted in September, compared with none last year. Fintech group Ramp said its new software flagged more than $1mn in fraudulent invoices within 90 days.
About 30 per cent of US and UK financial professionals surveyed by expense management platform Medius reported they had seen a rise in falsified receipts following the launch of OpenAI’s GPT-4o last year.
“These receipts have become so good, we tell our customers, ‘do not trust your eyes’,” said Chris Juneau, senior vice-president and head of product marketing for SAP Concur, one of the world’s leading expense platforms, which processes more than 80mn compliance checks monthly using AI.
Several platforms attributed a significant jump in the number of AI-generated receipts to OpenAI's launch of GPT-4o's improved image generation model in March.
[…]Qualcomm said that both the AI200, which will go on sale in 2026, and the AI250, planned for 2027, can come in a system that fills up a full, liquid-cooled server rack.
That design puts Qualcomm in competition with Nvidia and AMD, which offer their graphics processing units, or GPUs, in full-rack systems that allow as many as 72 chips to act as one computer. AI labs need that computing power to run the most advanced models.
Qualcomm’s data center chips are based on the AI components in its smartphone chips, known as Hexagon neural processing units, or NPUs.
[…]
Qualcomm said its chips are focusing on inference, or running AI models, instead of training, which is how labs such as OpenAI create new AI capabilities by processing terabytes of data.
The chipmaker said that its rack-scale systems would ultimately cost less to operate for customers such as cloud service providers, and that a rack uses 160 kilowatts, which is comparable to the high power draw from some Nvidia GPU racks.
Malladi said Qualcomm would also sell its AI chips and other parts separately, especially for clients such as hyperscalers that prefer to design their own racks.
[…]
The company declined to comment on the price of the chips, cards or racks, or on how many NPUs could be installed in a single rack.
[…]
Qualcomm said its AI chips have advantages over other accelerators in terms of power consumption, cost of ownership, and a new approach to the way memory is handled. It said its AI cards support 768 gigabytes of memory, which is higher than offerings from Nvidia and AMD.
Paxos Trust Company, the blockchain infrastructure partner for PayPal’s stablecoin, has publicly admitted to a catastrophic “technical error” that led to the accidental creation of $300 trillion worth of PayPal USD (PYUSD) tokens. The mistake, which was identified and rectified within minutes, temporarily created a theoretical sum exceeding the entire global money supply. This incident immediately triggers intense scrutiny from financial regulators, including the New York Department of Financial Services (NYDFS), and casts a shadow over the operational integrity of the burgeoning stablecoin market. For PayPal, the error represents a significant reputational blow, challenging the perception of its carefully managed entry into digital assets.
This failure represents a stark vulnerability in the automated systems underpinning digital assets. While blockchain technology promises immutable and transparent transactions, Paxos is now confronting the reality that its risk management protocols were insufficient to prevent a near-infinite minting event. The company’s promise to be “much better than this” highlights the critical gap between theoretical blockchain security and the practical operational controls required for regulated financial services. This matters because it demonstrates that for institutional adoption to proceed, the infrastructure must be as foolproof as the legacy financial systems it seeks to augment or replace, not a source of existential, self-inflicted risk.
For fintech executives and digital asset custodians, this is a critical warning. The forward-looking insight is clear: the path to mainstream stablecoin adoption will be paved with relentless focus on operational controls and third-party audits. This event will force a sector-wide review of minting and burning mechanisms, likely leading to more conservative, multi-signature requirements and real-time monitoring mandates from regulators. The most trusted players will be those who can transparently demonstrate ironclad technical and procedural safeguards, turning this public failure into an industry-wide mandate for bulletproof operational excellence.
Part of what is not mentioned is that they revoked the value very quickly. The minting is one thing, but how trustworthy can any store of value be when the value can be revoked unilaterally, at any time, at the press of a button?
A major Microsoft outage has caused services like 365 and Azure cloud platform to go dark hours before the company was set to report its quarterly earnings, CNBC reports.
According to Downdetector, which monitors internet outages, tens of thousands of outage reports appear to have spiked just before noon on Wednesday, Oct. 29. Website, server connection and domain issues were the most reported problems. Xbox and even Minecraft are also affected by the outage, according to The Verge.
“We began experiencing Azure Front Door issues resulting in a loss of availability of some services. In addition, customers may experience issues accessing the Azure Portal,” Microsoft notes on its service’s status page. “Our investigation into the contributing factors and additional recovery workstreams continues.”
HSL’s website, app and Journey Planner are not working. The app reports that the service cannot be reached, while the website and the Journey Planner do not open at all.
Because of the disruption, tickets cannot currently be purchased through the app, for example.
Vilma Aho from HSL’s communications says that the problems are related to the Microsoft Azure cloud service’s wider telecommunications problem. According to Aho, a more specific reason is currently being investigated.
According to Aho, the disruption in the services began at around six o’clock on Wednesday evening. As of about half past seven, she could not yet estimate how long fixing the problem would take.
Amazon has published a detailed postmortem explaining how a critical fault in DynamoDB’s DNS management system cascaded into a day-long outage that disrupted major websites and services across multiple brands – with damage estimates potentially reaching hundreds of billions of dollars.
The incident began at 11:48 PM PDT on October 19 (7:48 UTC on October 20), when customers reported increased DynamoDB API error rates in the Northern Virginia US-EAST-1 Region. The root cause was a race condition in DynamoDB’s automated DNS management system that left an empty DNS record for the service’s regional endpoint.
The DNS management system comprises two independent components (for availability reasons): a DNS Planner that monitors load balancer health and creates DNS plans, and a DNS Enactor that applies changes via Amazon Route 53.
Amazon’s postmortem says the error rate was triggered by “a latent defect” within the service’s automated DNS management system.
The race condition occurred when one DNS Enactor experienced “unusually high delays” while the DNS Planner continued generating new plans. A second DNS Enactor began applying the newer plans and executed a clean-up process just as the first Enactor completed its delayed run. This clean-up deleted the older plan as stale, immediately removing all IP addresses for the regional endpoint and leaving the system in an inconsistent state that prevented any DNS Enactor from applying further automated updates.
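Amazon hasn't published code, but the failure mode is easy to illustrate. The toy sketch below (all names, data structures and the clean-up rule are invented for illustration, not AWS code) shows how a long-delayed enactor plus another enactor's stale-plan clean-up can leave the live record empty:

```python
# Toy sketch of the race described in the postmortem (illustrative only, not AWS code).
# A DNS "plan" is a set of IPs for the regional endpoint; applying a plan makes it the live record.

plans = {1: {"10.0.0.1", "10.0.0.2"},   # older plan (being applied by the delayed Enactor A)
         2: {"10.0.0.2", "10.0.0.3"}}   # newer plan (applied by Enactor B)
live = {"plan": None, "ips": set()}     # what resolvers see for the endpoint

def apply_plan(gen):
    """An Enactor applies a plan, making it the live DNS record."""
    live["plan"], live["ips"] = gen, set(plans[gen])

def cleanup(applied_gen):
    """Enactor clean-up: delete plans older than the one it just applied, assuming they are stale."""
    for gen in [g for g in plans if g < applied_gen]:
        del plans[gen]
        if live["plan"] == gen:          # the "stale" plan is actually the live one:
            live["ips"].clear()          # the endpoint is left with no IP addresses

apply_plan(2)   # Enactor B applies the newer plan...
apply_plan(1)   # ...but the long-delayed Enactor A finishes and overwrites it with the old plan
cleanup(2)      # B's clean-up then deletes plan 1 as stale, emptying the live record
print(live)     # {'plan': 1, 'ips': set()} -> an empty DNS answer, as in the outage
```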
Before manual intervention, systems connecting to DynamoDB, including customer traffic and internal AWS services, experienced DNS failures. This impacted EC2 instance launches and network configuration, the postmortem says.
The DropletWorkflow Manager (DWFM), which maintains leases for physical servers hosting EC2 instances, depends on DynamoDB. When DNS failures caused DWFM state checks to fail, droplets – the EC2 servers – couldn’t establish new leases for instance state changes.
After DynamoDB recovered at 2:25 AM PDT (9:25 AM UTC), DWFM attempted to re-establish leases across the entire EC2 fleet. The massive scale meant the process took so long that leases began timing out before completion, causing DWFM to enter “congestive collapse” requiring manual intervention until 5:28 AM PDT (12:28 PM UTC).
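The "congestive collapse" dynamic is easy to see with a toy model. All numbers below are invented purely to show the shape of the problem: when re-establishing every lease takes longer than a lease's lifetime, the backlog never drains on its own.

```python
# Toy model of congestive collapse (all numbers invented for illustration, not AWS figures).
from collections import deque

fleet, rate, ttl = 100_000, 200, 300   # hosts, lease renewals per second, lease lifetime in seconds
backlog = fleet                        # hosts currently without a valid lease
renewed = deque()                      # (expiry_time, count) for leases established so far

for t in range(2_000):
    done = min(backlog, rate)          # work the manager gets through this second
    backlog -= done
    if done:
        renewed.append((t + ttl, done))
    while renewed and renewed[0][0] <= t:      # leases that timed out rejoin the backlog
        backlog += renewed.popleft()[1]

print(f"backlog after {t + 1}s: {backlog}")
# Stays stuck around 40,000: fleet/rate (500 s) exceeds the lease lifetime (300 s),
# so leases expire as fast as new ones are established and the system never catches up.
```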
Next, Network Manager began propagating a huge backlog of delayed network configurations, causing newly launched EC2 instances to experience network configuration delays.
These network propagation delays affected the Network Load Balancer (NLB) service. NLB’s health checking subsystem removed new EC2 instances that failed health checks due to network delays, only to restore them when subsequent checks succeeded.
With EC2 instance launches impaired, dependent services including Lambda, Elastic Container Service (ECS), Elastic Kubernetes Service (EKS), and Fargate all experienced issues.
AWS has disabled the DynamoDB DNS Planner and DNS Enactor automation worldwide until safeguards can be put in place to prevent the race condition reoccurring.
In its apology, Amazon stated: “As we continue to work through the details of this event across all AWS services, we will look for additional ways to avoid impact from a similar event in the future, and how to further reduce time to recovery.”
The US Customs and Border Protection (CBP) submitted a new measure that allows it to photograph any non-US citizen who enters or exits the country for facial recognition purposes. According to a filing with the government’s Federal Register, CBP and the Department of Homeland Security are looking to crack down on threats of terrorism, fraudulent use of travel documents and anyone who overstays their authorized stay.
The filing detailed that CBP will “implement an integrated, automated entry and exit data system to match records, including biographic data and biometrics, of aliens entering and departing the United States.” The government agency already has the ability to request photos and fingerprints from anyone entering the country, but this new rule change would allow for requiring photos of anyone exiting as well. These photos would “create galleries of images associated with individuals, including photos taken by border agents, and from passports or other travel documents,” according to the filing, adding that these galleries would be compared to live photos at entry and exit points.
These new requirements are scheduled to go into effect on December 26, but CBP will need some time to implement a system to handle the extra demand. According to the filing, the agency said “a biometric entry-exit system can be fully implemented at all commercial airports and sea ports for both entry and exit within the next three to five years.”
To combat mosquito-borne illnesses that claim hundreds of thousands of lives each year, scientists have enlisted an unexpected partner: a fungus that gives off a floral scent.
By exploiting mosquitoes’ attraction to flowers, an international team of researchers engineered a new strain of Metarhizium fungus that releases a sweet aroma similar to real blooms. The modified fungus draws in the insects and infects them, ultimately killing them.
The scientists were inspired by natural fungi that emit a pleasant chemical known as longifolene, which they discovered could attract mosquitoes. Building on that idea, they created a fungus that acts like a lethal perfume for the pests, offering a promising tool against malaria, dengue, and other deadly diseases that are becoming increasingly resistant to chemical pesticides. Their findings were published in Nature Microbiology on October 24, 2025.
[…]
According to St. Leger, the floral-scented fungus provides an easy and accessible method for controlling mosquito populations. The spores can simply be placed in containers indoors or outdoors, where they gradually release longifolene over several months. When mosquitoes come into contact with the fungus, they become infected and die within a few days. In laboratory tests, the fungus wiped out 90 to 100% of mosquitoes, even in environments filled with competing scents from people and real flowers.
[…]
“The fungus is completely harmless to humans as longifolene is already commonly used in perfumes and has a long safety record,” St. Leger said. “This makes it much safer than many chemical pesticides. We’ve also designed the fungus and its containers to target mosquitoes specifically rather than any other insects and longifolene breaks down naturally in the environment.”
In addition, unlike chemical alternatives that mosquitoes have gradually become resistant to, this biological approach may be nearly impossible for mosquitoes to outsmart or avoid.
“If mosquitoes evolve to avoid longifolene, that could mean they’ll stop responding to flowers,” St. Leger explained. “But they need flowers as a food source to survive,
[…]
What also makes this new fungal technology particularly promising is how practical and affordable it is to produce. Other forms of Metarhizium are already commonly cultivated around the world on cheap materials like chicken droppings, rice husks and wheat scraps that are readily available after harvest. The affordability and simplicity of the fungus could be key to reducing mosquito disease-related deaths in many parts of the world, especially in poorer countries in the global south.
[…]
St. Leger and his colleagues are now testing the fungus in larger outdoor trials to prepare it for regulatory review.
[…]
Story Source:
Materials provided by University of Maryland.
Journal Reference:
Dan Tang, Jiani Chen, Yubo Zhang, Xingyuan Tang, Xinmiao Wang, Chaonan Yu, Xianxian Cheng, Junwei Zhang, Wenqi Shi, Qing Zhen, Shuxing Liu, Yizhou Huang, Jiali Ning, Guoding Zhu, Meichun Zhang, Juping Hu, Etienne Bilgo, Abdoulaye Diabate, Sheng-Hua Ying, Jun Cao, Raymond J. St. Leger, Jianhua Huang, Weiguo Fang. Engineered Metarhizium fungi produce longifolene to attract and kill mosquitoes. Nature Microbiology, 2025; DOI: 10.1038/s41564-025-02155-9
Sung-Jan Lin at National Taiwan University and his colleagues became intrigued by the role of fat tissue in hair growth several years ago during an experiment on mice. “We unexpectedly discovered that, after skin irritation, the size of skin adipocytes [fat cells] quickly shrinks before hair starts to regrow,” says Lin. “We speculated that adipocytes might release fatty acids via a process called lipolysis to fuel hair regrowth.”
To understand the process better, they have repeated the experiment, taking a closer look at the cells involved. First, they induced eczema on shaved mice by applying an irritating compound to parts of their back. Within 10 days, the team observed the mice’s hair follicles were in an active growth phase, and these areas had visible hair growth. This didn’t occur on the areas without eczema or on other mice that were shaved but weren’t made to develop the skin condition.
The researchers noted that this seemed to happen due to immune cells called macrophages moving into the layer of fat beneath the mice’s skin, signalling fat cells to release fatty acids that were absorbed by hair follicle stem cells. This then made the cells produce more mitochondria, which provide them with energy, resulting in hair growth. This aligns with previous research that found plucking hair sends an immune signal to nearby hair follicles, prompting them to grow more.
Eczema isn’t typically linked to hair growth in people, but other forms of skin irritation, like having a plaster cast applied to a broken limb, have been associated with hair growing excessively.
Next, Lin and his team wanted to know whether the presence of fatty acids alone, without prior irritation, stimulates hair growth, so they created serums composed of different fatty acids dissolved in alcohol. These were applied to areas of the skin of shaved mice that didn’t have any irritation, which were later compared with other areas where the serum wasn’t applied and to other shaved mice. “We found that only monounsaturated fatty acids rich in adipose tissues, such as oleic acids and palmitoleic acids, are effective in promoting hair regeneration when topically applied to skin,” says Lin.
He says that the researchers, who have patented the serum, have also seen promising results when applying it to human hair follicles in the lab and they now plan to test different dosages of the serum on people’s scalps.
Applying the fatty acids oleic acid (C18:1) and palmitoleic acid (C16:1) led to visible hair growth, compared with applying ethanol (EtOH) or other fatty acids
Dr. Kang-Yu Tai and Dr. Sung-Jan Lin, National Taiwan University
Lin doesn’t anticipate that the treatment will have any severe side effects. “Oleic acids and palmitoleic acids are naturally derived fatty acids. They are not only rich in our adipose tissues, but also in many plant oils, so they can be safely used,” he says. “I personally applied these fatty acids, dissolved in alcohol, on my thighs for three weeks and I found it promoted hair regrowth.”
“The key thing is that it has not yet been validated in human skin and animal models can be very different, especially when it comes to follicular biology,” says Christos Tziotzios at King’s College London. Similar serums are also in development, with one, based on plant extracts, boosting hair growth on people within weeks.
Nevertheless, Tziotzios says the latest study adds to our understanding of hair loss and growth. “We knew that adipocytes played a role in hair follicle genesis, but this is the first time I’ve seen it used for regeneration,” he says. It may also explain why some people experience hair growth after microneedling, he says, which involves fine needles being rolled over the scalp, making tiny punctures that trigger an immune response.
Amsterdam, Rome, Paris, 23 October 2025 – Airbus (stock exchange symbol: AIR), Leonardo (Borsa Italiana: LDO) and Thales (Euronext Paris: HO) have signed a Memorandum of Understanding (“MoU”) aimed at combining their respective space activities into a new company.
By joining forces, Airbus, Leonardo and Thales aim to strengthen Europe’s strategic autonomy in space, a major sector that underpins critical infrastructure and services related to telecommunications, global navigation, earth observation, science, exploration and national security. This new company also intends to serve as the trusted partner for developing and implementing national sovereign space programmes.
This new company will pool, build and develop a comprehensive portfolio of complementary technologies and end-to-end solutions, from space infrastructure to services (excluding space launchers). It will accelerate innovation in this strategic market, in order to create a unified, integrated and resilient European space player, with the critical mass to compete globally and grow on the export markets.
[…]
Airbus will contribute with its Space Systems and Space Digital businesses, coming from Airbus Defence and Space.
Leonardo will contribute with its Space Division, including its shares in Telespazio and Thales Alenia Space.
Thales will mainly contribute with its shares in Thales Alenia Space, Telespazio, and Thales SESO.
The combined entity will employ around 25,000 people across Europe. With an annual turnover of about €6.5bn (end of 2024, pro-forma) and an order backlog representing more than three years of projected sales, this new company will form a robust, innovative and competitive entity worldwide.
Ownership of the new company will be shared among the parent companies, with Airbus, Leonardo and Thales owning respectively 35%, 32.5% and 32.5% stakes. It will operate under joint control, with a balanced governance structure among shareholders.
Apple could face claims estimated at around £1.5 billion after it lost a collective case in the UK arguing that its closed systems for apps resulted in overcharging businesses and consumers.
The ruling from a Competition Appeal Tribunal responded to the case brought on behalf of 36 million UK iPhone and iPad users, both consumers and enterprise customers.
Apple said it disagreed with the ruling [PDF] and planned to appeal.
The court found Apple had imposed charges for its iOS app distribution services, and that its in-app payment service charged developers a headline commission rate of 30 percent.
In a unanimous judgment, the court found Apple overcharged developers as a result of its behavior in the iOS app distribution services market and the iOS in-app payment services market. There was also an overcharge resulting from the extent to which developers passed on the costs to iPhone and iPad users.
The court found those represented in the case, led by academic Dr Rachael Kent, could be eligible for 8 percent interest on damages awarded.
Speaking to the BBC, Kent said the decision was a “landmark victory, not only for App Store users, but for anyone who has ever felt powerless against a global tech giant.”
In a statement, Apple said the ruling’s view of its software marketplace was mistaken. It argued the App Store was good for UK businesses and consumers because it offered a space for developers to sell their work and somewhere users could choose from millions of software products.
“This ruling overlooks how the App Store helps developers succeed and gives consumers a safe, trusted place to discover apps and securely make payments. The App Store faces vigorous competition from many other platforms – often with far fewer privacy and security protections,” the tech giant said.
Which is quite funny for Apple to say, because it fights tooth and nail to ensure that there is no competition for the App Store. Even when the EU tells Apple it must enable alternate app stores or payment providers, it rolls around the floor like a child in a tantrum hoping to avoid the inevitable:
The feds on Thursday charged alleged mafia associates and current and former National Basketball Association players and coaches with running rigged poker games and illegal sports betting.
Starting around 2019, a group of alleged mafia associates began operating a high-stakes poker con at several locations around Manhattan, according to an indictment filed by the US Attorney for the Eastern District of New York. The card cheating scheme relied on X-ray tables, rigged card shufflers, and glasses capable of reading hidden card markings.
Authorities say they arrested 31 individuals across 11 states, including members and associates of the Bonanno, Gambino, and Genovese organized crime families of La Cosa Nostra.
Chauncey Billups, the head coach of the Portland Trail Blazers, and former Cleveland Cavaliers player and assistant coach Damon Jones were also arrested.
Billups’ attorney Chris Heywood told ESPN in a statement that his client did not do what the government claims and that Billups intends to fight the charges.
“For years, these individuals allegedly hosted illegal poker games where they used sophisticated technology and enlisted current and former NBA players to cheat people out of millions of dollars,” said NYPD Commissioner Jessica S. Tisch in a statement.
“This complex scheme was so far reaching that it included members from four of the organized crime families, and when people refused to pay because they were cheated, these defendants did what organized crime has always done: they used threats, intimidation, and violence.”
As described in the indictment, the victimized card players believed they were participating in fair but illegal poker games against other players. However, the games were rigged, resulting in a loss of at least $7 million since the scheme’s inception. The NBA celebrities supposedly served as “Face Cards” to attract players.
“The defendants and their co-conspirators, who constituted the remaining participants purportedly playing in the poker games, worked together on cheating teams … that used advanced wireless technologies to read the cards dealt in each poker hand and relay that information to the defendants and co-conspirators participating in the illegal poker games,” the indictment claims.
The cheating scheme allegedly employed compromised shuffling machines that could read the cards in the deck and transmit this information to an off-site relayer who messaged the details back to a player at the table, referred to as the “Quarterback” or “Driver.” This individual then used prearranged signals to communicate with co-conspirators at the table, all to win poker games against unsuspecting victims.
The defendants also allegedly employed “a chip tray analyzer (essentially, a poker chip tray that also secretly read all cards using hidden cameras), an X-ray table that could read cards face down on the table, and special contact lenses or eyeglasses that could read pre-marked cards.”
[…]
Online poker games have long presented a risk of cheating and player collusion, but this incident reaffirms that in-person games, where collusion has always been a possibility, can also be subverted through technology.
“I think the sophistication in the cheating technologies is far greater than the sophistication in detection, and it’s not very common for people to even have expensive detection technology,” said Rubin. “You’re not, as a player, equipped to compete in a way with the people that have the resources to cheat like that.”
Major Las Vegas casinos like the MGM Grand or Caesars Palace, Rubin said, put a lot of money and effort into protecting games at their facilities and have an interest in preventing cheating scandals from tarnishing their brands. “You’re probably safe playing in big, brand name casinos,” he said. “But at the end of the day, you know, it’s poker and if somebody wants to try hard enough and spends money to do it, they may find a way to cheat.”
[…]
The second of the two indictments alleged that six defendants, including Miami Heat guard Terry Rozier and former NBA assistant coach and player Damon Jones (named in the first indictment), colluded to share inside information and to alter in-game behavior to influence the outcome of bets on NBA games.
New NATO member Sweden is boosting support to Ukraine, with a letter of intent signed this week on the sale of up to 150 Gripen fighter jets. Shortly after joining NATO in March 2024 and bringing an end to two centuries of military non-alignment, Sweden approved a €989 million military support package that included Archer self-propelled artillery systems and long-range drones.
Its latest contribution to the war effort is Glimt, an innovative project launched by the Swedish Defence Research Agency (FOI) earlier this year. Glimt is an open platform that relies on the theory of “crowd forecasting”: a method of making predictions based on surveying a large and diverse group of people and taking an average. “Glimt” is a Swedish word for “a glimpse” or “a sudden insight”. The theory posits that the average of all collected predictions produces correct results with “uncanny accuracy”, according to the Glimt website. Such “collective intelligence” is used today for everything from election results to extreme weather events, Glimt said.
[…]
Group forecasting allows for a broad collection of information while avoiding the cognitive bias that often characterises intelligence services. Each forecaster collects and analyses the available information differently to reach the most probable scenario and can add a short comment to explain their reasoning. The platform also encourages discussion between members so they can compare arguments and alter their positions.
Available in Swedish, French and English, the platform currently has 20,000 registered users; each question attracts an average of 500 forecasters. Their predictions are then fed into statistical algorithms that cross-reference the data, in particular the relevance of the answers each user has provided. The most reliable users have a stronger influence on the results, which reinforces the reliability of the collective intelligence.
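FOI has not published Glimt's algorithms, but the core idea of reliability-weighted aggregation can be sketched in a few lines. The user names, error scores and weighting scheme below are invented for illustration; this is not the Glimt implementation.

```python
# Minimal sketch of reliability-weighted crowd forecasting (illustrative, not Glimt's algorithm).
# Each forecaster gives a probability for a yes/no question; forecasters with a better
# track record (lower past error) get more weight in the aggregate.

forecasts = {"anna": 0.70, "bert": 0.40, "cora": 0.85}    # hypothetical users and their predictions
past_error = {"anna": 0.10, "bert": 0.30, "cora": 0.15}   # mean historical error per user (0 = perfect)

weights = {u: 1.0 / (past_error[u] + 1e-6) for u in forecasts}  # more reliable -> heavier weight
aggregate = sum(weights[u] * forecasts[u] for u in forecasts) / sum(weights.values())

print(f"crowd forecast: {aggregate:.2f}")  # pulled toward the historically accurate forecasters
```

A plain average of the three forecasts would be 0.65; weighting by track record nudges the aggregate toward the users who have been right more often.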
When the microcomputer first landed in homes some forty years ago, it came with a simple freedom—you could run whatever software you could get your hands on. Floppy disk from a friend? Pop it in. Shareware demo downloaded from a BBS? Go ahead! Dodgy code you wrote yourself at 2 AM? Absolutely. The computer you bought was yours. It would run whatever you told it to run, and ask no questions.
Today, that freedom is dying. What’s worse is that it’s happening so gradually that most people haven’t noticed we’re already halfway into the coffin.
There are always security risks when running code from untrusted sources. The stakes are higher these days when our computers are the gateways to our personal and financial lives.
The latest broadside in the war against platform freedom has been fired. Google recently announced new upcoming restrictions on APK installations. Starting in 2026, Google will tighten the screws on sideloading, making it increasingly difficult to install applications that haven’t been blessed by the Play Store’s approval process. It’s being sold as a security measure, but it will make it far more difficult for users to run apps outside the official ecosystem. There is a security argument to be made, of course, because suspect code can cause all kinds of havoc on a device loaded with a user’s personal data. At the same time, security concerns have a funny way of aligning perfectly with ulterior corporate motives.
It’s a change of tack for Google, which has always taken the more permissive approach to its smartphone platform. Contrast that with Apple, which has sold the iPhone as a fully locked-down device since day one. Google’s position was that if you owned your phone, you could do what you wanted with it. Now, it seems the company is changing its mind ever so slightly. There will still be workarounds, like signing up as an Android developer and handing over all your personal ID to Google, but it’s a loss for freedom whichever way you look at it.
Beginnings
Sony put a great deal of engineering into the PlayStation to ensure it would only read Sony-approved discs. Modchips sprang up as a way to get around that restriction, albeit primarily so owners could play cheaper pirated games. Credit: Libreleah, CC BY-SA 4.0
The walled garden concept didn’t start with smartphones. Indeed, video game consoles were a bit of a trailblazer in this space, with manufacturers taking this approach decades ago. The moment gaming became genuinely profitable, console manufacturers realized they could control their entire ecosystem. Proprietary formats, region locking, and lockout chips were all ways to ensure companies could levy hefty licensing fees from developers. They locked down their hardware tighter than a bank vault, and they did it for one simple reason—money. As long as the manufacturer could ensure the console wouldn’t run unapproved games, developers would have to give them a kickback for every unit sold.
By and large, the market accepted this. Consoles were single-purpose entertainment machines. Nobody expected to run their own software on a Nintendo, after all. The deal was simple—you bought a console from whichever company, and it would only play whatever they said was okay. The vast majority of consumers didn’t care about the specifics. As long as the console in question had a decent library, few would complain.
Nintendo created the 10NES copy protection system to ensure its systems would only play games approved by the company itself, in an attempt to exert quality control after the 1983 North American video game crash. Credit: Evan-Amos, public domain
There was always an underground—adapters to work around region locks, and bootleg games that relied on various hacks—with varying popularity over the years. Often, it was high prices that drove this innovation—think of the many PlayStation mod chips sold to play games off burnt CDs to avoid paying retail.
At the time, this approach largely stayed within the console gaming world. It didn’t spread to actual computers because computers were tools. You didn’t buy a PC to consume content someone else curated for you. You bought it to do whatever you wanted—write a novel, make a spreadsheet, play games, create music, or waste time on weird hobby projects. The openness wasn’t a bug, or even something anybody really thought about. It was just how computers were. It wasn’t just a PC thing, either—every computer on the market let you run what you wanted! And it wasn’t only desktops and laptops; the nascent tablets and PDAs of the 1990s operated in just the same way.
Then came the iPhone, and with it, the App Store. Apple took the locked-down model and applied it to a computer you carry in your pocket. The promise was that you’d only get apps that were approved by Apple, with the implicit guarantee of a certain level of quality and functionality.
Apple is credited with pioneering the modern smartphone, and in turn, the walled garden that is the App Store. Credit: Apple
It was a bold move, and one that raised eyebrows among developers and technology commentators. But it worked. Consumers loved having access to a library of clean and functional apps, built right into the device. Meanwhile, they didn’t really care that they couldn’t run whatever kooky app some random on the Internet had dreamed up.
Apple sold the walled garden as a feature. It wasn’t ashamed or hiding the fact—it was proud of it. It promised apps with no viruses and no risks; a place where everything was curated and safe. The iPhone’s locked-down nature wasn’t a restriction; it was a selling point.
But it also meant Apple controlled everything. Every app paid Apple’s tax, and every update needed Apple’s permission. You couldn’t run software Apple didn’t approve, full stop. You might have paid for the device in your pocket, but you had no right to run what you wanted on it. Someone in Cupertino had the final say over that, not you.
When Android arrived on the scene, it offered the complete opposite concept to Apple’s control. It was open source, and based on Linux. You could load your own apps, install your own ROMs and even get root access to your device if you wanted. For a certain kind of user, that was appealing. Android would still offer an application catalogue of its own, curated by Google, but there was nothing stopping you just downloading other apps off the web, or running your own code.
Sadly, over the years, Android has been steadily walking back that openness. The justifications are always reasonable on their face. Security updates need to be mandatory because users are terrible at remembering to update. Sideloading apps need to come with warnings because users will absolutely install malware if you let them just click a button. Root access is too dangerous because it puts the security of the whole system and other apps at risk. But inch by inch, it gets harder to run what you want on the device you paid for.
Windows Watches and Waits
The walled garden has since become a contagion, with platforms outside the smartphone space considering the tantalizing possibilities of locking down. Microsoft has been testing the waters with the Microsoft Store for years now, with mixed results. Windows 10 tried to push it, and Windows 11 is trying harder. The store apps are supposedly more secure, sandboxed, easier to manage, and straightforward to install with the click of a button.
Microsoft has tried multiple times to sell versions of Windows that are locked to exclusively run apps from the Microsoft Store. Thus far, these attempts have been commercial failures.
Microsoft hasn’t pulled the trigger on fully locking down Windows. It’s flirted with the idea, but has seen little success. Windows RT and Windows 10 S were both locked to only run software signed by Microsoft—each found few takers. Desktop Windows remains stubbornly open, capable of running whatever executable you throw at it, even if it throws up a few more dialog boxes and question marks with every installer you run these days.
How long can this last? One hopes a great while yet. A great many users still expect a computer—a proper one, like a laptop or desktop—to run whatever mad thing they tell it to. However, there is an increasing userbase whose first experience of computing was in these locked-down tablet and smartphone environments. They aren’t so demanding about little things like proper filesystem access or the ability to run unsigned code. They might not blink if that goes away.
For now, desktop computing has the benefit of decades of tradition built into it. Professional software, development tools, and specialized applications all depend on the ability to install whatever you need. Locking that down would break too many workflows for too many important customers. Masses of scientific users would flee to Linux the moment their obscure datalogger software couldn’t afford an official license to run on Windows. Industrial users would baulk at having to rely on a clumsy Microsoft application store when bringing up new production lines.
Apple had the benefit that it was launching a new platform with the iPhone; one for which there were minimal expectations. In comparison, Microsoft would be climbing an almighty mountain to make the same move on the PC, where the culture is already so established. Apple could theoretically make moves in that direction with OS X and people would perhaps be less surprised, but it would still mean a major shift in customer expectations of the product.
Here’s what bothers me most: we’re losing the idea that you can just try things with computers. That you can experiment. That you can learn by doing. That you can take a risk on some weird little program someone made in their spare time. All that goes away with the walled garden. Your neighbour can’t just whip up some fun gadget and share it with you without signing up for an SDK and paying developer fees. Your obscure game community can’t just write mods and share content because everything’s locked down. So much creativity gets squashed before it even hits the drawing board because it’s just not feasible to do it.
It’s hard to know how to fight this battle. So much ground has been lost already, and big companies are reluctant to listen to the esoteric wishes of the hackers and makers who actually care about the freedom to squirt whatever they like through their own CPUs. Ultimately, though, you can still vote with your wallet. Don’t let Personal Computing become Consumer Computing, where you’re only allowed to run code that paid the corporate toll. Make sure the computers you’re paying for are doing what you want, not just what the executives approved of for their own gain. It’s your computer, it should run what you want it to!
[…] “Dietary modifications could be a new, natural and cost-effective approach to achieve better sleep,[ …]
Previous studies have shown that getting too little sleep can drive people toward unhealthier eating patterns, often higher in fat and sugar. Yet, despite how sleep influences well-being and productivity, scientists have known far less about the reverse — how diet affects sleep itself.
While earlier research linked greater fruit and vegetable intake with people reporting better sleep, this study was the first to show a same-day relationship between diet and objectively measured sleep quality.
[…]
The scientists analyzed a measure called “sleep fragmentation,” which captures how often a person wakes up or shifts between lighter and deeper stages of sleep during the night.
What the Researchers Found
The results showed that daily eating habits were strongly connected to how well participants slept that night. Those who ate more fruits and vegetables — and consumed more complex carbohydrates such as whole grains — experienced longer periods of deep, undisturbed sleep.
According to the team’s analysis, people who met the CDC recommendation of five cups of fruits and vegetables per day could see an average 16 percent improvement in sleep quality compared with those who ate none.
“16 percent is a highly significant difference,” Tasali said. “It’s remarkable that such a meaningful change could be observed within less than 24 hours.”
[…]
Story Source:
Materials provided by University of Chicago Medical Center.
Journal Reference:
Hedda L. Boege, Katherine D. Wilson, Jennifer M. Kilkus, Waveley Qiu, Bin Cheng, Kristen E. Wroblewski, Becky Tucker, Esra Tasali, Marie-Pierre St-Onge. Higher daytime intake of fruits and vegetables predicts less disrupted nighttime sleep in younger adults. Sleep Health, 2025; 11 (5): 590 DOI: 10.1016/j.sleh.2025.05.003
Researchers at Trinity College Dublin have uncovered what they call a “universal thermal performance curve” (UTPC), a pattern that appears to apply to every living species on Earth. This curve describes how organisms respond to changes in temperature, and it seems to hold true across the entire spectrum of life. According to the scientists, the UTPC effectively “shackles evolution” because no species appears capable of escaping its influence on how temperature affects biological performance.
[…]
Rising Heat and Falling Performance
The study revealed a consistent trend in how organisms respond to warmth:
Performance increases gradually as temperature rises until reaching a peak (the optimum point).
Beyond this optimum, performance drops sharply.
When temperatures climb too high, overheating can cause physiological breakdown or death.
These findings, published in the journal PNAS, suggest that species may face greater limits than previously thought when adapting to global climate change. As most regions continue to warm, the window of viable performance for many species could shrink.
One Curve, Many Temperatures
Andrew Jackson, Professor in Zoology in Trinity’s School of Natural Sciences, and co-author, said: “Across thousands of species and almost all groups of life including bacteria, plants, reptiles, fish and insects, the shape of the curve that describes how performance changes with temperature is very similar. However, different species have very different optimal temperatures, ranging from 5°C to 100°C, and their performance can vary a lot depending on the measure of performance being observed and the species in question.”
“That has led to countless variations on models being proposed to explain these differences. What we have shown here is that all the different curves are in fact the same exact curve, just stretched and shifted over different temperatures. And what’s more, we have shown that the optimal temperature and the critical maximum temperature at which death occurs are inextricably linked.”
“Whatever the species, it simply must have a smaller temperature range at which life is viable once temperatures shift above the optimum.”
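The article does not give the paper's actual equation, but the "stretched and shifted" idea is easy to illustrate with a generic curve. The functional form and the species parameters below are invented purely for illustration; the point is only that once each species' temperature axis is shifted by its own optimum and rescaled by its own width, the curves line up on one shared shape.

```python
# Illustrative sketch only: the paper's real model isn't given here, so this uses an invented
# asymmetric curve (gentle rise to the optimum, steep crash beyond it) to show the idea.
import numpy as np

def tpc(T, T_opt, width):
    """Generic thermal performance curve: slow rise below the optimum, sharp fall above it."""
    x = (T - T_opt) / width                                   # shift by the optimum, stretch by the width
    return np.clip(np.where(x < 0, np.exp(-x**2), 1 - 3 * x**2), 0, None)

species = [(20, 8), (37, 5), (75, 10)]                        # invented (T_opt, width) pairs in deg C
for T_opt, width in species:
    T = np.linspace(T_opt - 2 * width, T_opt + width, 7)      # sample around each species' own optimum
    print(np.round(tpc(T, T_opt, width), 2))                  # identical rows: one shared normalized shape
```

All three rows print the same numbers, even though the species "live" at very different temperatures, which is the sense in which the different curves are the same curve stretched and shifted.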
[…]
Searching for the Exceptions
“The next step is to use this model as something of a benchmark to see if there are any species or systems we can find that may, subtly, break away from this pattern. If we find any, we will be excited to ask why and how they do it — especially given forecasts of how our climate is likely to keep warming in the next decades.”
Story Source:
Materials provided by Trinity College Dublin.
Journal Reference:
Jean-François Arnoldi, Andrew L. Jackson, Ignacio Peralta-Maraver, Nicholas L. Payne. A universal thermal performance curve arises in biology and ecology. Proceedings of the National Academy of Sciences, 2025; 122 (43) DOI: 10.1073/pnas.2513099122
Networking researcher Christoff Visser has found that Apple devices cause Wi-Fi networks to “jitter” due to traffic generated by the Apple Wireless Direct Link (AWDL) tech that powers the peer-to-peer AirDrop filesharing tool.
Visser presented his findings on Tuesday at the RIPE 91 conference, the biannual internetworking event organized by RIPE NCC, the regional internet registry for Europe, the Middle East and parts of Central Asia. In his talk, titled “Apple Wireless Direct Link: Apple’s Network Magic or Misery,” Visser explained that while using a new iPad he often encountered what he described as “very strange rhythmic stuttering” as he streamed audio to the device.
He used the Moonlight streaming test tool to investigate and found 20 millisecond latency, but with a 25 millisecond variance he felt was oddly high for the uncontested environment that is a local network. He next used Steam’s network testing tool, and found latency regularly bounced between three and 90 milliseconds. PING commands produced similar results, as did tests on different devices.
At this point, Visser felt confident his hardware and applications were not the reason for his streams stuttering.
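The same kind of measurement is easy to reproduce at home. The sketch below is not Visser's tooling; it simply wraps the system ping command (assuming a Unix-like ping and a reachable host on the local network) and reports the mean latency alongside the jitter that periodic channel hopping would show up in.

```python
# Rough sketch of quantifying Wi-Fi latency jitter with the system ping command
# (illustrative only; assumes a Unix-like ping and a reachable host on the local network).
import re
import statistics
import subprocess

host = "192.168.1.1"  # hypothetical local gateway or streaming host
out = subprocess.run(["ping", "-c", "50", host], capture_output=True, text=True).stdout
rtts = [float(m) for m in re.findall(r"time=([\d.]+)", out)]  # per-packet round-trip times in ms

print(f"min/mean/max: {min(rtts):.1f}/{statistics.mean(rtts):.1f}/{max(rtts):.1f} ms")
print(f"jitter (stdev): {statistics.stdev(rtts):.1f} ms")     # periodic latency spikes inflate this
```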
Visser, who works at Japan’s IIJ Research Lab, dug into the situation and found AWDL constantly listens for requests to use AirDrop, and prefers to use certain “social” Wi-Fi channels – channel 6 for 2.4 GHz networks, and channels 44 and 149 for 5 GHz Wi-Fi.
As a networking engineer, Visser chose to use empty channels.
“It’s a big mistake,” he told the conference. “What ends up happening is that if you are not in one of these social channels, you get this periodic Wi-Fi channel swapping where it goes to the social channel, listens in [if] anybody wants to talk to it and swaps back to create very rhythmic stuttering.”
Visser suggested one way to avoid the issue is not to use AWDL but acknowledged that doing so means users of Apple devices will have to do without AirDrop and other Cupertino tricks like using an iPad as an external monitor for a Mac or mirroring an iPhone screen.
He doesn’t think cutting users off from those services is practical.
“There’s approximately over 1.5 billion other iPhone users in the world and are you really going to tell your users in your network ‘Don’t use the features on these Apple devices’. It’s not really a solution.
“The other option is to do the Apple way of networking, so for the best experience you use the same Wi-Fi channels as everybody else, or you will suffer from jitter at some point.”
He ended his talk by expressing his concerns about Apple’s ecosystem.
“There’s a lot of convenience, as I described,” he said. “The question is really: Is this convenience worth disruption?”
His answer was “For most things sure, it doesn’t matter too much.”
But he feels it will matter to more people in future.
“Cloud gaming and remote gaming is growing bigger and bigger and they are trying to push high fidelity, bigger bit rate, if you are trying to do 4k HDR at 120 FPS, yes you are going to start to feel these delays and packet loss more and more.”
“It makes me uncomfortable because it really promotes bad network practices like not using the best channels to actually improve your end user experience,” he added.
He therefore grudgingly recommended using the Wi-Fi channels Apple uses, and expressed his hope that any folks from ISPs in the audience can learn from his experience so that if their customers experience network jitters they now have an explanation.
Data-center developers are running into a severe power bottleneck as they rush to build bigger facilities to capitalize on generative AI’s potential. Normally, they would power these centers by connecting to the grid or building a power plant onsite. However, they face major delays in either securing gas turbines or in obtaining energy from the grid.
At the Data Center World Power show in San Antonio in October, natural-gas power provider ProEnergy revealed an alternative—repurposed aviation engines. According to Landon Tessmer, vice president of commercial operations at ProEnergy, some data centers are using his company’s PE6000 gas turbines to provide the power needed during the data center’s construction and during its first few years of operation. When grid power is available, these machines either revert to a backup role, supplement the grid, or are sold to the local utility.
“We have sold 21 gas turbines for two data-center projects amounting to more than 1 gigawatt,” says Tessmer. “Both projects are expected to provide bridging power for five to seven years, which is when they expect to have grid interconnection and no longer need permanent behind-the-meter generation.”
[…]
It is a common and long-established practice for gas-turbine original equipment manufacturers (OEMs) like GE Vernova and Siemens Energy to convert a successful aircraft engine for stationary electric-power generation applications. Known as aeroderivative gas turbines[…] “It takes a lot to industrialize an aviation engine and make it generate power,” […] To make it suitable for power generation, it needed an expanded turbine section to convert engine thrust into shaft power, a series of struts and supports to mount it on a concrete deck or steel frame, and new controls. Further modifications typically include the development of fuel nozzles that let the machine run on natural gas rather than aviation fuel, and a combustor that minimizes the emission of nitrogen oxides, a major pollutant.
[…]
ProEnergy buys and overhauls used CF6-80C2 engine cores—the central part of the engine where combustion occurs—and matches them with newly manufactured aeroderivative parts made either by ProEnergy or its partners. After assembly and testing, these refurbished engines are ready for a second life in electric-power generation, where they provide 48 megawatts, enough to power a small-to-medium data center (or a town of perhaps 20,000 to 40,000 households). According to Tessmer, approximately 1,000 of these aircraft engines are expected to be retired over the next decade, so there’s no shortage of them. A large data center may have demand that exceeds 100 MW, and some of the latest data centers being designed for AI are more than 1 GW.
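As a rough back-of-the-envelope check of those figures (a sketch only; the 1.2 to 2.4 kW average household draw is assumed purely for illustration):

```python
# Rough arithmetic around the PE6000's 48 MW output.
# The 1.2-2.4 kW average household draw is an assumed range for illustration.
import math

turbine_mw = 48
print(math.ceil(100 / turbine_mw), "turbines for a 100 MW data center")   # -> 3
print(math.ceil(1000 / turbine_mw), "turbines for a 1 GW campus")          # -> 21
print(f"households served per turbine: {48_000 / 2.4:,.0f} to {48_000 / 1.2:,.0f}")
```

The 21-turbine count lines up with the two projects Tessmer describes, which together amount to just over 1 gigawatt.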
[…]
ProEnergy sells the turbines in standard two-turbine blocks. Each block consists of the gas turbines, generators, and a host of other gear, such as systems to cool the air entering the turbine on hot days to boost performance, selective catalytic reduction systems to reduce emissions, and various electrical systems.
[…] The Milky Way is anything but static. It rotates and it wobbles, and new observations from the European Space Agency’s Gaia space telescope now reveal another motion, a giant wave moving outward from the galaxy’s centre.
For roughly a century, astronomers have known that stars orbit the galactic centre, and Gaia has mapped their speeds and paths. Since the 1950s, researchers have recognized that the Milky Way’s disc is warped. In 2020, Gaia showed that this disc also wobbles over time, similar to a spinning top.
It is now clear that a vast ripple influences stellar motions across distances of tens of thousands of light-years from the Sun. Like waves spreading from a stone dropped into a pond, this stellar ripple spans a large stretch of the Milky Way’s outer disc.
[Figure: Gaia’s view of the Milky Way’s newly discovered wave, seen face-on (left) and side-on (right), with stars lying above the warped disc marked in red and stars lying below it in blue. Credit: ESA/Gaia/DPAC, S. Payne-Wardenaar, E. Poggio et al (2025)]
The unexpected galactic ripple is illustrated in the figure above. Here, the positions of thousands of bright stars are shown in red and blue, overlaid on Gaia’s maps of the Milky Way.
In the left image, we look at our galaxy from ‘above’. On the right, we see across a vertical slice of the galaxy and look at the wave side-on. This perspective reveals that the ‘left’ side of the galaxy curves upward and the ‘right’ side curves downward (this is the warp of the disc). The newly discovered wave is indicated in red and blue: in red areas, the stars lie above, and in blue areas, the stars lie below the warped disc of the galaxy.
[…]
The Scale of the Wave
From these maps, we can see that the wave stretches over a huge portion of the galactic disc, affecting stars at least 30,000 to 65,000 light-years from the galactic centre (for comparison, the Milky Way is around 100,000 light-years across).
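To picture what a “vertical corrugation” means in practice, here is a toy model: stars are displaced above or below the plane of the disc by an amount that varies periodically with distance from the galactic centre. The amplitude, wavelength, and phase below are invented for illustration and are not the values fitted in the paper.

```python
# Toy model of a vertical corrugation: stars displaced above/below the disc plane
# in a pattern that repeats with galactocentric radius. Amplitude, wavelength and
# phase are invented for illustration and are NOT the values fitted in the paper.
import math

def vertical_offset(radius_kly: float, amplitude_kly: float = 0.6,
                    wavelength_kly: float = 30.0, phase: float = 0.8) -> float:
    """Vertical displacement (in thousands of light-years) at a given radius."""
    return amplitude_kly * math.sin(2 * math.pi * radius_kly / wavelength_kly + phase)

for r in range(30, 70, 5):  # the radial range quoted above, 30-65 thousand light-years
    z = vertical_offset(r)
    side = "above" if z > 0 else "below"
    print(f"R = {r} kly: {abs(z):.2f} kly {side} the warped disc")
```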
The great wave could also be related to a smaller-scale rippling motion seen 500 light-years from the Sun and extending over 9000 light-years, the so-called Radcliffe Wave.
“However, the Radcliffe Wave is a much smaller filament, and located in a different portion of the galaxy’s disc compared to the wave studied in our work (much closer to the Sun than the great wave). The two waves may or may not be related. That’s why we would like to do more research,” adds lead author Eloisa Poggio.
“The upcoming fourth data release from Gaia will include even better positions and motions for Milky Way stars, including variable stars like Cepheids. This will help scientists to make even better maps, and thereby advance our understanding of these characteristic features in our home galaxy,” says Johannes Sahlmann, ESA’s Gaia Project Scientist.
Reference: “The great wave – Evidence of a large-scale vertical corrugation propagating outwards in the Galactic disc” by E. Poggio, S. Khanna, R. Drimmel, E. Zari, E. D’Onghia, M. G. Lattanzi, P. A. Palicio, A. Recio-Blanco and L. Thulasidharan, 14 July 2025, Astronomy & Astrophysics. DOI: 10.1051/0004-6361/202451668
Geographic atrophy due to age-related macular degeneration (AMD) is the leading cause of irreversible blindness and affects more than 5 million persons worldwide. No therapies to restore vision in such persons currently exist. The photovoltaic retina implant microarray (PRIMA) system combines a subretinal photovoltaic implant and glasses that project near-infrared light to the implant in order to restore sight to areas of central retinal atrophy.
Methods
We conducted an open-label, multicenter, prospective, single-group, baseline-controlled clinical study in which the vision of participants with geographic atrophy and a visual acuity of at least 1.2 logMAR (logarithm of the minimum angle of resolution) was assessed with PRIMA glasses and without PRIMA glasses at 6 and 12 months. The primary end points were a clinically meaningful improvement in visual acuity (defined as ≥0.2 logMAR) from baseline to month 12 after implantation and the number and severity of serious adverse events related to the procedure or device through month 12.
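For context, logMAR is the base-10 logarithm of the minimum angle of resolution, and each 0.1 logMAR step corresponds to one five-letter line on an ETDRS chart. A short sketch of what the study’s 0.2 logMAR threshold translates to (the helper below is our own illustration, not part of the trial protocol):

```python
# What the >=0.2 logMAR threshold means in familiar terms.
# Each 0.1 logMAR corresponds to one line (five letters) on an ETDRS chart.
def describe_improvement(delta_logmar: float) -> str:
    lines = delta_logmar / 0.1
    factor = 10 ** delta_logmar   # ratio of minimum resolvable angles
    return (f"{delta_logmar:.1f} logMAR ~= {lines:.0f} chart lines, "
            f"a {factor:.2f}x gain in resolvable detail")

print(describe_improvement(0.2))   # the study's threshold for a meaningful improvement
```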
Results
A total of 38 participants received a PRIMA implant, of whom 32 were assessed at 12 months. Of the 6 participants who were not assessed, 3 had died, 1 had withdrawn, and 2 were unavailable for testing. Among the 32 participants who completed 12 months of follow-up, the PRIMA system led to a clinically meaningful improvement in visual acuity from baseline in 26 (81%; 95% confidence interval, 64 to 93; P<0.001). Using multiple imputation to account for the 6 participants with missing data, we estimated that 80% (95% CI, 66 to 94; P<0.001) of all participants would have had a clinically meaningful improvement at 12 months. A total of 26 serious adverse events occurred in 19 participants. Twenty-one of these events (81%) occurred within 2 months after surgery, of which 20 (95%) resolved within 2 months after onset. The mean natural peripheral visual acuity after implantation was equivalent to that at baseline.
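As a sanity check on the headline figure, a standard exact (Clopper-Pearson) interval for 26 responders out of 32 assessed lands close to the reported 64 to 93 percent. The abstract does not state which interval method the authors used, so the sketch below is only an approximation of their analysis.

```python
# Reproducing the 26/32 proportion with an exact (Clopper-Pearson) 95% interval.
# This is a plausibility check, not the authors' stated method.
from statsmodels.stats.proportion import proportion_confint

responders, assessed = 26, 32
low, high = proportion_confint(responders, assessed, alpha=0.05, method="beta")
print(f"{responders / assessed:.0%} improved (95% CI {low:.0%} to {high:.0%})")
```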
Conclusions
In this study involving 38 participants with geographic atrophy due to AMD, the PRIMA system restored central vision and led to a significant improvement in visual acuity from baseline to month 12. (Funded by Science Corporation and the Moorfields National Institute for Health and Care Research Biomedical Research Centre; PRIMAvera ClinicalTrials.gov number, NCT04676854.)
Amazon Web Services (AWS) is currently experiencing a major outage that has taken down online services, including Amazon, Alexa, Snapchat, Fortnite, and more. The AWS status checker is reporting that multiple services are “impacted” by operational issues, and that the company is “investigating increased error rates and latencies for multiple AWS services in the US-EAST-1 Region” — though outages are also impacting services in other regions globally.
Users on Reddit are reporting that the Alexa smart assistant is down and unable to respond to queries or complete requests, and in my own experience, I found that routines like pre-set alarms are not functioning. The AWS issue also appears to be impacting platforms running on its cloud network, including Perplexity, Airtable, Canva, and the McDonald’s app. The cause of the outage hasn’t been confirmed, and it’s unclear when regular service will be restored.
“Perplexity is down right now,” Perplexity CEO Aravind Srinivas said on X. “The root cause is an AWS issue. We’re working on resolving it.”
The AWS dashboard first reported issues affecting the US-EAST-1 Region at 3:11AM ET. “We are actively engaged and working to both mitigate the issue and understand root cause. We will provide an update in 45 minutes, or sooner if we have additional information to share,” Amazon said in an update published at 3:51AM ET.
AWS provides cloud-computing and API services to major websites, popular apps, and platforms across the world, which means users have been experiencing issues across a huge swath of the internet as the UK starts its working week.
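Outages like this cannot be fixed from the client side, but the symptom Amazon describes, increased error rates and latencies, is what retry and timeout settings are meant to absorb. Below is a minimal sketch of a boto3 client configured with adaptive retries; the region and the listing call are placeholders, not a workaround for the incident.

```python
# Minimal sketch: a boto3 client configured to tolerate elevated error rates with
# adaptive retries and explicit timeouts. This does not fix an outage; it only makes
# transient throttling and errors less disruptive. Names and region are placeholders.
import boto3
from botocore.config import Config

resilient = Config(
    retries={"max_attempts": 10, "mode": "adaptive"},  # client-side backoff and rate limiting
    connect_timeout=5,
    read_timeout=10,
)

s3 = boto3.client("s3", region_name="us-west-2", config=resilient)  # a region other than us-east-1
response = s3.list_buckets()   # requires valid AWS credentials in the environment
print([bucket["Name"] for bucket in response.get("Buckets", [])])
```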
[…]
We will be keeping an updated list of the websites, apps, games, and more that are impacted. It includes:
Windows Recovery Environment (RE), as the name suggests, is a built-in set of tools inside Windows that allow you to troubleshoot your computer, including booting into the BIOS or starting the computer in safe mode. It’s a crucial piece of software that has now, unfortunately, been rendered useless (for many) as part of the latest Windows update. A new bug discovered in Windows 11’s October update, KB5066835, stops USB keyboards and mice from working inside the recovery environment, so you cannot interact with the recovery UI at all.
This problem has already been recognized and highlighted by Microsoft, which confirmed that a fix is on the way. Any plugged-in peripherals will continue to work just fine inside the actual operating system, but as soon as you go into Windows RE, your USB keyboard and mouse will become unresponsive. It’s important to note that if your PC fails to start up for any reason, it defaults to the recovery environment to, you know, recover and diagnose whatever issues might’ve been preventing it from booting normally.
Note that those hanging onto old keyboards and mice with PS/2 connectors seem to be unaffected by this latest Windows software gaffe.
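For anyone unsure whether their machine has the affected build, here is a small detection-only sketch that asks PowerShell’s Get-HotFix cmdlet whether KB5066835 is installed. It assumes a Windows host with PowerShell available and does not install or remove anything.

```python
# Minimal sketch (Windows only): check whether the KB5066835 update is installed,
# using PowerShell's Get-HotFix cmdlet. Detection only; it changes nothing.
import subprocess

result = subprocess.run(
    ["powershell", "-NoProfile", "-Command", "Get-HotFix -Id KB5066835"],
    capture_output=True, text=True,
)
if result.returncode == 0 and "KB5066835" in result.stdout:
    print("KB5066835 is installed; Windows RE USB input may be affected until Microsoft's fix lands.")
else:
    print("KB5066835 not found on this machine.")
```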