The Linkielist

Linking ideas with the world


Innovative aviation projects cleared for take off – UK invests paltry £4.4m in 14 projects.

An investment of £4.4 million across 14 innovative aviation projects will support areas such as the NHS, emergency services and nature restoration in the UK.

Innovate UK, in partnership with the Department for Transport, has announced the latest group of projects to receive funding from the Future Flight Programme.

The programme encourages the innovative use of aviation technologies to support a variety of challenges in the UK, including:

  • medical supply chains
  • protecting national infrastructure
  • agricultural restoration

Project ambitions

Eight of the projects are for strategic growth, to demonstrate progress towards commercialisation.

These focus on real-world operations proving use cases in a variety of sectors, from agriculture to healthcare, and provide tangible insights to support regulatory development in key areas.

Six of the projects are regional demonstrators, which have been funded to enable local areas across the UK to plan for the adoption and integration of drones.

This includes passenger-carrying eVTOL (electric vertical take-off and landing) aircraft and zero-emission conventional aircraft.

[…]

Full list of funded projects

Strategic Growth projects

Advanced Logistics BVLOS UAV Mission (ALBUM)

Partners include:

  • ARC Aerosystems
  • Highlands and Islands Transport Partnership
  • Acroflight
  • Scubatx

This project will test a large, uncrewed aerial vehicle (UAV) in Beyond Visual Line of Sight (BVLOS) operations.

This is a key step towards commercialisation of ARC’s heavy-cargo UAV for mid-mile logistics, carrying payloads of up to 100kg over distances of up to 400km.

It aims to revolutionise logistics and medical transport in remote areas, such as the Scottish Highlands and Islands.

ALIAS II: Regulatory Policy Concepts Enabling Integrated Traffic Management (ITM)

Partners include:

  • Volant Autonomy
  • Snowdonia Aerospace Centre
  • Planefinder
  • Draken Europe
  • DroneCloud

This project aims to demonstrate an ITM system that will allow drones, air taxis, and traditional crewed aircraft to safely operate together in the same airspace.

It will use a combination of simulations and real-world flight trials of an advanced Detect and Avoid capability at the Snowdonia Aerospace Centre.

Beyond Restoration

Partners include:

  • Autospray Systems
  • National Trust
  • Woodland Trust
  • North Pennines National Landscape
  • Skypointe

This project aims to deploy a fleet of drones to apply lime, native seed mixes, fertiliser and tree seeds across ecologically significant sites in England, Wales and Scotland.

It offers an innovative, scalable alternative to manual spreading, using heavy-lift drones operating BVLOS to deliver restoration materials over remote and degraded land.

Containment with Confidence

Partners include:

  • Flare Bright
  • RPAS Heroes
  • National Gas Transmission
  • Satellite Applications Catapult

This project aims to help National Gas improve how it monitors the safety of its pipelines by replacing periodic helicopter inspections with a more efficient and environmentally friendly drone-based system.

By moving from helicopters to BVLOS drones, this project will enable National Gas to reduce its carbon emissions and demonstrate that drone-based systems can be harnessed to improve UK energy security and infrastructure monitoring.

“Dragon’s Heart”: A Welsh Medical Drone Delivery Network (MDDN)

Partners include:

  • Snowdonia Aerospace
  • Volant Autonomy
  • Skyports Deliveries
  • SLiNK-TECH

This project is building a Welsh MDDN to increase NHS operational flexibility and improve connectivity for all health and social care providers across Wales.

Drone as a First Responder

Partners include:

  • Idroneinnovations
  • SLiNK-TECH
  • Leading Edge Power
  • Thames Valley Police
  • Hampshire and Isle of Wight Constabulary

This project is developing advanced automated drone systems to improve the safety, speed and cost efficiency of infrastructure inspections, emergency response and public safety operations.

Its modular, adaptable platform will help organisations such as emergency services and infrastructure operators integrate drones into routine workflows more easily.

London Health Bridge Growth

Partners include:

  • Apian
  • Matternet UK

This project is an expansion of an existing medical drone delivery service trial, aiming to significantly increase the number of medical samples delivered by drone and create a multi-site logistics network for the NHS.

Scaling BVLOS Operations for Critical National Infrastructure (Project SOCNI)

Partners include:

  • DroneCloud
  • NATS
  • Network Rail
  • Transport for Wales
  • Railscape
  • British Transport Police

This project will create a structured approach to designing, deploying and testing safety mitigations across national infrastructures, to improve incident management and asset inspection in a real-world rail environment.

Regional Demonstrator projects

Future Air: Southwest

Partners include:

  • Daedal Research
  • Somerset Council
  • Isles of Scilly Skybus

This project aims to overcome the significant obstacles to using eVTOLs and zero-emission conventional take-off and landing aircraft for commercial purposes.

It will look at all the challenges at once, including regulation, aircraft operations, financing, and social acceptance.

By simultaneously evaluating the full range of challenges, it will develop solutions that enable scalable BVLOS drone capabilities.

OXCAM AAM Corridor

Partners include:

  • Skyports Infrastructure
  • Bristow Helicopters
  • NATS
  • Vertical Aerospace Group
  • Oxfordshire County Council

This project aims to demonstrate the commercial and operational viability of Advanced Air Mobility (AAM), like passenger and cargo services using eVTOLs, between Oxford and Cambridge.

This will test and identify real-world, commercially viable uses for this new technology, addressing the social and economic needs of the area.

The project will culminate in live demo flights of Vertical Aerospace’s VX4 aircraft from Skyports’ Bicester Vertiport.

Regional Offshore Cargo Drone Demonstrator

Partners include:

  • Flowcopter
  • AYR Logistics
  • Angus Council

This project aims to demonstrate how a new heavy-lift drone can be used for logistics and maintenance at offshore wind farms.

The project tackles a major problem for the wind energy industry: the cost and difficulty of transporting equipment in bad weather.

By using a heavy-lift drone, the project will provide a safer, faster, and cheaper alternative, which is crucial for the efficient operation and maintenance of the UK’s offshore wind farms.

Project RESCUE

Partners include:

  • Somerset Council
  • Limosaero
  • Land and Minerals Consulting

This project is a collaboration between Somerset Council, emergency services and specialised drone companies.

Its main goal is to develop a minimum viable product for a sustainable drone-based service.

The project will focus on environmental monitoring to allow for rapid response to critical weather events.

It will be tested in real-world scenarios, including monitoring floods and assisting with search and rescue operations.

SATE: Highlands and Islands Regional Pathway to Sustainable Aviation

Partners include:

  • Highlands and Islands Transport Partnership
  • University of the Highlands and Islands
  • Urban Foresight
  • European Marine Energy Centre
  • Windracers
  • Skyports Deliveries
  • Hybrid Air Vehicles
  • Streamline Shipping Agencies
  • Cormorant SEAplanes
  • Cranfield Aerospace Solutions
  • Loganair
  • Regional and Business Airports Group
  • Shetland Islands Council

This project will develop a Regional Sustainable Aviation Strategy that outlines a clear roadmap for how new technologies can be put into service in the area.

It will not just focus on the technology itself but will also calculate the financial and social benefits that better air connectivity will bring to the region.

Project URBAN ASCENT

Partners include:

  • Coventry City Council
  • Skyfarer
  • Coventry University
  • SLiNK-TECH
  • Manufacturing Technology Centre
  • Altitude Angel
  • Odys Aviation

This project, based in Coventry and the West Midlands, aims to create a scalable plan for integrating drones and eVTOLs into UK cities.

By addressing the challenges of integrating drones and air taxis into a complex urban environment, it will lay the foundation for new services that can provide significant economic and social benefits.

This includes faster and more efficient transport of goods and people within cities.

Source: Innovative aviation projects cleared for take off – UKRI

£4.4m spread across 14 projects means they won’t really have enough money to make it. Hopefully this is the start of iterative funding though.

Windows MR Headsets Revived By Free ‘Oasis’ SteamVR Driver

A lone Microsoft employee released an unofficial SteamVR driver for Windows MR headsets, called Oasis, re-enabling their use on Windows 11.

The Oasis driver arrives just under one year after Microsoft started rolling out Windows 11 24H2, which completely removed support for Windows MR. This meant Acer, Asus, Dell, HP, Lenovo, and Samsung PC VR headset owners could no longer use their headset at all, not even on Steam, since Windows MR had its own runtime and only supported SteamVR through a shim.

Matthieu Bucchianeri’s Oasis solves this problem, for free. Oasis is a native SteamVR driver for Windows MR headsets, adding direct SteamVR support. No other software is required, except for SteamVR itself.

[…]

The Oasis driver includes full support for headset tracking, controller tracking, haptics, buttons, triggers, sticks, and battery state, as well as basic monoscopic camera passthrough. It also relays the IPD value from Reverb and Samsung Odyssey headsets, and even the eye tracking from HP Reverb G2 Omnicept Edition.

The only headset feature that isn’t supported is its built-in Bluetooth. Instead, you’ll need to use your PC’s own Bluetooth, whether built in or added with a USB or PCI-E adapter.

UploadVR’s Don Hopper has tested and confirmed that Oasis works with his HP Reverb G2, turning what had become a paperweight into a fully functional PC VR headset again.

Oasis Driver for Windows Mixed Reality is available for free on Steam. Make sure to read the full installation and setup instructions on GitHub, as you’ll need to pair your controllers via Bluetooth and “unlock” both the headset and controllers before use.

[…]

Source: Windows MR Headsets Revived By Free ‘Oasis’ SteamVR Driver

Toxic Fumes Are Leaking Into Airplanes, Sickening Crews and Passengers

[…] After months of worsening symptoms, Chesson was diagnosed with a traumatic brain injury and permanent damage to her peripheral nervous system caused by the fumes she inhaled. Her doctor, Robert Kaniecki, a neurologist and consultant to the Pittsburgh Steelers, said in an interview that the effects on her brain were akin to a chemical concussion and “extraordinarily similar” to those of a National Football League linebacker after a brutal hit. “It’s impossible not to draw that conclusion,” he said.

Kaniecki said he has treated about a dozen pilots and over 100 flight attendants for brain injuries after exposure to fumes on aircraft over the last 20 years. Another was a passenger, a frequent flier with Delta’s top-tier rewards status who was injured in 2023.

Chesson’s experience is one dramatic instance among thousands of so-called fume events reported to the Federal Aviation Administration since 2010, in which toxic fumes from a jet’s engines leak unfiltered into the cockpit or cabin. The leaks occur due to a design element in which the air you breathe on an aircraft is pulled through the engine. The system, known as “bleed air,” is used in almost every modern commercial jetliner except Boeing’s 787.

The rate of incidents has accelerated in recent years, a Wall Street Journal investigation has found, driven in large part by leaks on Airbus’s bestselling A320 family of jets—the aircraft Chesson was flying.

The Journal’s reporting—based on a review of more than one million FAA and National Aeronautics and Space Administration reports, thousands of pages of documents and research papers and more than 100 interviews—shows that aircraft manufacturers and their airline customers have played down health risks, successfully lobbied against safety measures, and made cost-saving changes that increased the risks to crew and passengers.

The fumes—sometimes described as smelling of “wet dog,” “Cheetos” or “nail polish”—have led to emergency landings, sickened passengers and affected pilots’ vision and reaction times midflight, according to official reports.

Most odors in aircraft aren’t toxic, and neither are all vapors. The effects are often fleeting or mild, or produce no symptoms at all.

But they can also be longer-lasting and severe, according to doctors, medical records and affected crew members.

The cause of fume events isn’t a mystery. Airbus and Boeing, the two biggest aircraft manufacturers, have acknowledged that malfunctions can lead to oil and hydraulic fluid leaking into the engines or power units and vaporizing at extreme heat. This results in the release of unknown quantities of neurotoxins, carbon monoxide and other chemicals into the air.

[…]

Manufacturers, regulators and airlines have said these types of incidents are too infrequent, levels of contamination too low and scientific research on lasting health risks too inconclusive to warrant a comprehensive fix. In some cases, they have attributed reported health-effects from fume exposure to factors including hyperventilation, jet lag, psychological stress, mass hysteria and malingering.

Internally, industry staffers have flagged their own fears about the toxic makeup of engine oils.

[…]

The individual airlines mentioned in this article noted their commitment to the safety of their passengers and crew, and said they follow the protocols established by the FAA and the manufacturers of their planes.

[…]

The FAA on its website says the incidents are “rare” and cites a 2015 review that estimated a rate of “less than 33 events per million aircraft departures.” That rate would suggest a total of about 330 fume events on U.S. airlines last year.

In reality, the FAA received more than double that number of reports of fume events in 2024 from the 15 biggest U.S. airlines alone, according to the Journal’s analysis of service difficulty reports for flights between 2010 and early 2025. The rate has soared in recent years. In 2014, the Journal found about 12 fume events per million departures. By 2024, the rate had jumped to nearly 108. (Read more about how the Journal conducted its analysis.)
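
For a back-of-the-envelope sense of how those per-million rates translate into yearly totals, here is a quick sketch; the roughly 10 million annual U.S. departures figure is inferred from the article's own 33-per-million-to-330 arithmetic, not an official count.

```python
# Back-of-the-envelope conversion of the per-million rates quoted above.
# The ~10 million annual U.S. departures figure is inferred from the article's
# own arithmetic (33 per million -> "about 330 events"), not an official count.
annual_departures = 10_000_000  # assumed

def yearly_events(rate_per_million, departures=annual_departures):
    """Turn a per-million-departure rate into an absolute yearly count."""
    return rate_per_million * departures / 1_000_000

print(yearly_events(33))    # ~330: the total implied by the FAA-cited 2015 estimate
print(yearly_events(12))    # ~120: the Journal's 2014 rate
print(yearly_events(108))   # ~1,080: the Journal's 2024 rate
print(108 / 12)             # 9.0: roughly a ninefold increase over the decade
```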

In a statement, the FAA attributed the increase in part to a change in its guidance for reporting fume events, although that revision was only implemented in November of last year.

[…]

The FAA doesn’t have a formal definition of a fume event and the service reports often don’t indicate the severity. In its review, the Journal mirrored the industry’s practice of relying on crew reports of specific odors and associated maintenance reports. Changes in crew awareness could impact reporting rates.

The actual rate is likely far higher, as crews don’t always report incidents to their airlines, which likewise don’t report all instances to the FAA. A review of internal data by the airline lobby group International Air Transport Association calculated a total rate of 800 fume events per million departures in the U.S., according to an internal document from a member carrier.

The Journal’s analysis suggests that the growth is driven by the world’s bestselling aircraft: the Airbus A320. In 2024, among the three largest U.S. airlines with mixed fleets, the rate of reports on A320s had increased to more than seven times the rate on their Boeing 737 aircraft.

[…]

The Journal’s analysis shows incidents began climbing in 2016, the year Airbus started delivering its new A320neo, what would become the world’s fastest-selling model. It boasted a new generation of fuel-efficient engines, including one that was plagued by rapidly degrading seals meant to keep oil from leaking into the air supply.

Under pressure from airlines that complained fume events were keeping aircraft out of service for days at a time, Airbus loosened maintenance rules, according to a review of internal documents and people familiar with the changes.

For example, under the old guidelines, Airbus typically required an inspection and deep-clean after a fume event. Under the revised rules, if the smell wasn’t strong and hadn’t occurred in the last 10 days, airlines wouldn’t need to take immediate action.

[…]

Source: Toxic Fumes Are Leaking Into Airplanes, Sickening Crews and Passengers

This sounds like the kind of health-risk denial that went on (and still goes on) at tobacco companies and around head-injury risk in impact sports.

Futurehome smart hub owners must pay new $117 subscription or lose access. Or use a different app (link at the bottom)

Smart home device maker Futurehome is forcing its customers’ hands by suddenly requiring a subscription for basic functionality of its products.

Launched in 2016, Futurehome’s Smarthub is marketed as a central hub for controlling Internet-connected devices in smart homes. For years, the Norwegian company sold its products, which also include smart thermostats, smart lighting, and smart fire and carbon monoxide alarms, for a one-time fee that included access to its companion app and cloud platform for control and automation. As of June 26, though, those core features require a 1,188 NOK (about $116.56) annual subscription fee, turning the smart home devices into dumb ones if users don’t pay up.

“You lose access to controlling devices, configuring automations, modes, shortcuts, and energy services,” a company FAQ page says.

You also can’t get support from Futurehome without a subscription. “Most” paid features are inaccessible without a subscription, too, says the FAQ from Futurehome, which claims its products are in 38,000 households.

After June 26, customers had four weeks to continue using their devices as normal without a subscription. That grace period recently ended, and users now need a subscription for their smart devices to work properly.

[…]

The indebted company promised customers that the subscription fee would allow it to provide “better functionality, more security, and higher value in the solution you have already invested in,” reported Elektro247, a Norwegian news site covering the electrical industry, according to a Google-provided translation.

The problem is that customers expected a certain level of service and functionality when they bought Futurehome devices. And as of press time, Futurehome’s product pages don’t make the newfound subscription requirements apparent. Futurehome’s recent bankruptcy is also a reminder of the company’s instability, making further investments questionable.

[…]

Futurehome has fought efforts to crack its firmware, with CEO Øyvind Fries telling Norwegian consumer tech website Tek.no, per a Google translation, “It is regrettable that we now have to spend time and resources strengthening the security of a popular service rather than further developing functionality for the benefit of our customers.”

Futurehome’s move has become a common strategy among Internet of Things companies, including smart home hub maker Wink, many of which are still struggling to build sustainable businesses that work long-term without killing features or upcharging customers.

Source: Futurehome smart hub owners must pay new $117 subscription or lose access – Ars Technica

And you see this happening a lot with all kinds of companies. The thing is, these products are supposed to work without contacting a central server – the company selling you this is not supposed to be seeing or handling your data at all. They don’t need to, as it’s all in your home and the functionalities don’t require huge compute power.

Fortunately, the Futurehome Home Assistant add-on (on GitHub) is a complete drop-in replacement for the official Futurehome app, with support for all device types compatible with the Futurehome hub (see its FAQ for more details) – which means you can operate the stuff you bought without the subscription.
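
To illustrate the local-control point above: a minimal sketch of commanding a hub over the LAN with a plain HTTP call, no vendor cloud in the loop. The address, endpoint and payload are hypothetical placeholders, not Futurehome's or the Home Assistant add-on's actual API.

```python
# A minimal sketch of local-only control: one HTTP call to a hub on your own LAN,
# no vendor cloud round-trip. Address, endpoint and payload are hypothetical
# placeholders, NOT Futurehome's or the Home Assistant add-on's actual API.
import requests  # pip install requests

HUB = "http://192.168.1.50:8080"  # hypothetical hub address on the local network

resp = requests.post(
    f"{HUB}/api/devices/livingroom-lamp/command",  # hypothetical endpoint
    json={"state": "on", "brightness": 80},        # hypothetical payload
    timeout=5,
)
resp.raise_for_status()
print("Lamp switched on without leaving the LAN.")
```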

Better Airplane Navigation Using Quantum Sensing and a Map of the Earth’s Crust

Airbus’s Silicon Valley-based innovation center, Acubed, and artificial intelligence and quantum-focused Google spinout SandboxAQ are on a mission to demonstrate an alternative to GPS navigation. It involves a small, toaster-size box, lasers, a single GPU chip and a deep knowledge of the Earth’s magnetic field.
The technology, known as quantum sensing, has been in development for decades at a number of companies and is now inching closer to commercialization in aerospace.

SandboxAQ’s MagNav quantum-sensing device.

Acubed recently took MagNav, SandboxAQ’s quantum-sensing device, on a large-scale test, flying with it for more than 150 hours across the continental U.S. on a general aviation aircraft that Acubed calls its “flight lab.”
MagNav uses quantum physics to measure the unique magnetic signatures at various points in the Earth’s crust. An AI algorithm matches those signatures to an exact location. During the test, Acubed found it could be a promising alternative to GPS in its ability to determine the plane’s location throughout the flights.
“The hard part was proving that the technology could work,” said SandboxAQ Chief Executive Jack Hidary, adding that more testing and certifications will be required before the technology makes it out of the testing phase. SandboxAQ will target defense customers first but then also commercial flights, as a rise in GPS tampering makes the need for a backup navigation system on flights more urgent.
[…]
The quantum sensing device is completely analog, making it essentially unjammable and unspoofable, SandboxAQ’s Hidary said. Unlike GPS, it doesn’t rely on any digital signals that are vulnerable to hacking. The information it provides is generated entirely from the device on board, and leverages magnetic signatures from the Earth, which cannot be faked, he said.
Quantum sensing will likely not replace all the applications of traditional GPS, but it can be a reliable backup and help pilots actually know when GPS is being spoofed, Hidary said.
How it works
Inside SandboxAQ’s device, essentially a small black box, a laser fires a photon at an electron, forcing it to absorb that photon. When the laser turns off, that electron goes back to its ground state, and releases the photon. As the photon is released, it gives off a unique signature based on the strength of the Earth’s magnetic field at that particular location.
Every square meter of the world has a unique magnetic signature based on the specific way charged iron particles in the Earth’s molten core magnetize the minerals in its crust. SandboxAQ’s device tracks that signature, feeds it into an AI algorithm that runs on a single GPU, compares the signature to existing magnetic signature maps, and returns an exact location.
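
As a toy illustration of the map-matching idea described above (not SandboxAQ's actual AI pipeline), here is a brute-force sketch that compares a short, noisy track of measured field values against a made-up anomaly map and returns the best-matching grid position.

```python
# Toy magnetic-anomaly map matching: compare a short, noisy track of measured
# field values against a pre-surveyed map and return the best-matching position.
# This is NOT SandboxAQ's algorithm (MagNav feeds the signal to an AI model on a
# GPU); the map and measurements here are made up.
import numpy as np

rng = np.random.default_rng(0)
anomaly_map = rng.normal(0.0, 50.0, size=(200, 200))        # anomaly values in nT

true_row, true_col = 120, 85
track = anomaly_map[true_row, true_col:true_col + 10]       # values along a short flight track
measured = track + rng.normal(0.0, 2.0, size=track.shape)   # add sensor noise

best_pos, best_err = None, np.inf
for r in range(anomaly_map.shape[0]):
    for c in range(anomaly_map.shape[1] - len(measured)):
        err = np.sum((anomaly_map[r, c:c + len(measured)] - measured) ** 2)
        if err < best_err:
            best_pos, best_err = (r, c), err

print("estimated:", best_pos, "true:", (true_row, true_col))  # they should match
```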

The flight paths used in the tests of SandboxAQ’s quantum-sensing device, MagNav.

The Federal Aviation Administration requires that while planes are en route they must be able to pinpoint their exact location within 2 nautical miles (slightly more than 2 miles). During Acubed’s testing, it found that MagNav could pinpoint location within 2 nautical miles 100% of the time, and could even pinpoint location within 550 meters, or a bit more than a quarter of a nautical mile, 64% of the time.
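
A quick unit check of the figures quoted above (1 nautical mile = 1,852 m; 1 statute mile = 1,609.344 m):

```python
# Quick unit check of the accuracy figures quoted above.
NM, MILE = 1852.0, 1609.344

print(2 * NM)             # 3704.0 m: the FAA's 2 nm en-route requirement
print(2 * NM / MILE)      # ~2.30: "slightly more than 2 miles"
print(550 / NM)           # ~0.297: "a bit more than a quarter of a nautical mile"
```
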
“It’s the first novel absolute navigation system to our knowledge in the last 50 years,” Hidary said.
What else can quantum sensing do?
EY’s Global Chief Innovation Officer Joe Depa said the applications for quantum sensing go beyond aerospace. In defense, they can also be used to detect hidden submarines and tunnels.
And in healthcare, they can even detect faint magnetic signals from the brain or heart, theoretically allowing for better diagnosis of neurological and cardiac conditions without invasive procedures.
While the technology has been in the lab for decades, we are starting to see more examples of quantum sensing entering the real world, Depa said.
Some analysts estimate the quantum-sensing market could reach between $1 billion and $6 billion by 2040, he said.

Source: Exclusive | The Secret to Better Airplane Navigation Could Be Inside the Earth’s Crust – WSJ

Synology starts selling overpriced underperforming 1.6 TB SSDs for $535 — self-branded, archaic PCIe 3.0 SSDs the only option to meet ‘certified’ criteria being enforced on newer NAS models

Synology has begun selling its newest SNV5400 enterprise NAS SSDs, and the asking prices for what you receive are nothing short of shocking. For a 1.6 TB NVMe SSD at PCIe Gen3 speeds, Synology is asking $535 on B&H Photo Video, while many competing devices retail for around $100. The new SNV5400 family, which also includes 400GB and 800GB models, is one of only a few Synology-branded SSD families compatible with certain Synology NAS models due to the company’s new restrictive compatibility requirements.

Synology recently announced its plans to require the use of approved SSDs for certain NAS systems. To date, only Synology-branded SSDs have received the stamp of approval from the company. While previous SSD releases from Synology have remained marginally in line with market rates for SSDs, the SNV5400 family significantly exceeds the comparative pricing of the market.

Synology’s newest drives, which were first seen online at a gobsmacking €620 from one Newegg shop, are priced comfortably above any similar models in the industry.

[…]

The unfortunate thing about the Synology SNV5400 family is that it feels like it arrived several years too late. PCIe 3.0 has largely been left behind, as most storage manufacturers are now transitioning to PCIe 5.0, leaving PCIe 4.0 also in the dust. What’s more, the SNV5420’s endurance is vastly outclassed by its competitors; Western Digital’s WD Red SN700 SSD, another PCIe 3.0 NAS drive, advertises a TBW of 5100TB, nearly double what Synology offers.

[…]

While some loopholes exist for using non-approved drives in newer Synology NAS units (like this one written in German), eventually Synology customers may be forced to pay the hefty Synology tax for their off-the-shelf NAS solutions. Perhaps independent testing will reveal some fairy dust in the new units that justifies the hefty upcharge, but we haven’t found any on Synology’s own site just yet.

Source: Synology starts selling overpriced 1.6 TB SSDs for $535 — self-branded, archaic PCIe 3.0 SSDs the only option to meet ‘certified’ criteria | Tom’s Hardware

HDMI 2.2 is here with new ‘Ultra96’ Cables — up to 16K resolution, higher maximum 96 Gbps bandwidth than DisplayPort, backwards compatibility & more

The HDMI Forum has officially finalized HDMI 2.2, the next generation of the video standard, rolling out to devices throughout the rest of this year. We already saw a bunch of key announcements at CES in January, but now that the full spec is here, it’s confirmed that HDMI 2.2 will eclipse DisplayPort in maximum bandwidth support thanks to the new Ultra96 cables.

What the heck is an “Ultra96” cable?

The key improvement with HDMI 2.2 over its predecessor, HDMI 2.1, is the bump in bandwidth from 48 Gbps to 96 Gbps. To ensure a consistent experience across all HDMI 2.2 devices, you’ll be seeing new HDMI cables with an “Ultra96” label denoting the aforementioned transfer rate capability. These cables will be certified by the HDMI Forum with clear branding that should make them easy to identify.

HDMI 2.2 Bandwidth

(Image credit: HDMI Forum)

This new bandwidth unlocks 16K resolution support at 60 Hz and 12K at 120 Hz, but with chroma subsampling. That being said, you can expect 4K 240 Hz at up to 12-bit color depth without any compression. DisplayPort 2.1b UHBR20 was the first to do this, with some monitors already available on the market, but that standard is limited to 80 Gbps; HDMI 2.2’s higher ceiling even allows for uncompressed 8K at 60 Hz.
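
For a rough sense of why that works out, here is a back-of-the-envelope payload calculation for 4K 240 Hz at 12-bit colour; the ~15% blanking/encoding overhead is an assumption for illustration, since real video timings and link-encoding overhead vary.

```python
# Rough pixel-payload math for 4K 240 Hz at 12-bit colour, uncompressed RGB/4:4:4.
# The ~15% blanking/link-encoding overhead is an assumption for illustration.
width, height, refresh_hz = 3840, 2160, 240
bits_per_pixel = 12 * 3                     # 12 bits per channel, three channels

payload_gbps = width * height * refresh_hz * bits_per_pixel / 1e9
assumed_overhead = 1.15

print(round(payload_gbps, 1))                     # ~71.7 Gbps of raw pixel data
print(round(payload_gbps * assumed_overhead, 1))  # ~82.4 Gbps on the wire (assumed overhead)
# Comfortably inside HDMI 2.2's 96 Gbps; right at the edge of an 80 Gbps link.
```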

It’s important to keep in mind that only cables explicitly labeled Ultra96 can allow for all this video goodness. As always, the HDMI Forum will allow manufacturers to claim their devices are HDMI 2.2 compliant without enforcing the full bandwidth requirement. Therefore, it’s important to look for the Ultra96 label so you know you’re getting the real deal.

How to identify an Ultra96 HDMI cable

(Image credit: HDMI Forum)

Thankfully, though, if you don’t care about the super high resolutions or frame rates, HDMI 2.2 will be backwards compatible. That means you can use the new cables with older ports (or new ports with older cables) and get the lowest common denominator experience.

[…]

Apart from backwards compatibility, HDMI 2.2 will bring another comfort feature called “Latency Indication Protocol” (LIP) that will help with syncing audio and video together. This only really matters for large, complicated home theater setups incorporating a lot of speaker channels with receivers and projectors (or screens). If you’re part of that crowd, expect reduced lip-sync issues across the board.

[…]

Source: HDMI 2.2 is here with new ‘Ultra96’ Cables — up to 16K resolution, higher maximum 96 Gbps bandwidth than DisplayPort, backwards compatibility & more | Tom’s Hardware

First successful demonstration of quantum error correction of qudits for quantum computers

In the world of quantum computing, the Hilbert space dimension—the measure of the number of quantum states that a quantum computer can access—is a prized possession. Having a larger Hilbert space allows for more complex quantum operations and plays a crucial role in enabling quantum error correction (QEC), essential for protecting quantum information from noise and errors.

A recent study by researchers from Yale University, published in Nature, created qudits—multilevel quantum units that hold quantum information and can exist in more than two states. Using a qutrit (a 3-level quantum system) and a ququart (a 4-level quantum system), the researchers demonstrated the first-ever experimental quantum error correction for higher-dimensional quantum units, using the Gottesman–Kitaev–Preskill (GKP) bosonic code.

Most quantum computers on the market usually process information using quantum states called qubits—fundamental units similar to a bit in a regular computer that can exist in two well-defined states, up (1) and down (0), as well as both 0 and 1 at the same time due to quantum superposition. The Hilbert space of a single qubit is a two-dimensional complex vector space.

Since bigger is better in the case of Hilbert space, the use of qudits instead of qubits is gaining a lot of scientific interest.
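
The "bigger Hilbert space" point in concrete numbers: a qudit with d levels spans a d-dimensional state space, and n of them span d^n dimensions, so a single ququart already matches two qubits.

```python
# A qudit with d levels spans a d-dimensional Hilbert space; n of them span d**n.
def hilbert_dim(d: int, n: int) -> int:
    """Dimension of the joint state space of n qudits with d levels each."""
    return d ** n

print(hilbert_dim(2, 1))                       # qubit: 2
print(hilbert_dim(3, 1))                       # qutrit: 3
print(hilbert_dim(4, 1))                       # ququart: 4, same as two qubits (2**2)
print(hilbert_dim(2, 10), hilbert_dim(4, 10))  # 1024 vs 1048576 for ten units
```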

Qudits could make demanding tasks such as building quantum gates, running algorithms, creating special “magic” states, and simulating complex quantum systems easier than ever. To harness these powers, researchers have spent years building qudit-based quantum computers with the help of photons, ultracold atoms and molecules and superconducting circuits.

Stabilizing GKP qudits. Credit: Nature (2025). DOI: https://doi.org/10.1038/s41586-025-08899-y

The reliability of quantum computing is heavily dependent on QEC, which safeguards fragile quantum information from noise and imperfections. Yet most experimental efforts in QEC have focused exclusively on qubits, so qudits have taken a backseat.

The researchers on this study presented the first ever experimental demonstration of error correction for a qutrit and a ququart, using the Gottesman–Kitaev–Preskill (GKP) bosonic code. To optimize the systems as ternary and quaternary quantum memories, the researchers opted for a reinforcement learning algorithm, a type of machine learning that utilizes a trial and error method to find the best way to correct errors or operate quantum gates.

The experiment pushed past the break-even point for error correction, showcasing a more practical and hardware-efficient method for QEC by harnessing the power of a larger Hilbert space.

The researchers note that the increased photon loss and dephasing rates of GKP qudit states can lead to a modest reduction in the lifetime of the quantum information encoded in logical qudits, but in return, it provides access to more logical quantum states in a single physical system.

The findings demonstrate the promise of realizing robust and scalable quantum computers and could lead to breakthroughs in cryptography, materials science, and drug discovery.

More information: Benjamin L. Brock et al, Quantum error correction of qudits beyond break-even, Nature (2025). DOI: 10.1038/s41586-025-08899-y

Source: First successful demonstration of quantum error correction of qudits for quantum computers

Volvo EX90’s Lidar Sensor Will Fry Your Phone’s Camera

[…] That pod on the roof of Volvo’s new electric SUV is essentially just shooting out a bunch of high-powered infrared beams, determining the distance of the vehicle’s surroundings by measuring the time taken for reflected light to return to the sensor. If you point your phone’s camera directly at those beams, you’ll observe some strange phenomena, like what’s happening in the image above. What you’re seeing is a laser frying pixels on one of the device’s image sensors.

Credit to Reddit user Jeguetelli, who broke their smartphone for science, so the rest of us know what not to do. The constellation of artifacts disappears when the person filming zooms out, prompting the phone to switch to a shorter lens backed by a separate, healthy image sensor. To assuage the fears of some concerned redditors, this is presumably why lidar doesn’t pose the same threat to backup cameras on other vehicles, which also typically use ultra-wide-angle lenses.

“Never film the new EX90 because you will break your cell camera. Lidar lasers burn your camera.” – u/Jeguetelli on r/Volvo

It should be said that the risk here is inherent to lidar technology, and has nothing to do with Volvo’s specific implementation on the EX90. In fact, earlier this year, the automaker even issued a warning against directing external cameras at the vehicle’s lidar pod for the very reasons discussed. “Do not point a camera directly at the lidar,” one support page admonishes in no uncertain terms. Unfortunately, while that sort of information might be clear to owners (the ones who crack open their vehicles’ manuals, anyway), this is something the entire public ought to be aware of, especially as semi-autonomous cars with lidar systems become more common on our streets.

The Drive reached out to Volvo for a little more insight into the issue, as well as any other recommendations. “It’s generally advised to avoid pointing a camera directly at a lidar sensor,” a representative responded over email. “The laser light emitted by the lidar can potentially damage the camera’s sensor or affect its performance.”

[…]

Source: Volvo EX90’s Lidar Sensor Will Fry Your Phone’s Camera

Which kind of makes you wonder, what else does this LIDAR fry?

Army Will Seek Right to Repair Clauses in All Its Contracts

A new memo from Secretary of Defense Pete Hegseth is calling on defense contractors to grant the Army the right-to-repair. The Wednesday memo is a document about “Army Transformation and Acquisition Reform” that is largely vague but highlights the very real problems with IP constraints that have made it harder for the military to repair damaged equipment.

Hegseth made this clear at the bottom of the memo in a subsection about reform and budget optimization. “The Secretary of the Army shall…identify and propose contract modifications for right to repair provisions where intellectual property constraints limit the Army’s ability to conduct maintenance and access the appropriate maintenance tools, software, and technical data—while preserving the intellectual capital of American industry,” it says. “Seek to include right to repair provisions in all existing contracts and also ensure these provisions are included in all new contracts.”

[…]

Appliance manufacturers and tractor companies have lobbied against bills that would make it easier for the military to repair its equipment.

This has been a huge problem for decades. In the 1990s, the Air Force bought Northrop Grumman’s B-2 Stealth Bombers for about $2 billion each. When the Air Force signed the contract for the machines, it paid $2.6 billion up front just for spare parts. Now, for some reason, Northrop Grumman isn’t able to supply replacement parts anymore. To fix the aging bombers, the military has had to reverse-engineer parts and do repairs itself.

Similarly, Boeing screwed over the DoD on replacement parts for the C-17 military transport aircraft to the tune of at least $1 million. The most egregious example was a common soap dispenser. “One of the 12 spare parts included a lavatory soap dispenser where the Air Force paid more than 80 times the commercially available cost or a 7,943 percent markup,” a Pentagon investigation found. Imagine if it had just used a 3D printer to churn out the parts it needed.
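
The "80 times the cost" and "7,943 percent markup" figures are two ways of saying the same thing: paying k times the commercial price is a (k - 1) x 100% markup.

```python
# Paying k times the commercial price is a (k - 1) * 100 percent markup.
markup_percent = 7943
k = 1 + markup_percent / 100
print(k)                  # ~80.4 -> "more than 80 times the commercially available cost"
print((80.43 - 1) * 100)  # ~7943 -> recovering the quoted markup from the multiple
```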

[…]

Source: Army Will Seek Right to Repair Clauses in All Its Contracts

Synology confirms that higher-end NAS products will require its branded drives

Popular NAS-maker Synology has confirmed and slightly clarified a policy that appeared on its German website earlier this week: Its “Plus” tier of devices, starting with the 2025 series, will require Synology-branded hard drives for full compatibility, at least at first.

“Synology-branded drives will be needed for use in the newly announced Plus series, with plans to update the Product Compatibility List as additional drives can be thoroughly vetted in Synology systems,” a Synology representative told Ars by email. “Extensive internal testing has shown that drives that follow a rigorous validation process when paired with Synology systems are at less risk of drive failure and ongoing compatibility issues.”

Without a Synology-branded or approved drive in a device that requires it, NAS devices could fail to create storage pools and lose volume-wide deduplication and lifespan analysis, Synology’s German press release stated. Similar drive restrictions are already in place for XS Plus and rack-mounted Synology models, though work-arounds exist.

[…]

Synology does not manufacture its own drives but packages and markets drives from major manufacturers, including Toshiba and Seagate. As such, Synology’s drives are typically more expensive than third-party models with similar specs. An 8TB 3.5-inch HDD from Synology’s Plus line, the HAT3310, costs $210 on Synology’s web store. One of the original drives the HAT3310 is reportedly sourced from, the Toshiba N300, can be found for $173 at more than one vendor. That number changes as you move up and down in capacity or move to “Enterprise” levels—and, of course, as you multiply it across large arrays.
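
Using only the prices quoted above, the Synology premium on that 8TB drive works out to roughly 21% per unit, before multiplying across an array.

```python
# Price premium implied by the figures quoted above.
synology_hat3310 = 210   # USD, Synology web store, 8TB HAT3310
toshiba_n300 = 173       # USD, street price for the drive it is reportedly sourced from

print(synology_hat3310 - toshiba_n300)                            # $37 extra per drive
print(f"{(synology_hat3310 - toshiba_n300) / toshiba_n300:.0%}")  # ~21% premium
```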

[…]

Source: Synology confirms that higher-end NAS products will require its branded drives – Ars Technica

And a lot of people, who are already pissed off with Synology for old software and for removing HEIC and MP4 support, will be leaving the brand.

Source: https://www.reddit.com/r/synology/comments/1k3o1u6/the_results_are_in/


Printers start randomly speaking in tongues after Windows 11 update

Has your printer suddenly started spouting gibberish? A faulty Windows 11 23H2 update from Microsoft – rather than a ghost in the machine – could be the cause.

The update in question is KB5050092, a preview released at the end of January.

There were several known issues with this update, including problems with some Citrix software, but making USB printers speak in tongues is a new one.

According to Microsoft, the glitch can affect USB-connected dual-mode printers that support both USB Print and IPP (Internet Printing Protocol) over USB protocols.

Microsoft said: “You might observe that the printer unexpectedly prints random text and data, including network commands and unusual characters. As a result of this issue, the printed text often starts with the header ‘POST /ipp/print HTTP/1.1’ followed by other IPP (Internet Printing Protocol) related headers.”

It’s a peek behind the curtains of how printing protocols and drivers work that manufacturers might prefer users not to see.

“This issue tends to occur more often when the printer is either powered on or reconnected to the device after being disconnected,” Microsoft added.

The problem happens when the printer driver is installed on the user’s Windows device. The print spooler mistakenly sends some IPP protocol messages to the printer, which are then printed as unexpected text.
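
That also explains why the garbage output starts with “POST /ipp/print HTTP/1.1”: IPP-over-USB traffic is just an HTTP POST carrying a binary IPP body, and if those bytes land on the printer’s raw USB Print interface instead of its IPP interface, a dual-mode printer renders them as text. The framing below is a generic illustration, not Microsoft’s spooler code.

```python
# IPP-over-USB traffic is an ordinary HTTP POST with a binary IPP body. Written to
# the raw USB Print interface instead of the IPP interface, a dual-mode printer
# treats these bytes as a document and prints them. Generic illustration only.
ipp_body = b"\x02\x00\x00\x0b"  # placeholder for a binary IPP operation request

request = (
    b"POST /ipp/print HTTP/1.1\r\n"
    b"Host: localhost\r\n"
    b"Content-Type: application/ipp\r\n"
    b"Content-Length: " + str(len(ipp_body)).encode() + b"\r\n"
    b"\r\n"
    + ipp_body
)

# The first line is exactly the header users reported seeing on their printouts.
print(request.decode(errors="replace").splitlines()[0])  # POST /ipp/print HTTP/1.1
```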

Considering how much printer consumables cost nowadays, and the antipathy some major printer makers feel toward both customers and third-party consumable manufacturers, users understandably don’t want to waste precious ink or toner by printing nonsense.

Microsoft said: “This issue is mitigated using Known Issue Rollback (KIR).” IT administrators can also use a special Group Policy to deploy a KIR.

As for a longer-term fix, Microsoft said: “We are working on a final resolution that will be part of a future Windows update.”

Source: Printers start speaking in tongues after Windows 11 update • The Register

Firmware update bricks HP printers, makes them unable to use HP cartridges

HP, along with other printer brands, is infamous for issuing firmware updates that brick already-purchased printers that have tried to use third-party ink. In a new form of frustration, HP is now being accused of issuing a firmware update that broke customers’ laser printers—even though the devices are loaded with HP-brand toner.

The firmware update in question is version 20250209, which HP issued on March 4 for its LaserJet MFP M232-M237 models. Per HP, the update includes “security updates,” a “regulatory requirement update,” “general improvements and bug fixes,” and fixes for IPP Everywhere. Looking back to older updates’ fixes and changes, which the new update includes, doesn’t reveal anything out of the ordinary. The older updates mention things like “fixed print quality to ensure borders are not cropped for certain document types,” and “improved firmware update and cartridge rejection experiences.” But there’s no mention of changes to how the printers use or read toner.

However, users have been reporting sudden problems using HP-brand toner in their M232–M237 series printers since their devices updated to 20250209. Users on HP’s support forum say they see Error Code 11 and the hardware’s toner light flashing when trying to print. Some said they’ve cleaned the contacts and reinstalled their toner but still can’t print.

“Insanely frustrating because it’s my small business printer and just stopped working out of nowhere[,] and I even replaced the tone[r,] which was a $60 expense,” a forum user wrote on March 8.

When reached for comment, an HP spokesperson said:

We are aware of a firmware issue affecting a limited number of HP LaserJet 200 Series devices and our team is actively working on a solution. For assistance, affected customers can contact our support team at: https://support.hp.com.

HP users have been burned by printer updates before

HP hasn’t clarified how widespread the reported problems are. But this isn’t the first time that HP broke its customers’ printers with an update. In May 2023, for example, a firmware update caused several HP OfficeJet brand printers to stop printing and show a blue screen for weeks.

With such bad experiences with printer updates and HP’s controversial stance on purposely breaking HP printer functionality when using non-HP ink, some have minimal patience for malfunctioning HP printers. As one forum commenter wrote:

… this is just a bad look for HP all around. We’re just the ones that noticed it and know how to post on a forum. Imagine how many 1,000s of other users are being affected by this and just think their printer broke.

[…]

Source: Firmware update bricks HP printers, makes them unable to use HP cartridges – Ars Technica

World’s first “Synthetic Biological Intelligence” computer runs on living human cells

The world’s first “biological computer” that fuses human brain cells with silicon hardware to form fluid neural networks has been commercially launched, ushering in a new age of AI technology. The CL1, from Australian company Cortical Labs, offers a whole new kind of computing intelligence – one that’s more dynamic, sustainable and energy efficient than any AI that currently exists – and we will start to see its potential when it’s in users’ hands in the coming months.

Known as a Synthetic Biological Intelligence (SBI), Cortical’s CL1 system was officially launched in Barcelona on March 2, 2025, and is expected to be a game-changer for science and medical research. The human-cell neural networks that form on the silicon “chip” are essentially an ever-evolving organic computer, and the engineers behind it say it learns so quickly and flexibly that it completely outpaces the silicon-based AI chips used to train existing large language models (LLMs) like ChatGPT.

“Today is the culmination of a vision that has powered Cortical Labs for almost six years,” said Cortical founder and CEO Dr Hon Weng Chong. “We’ve enjoyed a series of critical breakthroughs in recent years, most notably our research in the journal Neuron, through which cultures were embedded in a simulated game-world, and were provided with electrophysiological stimulation and recording to mimic the arcade game Pong. However, our long-term mission has been to democratize this technology, making it accessible to researchers without specialized hardware and software. The CL1 is the realization of that mission.”

The CL-1: a large housing contains all the life support systems required for the survival of the human brain cells that power the chip. (Image credit: Cortical Labs)

He added that while this is a groundbreaking step forward, the full extent of the SBI system won’t be seen until it’s in users’ hands.

“We’re offering ‘Wetware-as-a-Service’ (WaaS),” he added – customers will be able to buy the CL-1 biocomputer outright, or simply buy time on the chips, accessing them remotely to work with the cultured cell technology via the cloud. “This platform will enable the millions of researchers, innovators and big-thinkers around the world to turn the CL1’s potential into tangible, real-world impact. We’ll provide the platform and support for them to invest in R&D and drive new breakthroughs and research.”

These remarkable brain-cell biocomputers could revolutionize everything from drug discovery and clinical testing to how robotic “intelligence” is built, allowing unlimited personalization depending on need. The CL1, which will be widely available in the second half of 2025, is an enormous achievement for Cortical – and as New Atlas saw recently with a visit to the company’s Melbourne headquarters – the potential here is much more far-reaching than Pong.

The team made international headlines in 2022 after developing a self-adapting computer ‘brain’ by placing 800,000 human and mouse neurons on a chip and training this network to play the video game. New Atlas readers may already be familiar with Cortical Labs and its formative steps towards SBI, with Loz Blain covering the early advances of this self-adjusting neural network capable of adjusting and adapting to forge new, stimuli-responsive pathways in processing information.

“We almost view it actually as a kind of different form of life to let’s say, animal or human,” Chief Scientific Officer Brett Kagan told Blain in 2023. “We think of it as a mechanical and engineering approach to intelligence. We’re using the substrate of intelligence, which is biological neurons, but we’re assembling them in a new way.”

Cortical Labs has come a long way since that important but now-obsolete first step, the DishBrain, in both technology and name. Now, with the commercialization of the CL1, researchers can get hands-on with the technology and start exploring a vast range of real-world applications.

When New Atlas visited Kagan and team at Cortical Labs’ Melbourne headquarters late last year in the lead-up to this launch, we saw first-hand how far the biotechnology has come since the DishBrain. The CL1 features relatively simple, stable hardware, new ways of optimizing “wetware” – human brain cells – and significant strides towards being able to grow a neural network that works like a fully functional brain. Or, as Kagan explained of a work in progress, the “Minimal Viable Brain.”

In the lab, the early CL1 model is put through its paces as the team monitors its response to stimuli (prompts). (Image credit: New Atlas)

In 2022, the team demonstrated how rodent- and human-induced pluripotent stem cells (hiPSCs) integrated into high-density multielectrode arrays (HD-MEAs) based on complementary metal–oxide–semiconductor (CMOS) technology could be electro-physiologically stimulated to forge autonomous, highly efficient information-exchange paths.

To do so, they needed a way to reward the brain cells when they exhibited desired behaviors, and punish them when they failed a task. In the DishBrain experiments, they proved that predictability was the key; neurons seek out connections that produce energy-efficient, predictable outcomes and will adapt their networks in search of that reward, while avoiding behaviours that produce a random, chaotic electrical signal.
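
A heavily simplified sketch of that closed-loop idea, with a stub “culture” standing in for the real neurons: desired behaviour earns structured, predictable stimulation, failures earn random noise, and the stub drifts towards whatever earns the predictable feedback. This illustrates the training loop as described, not Cortical Labs’ code.

```python
# Stub "culture" that drifts towards whichever action has earned more predictable
# feedback: structured stimulation after a desired action, random noise otherwise.
# An illustration of the closed-loop idea described above, not Cortical Labs' code.
import random

preference = {"hit": 0.0, "miss": 0.0}   # the stub culture's learned tendencies

def stimulate(predictable: bool) -> list:
    """Return a feedback pattern: a fixed, repeatable ramp or unpredictable noise."""
    if predictable:
        return [i / 10 for i in range(10)]             # structured "reward"
    return [random.uniform(-1, 1) for _ in range(10)]  # chaotic "punishment"

for trial in range(200):
    if random.random() < 0.2:                          # occasional exploration
        action = random.choice(["hit", "miss"])
    else:
        action = max(preference, key=preference.get)   # exploit the learned tendency
    feedback = stimulate(predictable=(action == "hit"))
    variability = max(feedback) - min(feedback)        # ~0.9 for the ramp, usually >1 for noise
    preference[action] += 1.0 - variability            # predictable feedback reinforces

print(preference)   # "hit" ends up strongly preferred; "miss" is suppressed
```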

But, as Kagan explained, that was just the start.

“The current version is totally different technology,” Kagan told Blain and I. “The previous one used something called a CMOS chip, which basically gave you a really high-density read, but it was opaque, you couldn’t see the cells. And there were other issues as well – like, when you stimulate with a CMOS chip, you can’t draw out the charge; you can’t balance the charge as well. You end up with a build-up of charge at where you’re stimulating over long periods of time, and that’s pretty bad for the cells.

“With these versions, they’re a much simpler technology, but that means they’re much more stable and you’re much more able to actively balance that charge,” he added. “When you put in two microamps of current, you can draw out 2 microamps of current. And you can keep it more stable for longer.”

Chief Scientific Officer Brett Kagan assesses some stem cells cultivated in the lab. (Image credit: New Atlas)

Inside the CL1 system, lab-grown neurons are placed on a planar electrode array – or, as Kagan explained, “basically just metal and glass.” Here, 59 electrodes form the basis of a more stable network, offering the user a high degree of control in activating the neural network. This SBI “brain” is then placed in a rectangular life-support unit, which is then connected to a software-based system to be operated in real time.

“A simple way to describe it would be like a body in a box, but it has filtration for waste, it has where the media is stored, it has pumps to keep everything circulating, gas mixing, and of course temperature control,” Kagan explained.

In the lab, Cortical is assembling these units to construct a first-of-its-kind biological neural network server stack, housing 30 individual units that each contain the cells on their electrode array, which is expected to go online in the coming months.

The team aims to have four such stacks running and available for commercial use through a cloud system before the end of the year. The units themselves are expected to have a price tag of around US$35,000, to start with (anything close to this kind of tech is currently priced at €80,000, or nearly US$85,000).

An entire rack of CL1 units uses only around 850-1,000 W of energy, is fully programmable and offers “bi-directional stimulation and read interface, tailored to enable neural communication and network learning,” the team noted in their launch release. Incredibly, the CL1 unit doesn’t require an external computer to operate, either.
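
If the “rack” here is the 30-unit stack described above (an assumption on our part), the quoted power figure works out to roughly 28-33 W per unit.

```python
# Rough per-unit power draw, assuming the "rack" is the 30-unit stack described above.
units = 30
for rack_watts in (850, 1000):
    print(round(rack_watts / units, 1))   # 28.3 and 33.3 W per CL1 unit
```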

Kagan and team testing the CL1 units, which are built to maintain the health of the cells living on the silicon hardware. (Image credit: New Atlas)

The complex, ever-evolving SBI neural networks – which, under a microscope, can be seen forming branches from electrode to electrode – have, to start with, the potential to revolutionize how drug discovery and disease modeling is researched.

“We’re aiming to be significantly more affordable, and we do want to bring that pricing down in the long-term, but that’s the much longer term,” Kagan said. “In the meantime, we provide access to people from anywhere, anyone, any house, through the cloud-based system.

“So even if you don’t have one of these [units],” he added, “you can access one of these from your home.”

Taking us through the Physical Containment Level 2, or PC2, laboratory – a mix of computer hardware and more traditional biological specimens and equipment – Kagan showed us some of the all-important induced pluripotent stem cells (IPSC) under the microscope. IPSCs, cultivated in the lab from blood samples, are essentially blank slates that can grow into different types of cells.

“What we do is take those, and we start to use two different methods to differentiate them,” he explained. “One, we can either apply small molecules, which is called an ontogenetic differentiation protocol, where we essentially try to mimic the molecules that happen in utero or, rather, in the foetus’ developing brain. The other method is where we directly differentiate them, where we choose to up-regulate specific genes that are involved in neurons.”

One of the team’s methods is quick and produces a high level of cellular purity; however, the downside is that it isn’t exactly representative of the human brain.

“The brain is not a high-purity organ; it has a lot of different cell types, a lot of different connections,” Kagan said. “So if you only have one cell type, you might have that cell type, but you don’t have a brain.”

Just one section of the CL1 stack, with each unit housing living cells. (Image credit: Cortical Labs)

The second method, “the small molecule approach,” produces diverse populations of cells, but it’s often unclear as to exactly what they’re working with. And understanding this is critical to Cortical’s ambitious ongoing pursuit of building the Minimal Viable Brain. While the CL1 launch is the first step, the team is also hard at work on the next stage of SBI.

“You can categorize the main cells, but there’s always a lot of sub-cell types – and that’s really good, as we’ve found out, but we’d really like to have fully controlled direct differentiation,” he explains. “We just haven’t resolved that problem yet: What is the ‘Minimal Viable Brain?’”

The MVB is an intriguing concept: how to bioengineer a human-like “brain” with the least amount of superfluous cell differentiation, but one that still has the complexity that a neural network grown from homogeneous cell types lacks. This kind of tool would be a powerful model, allowing for even more control and nuanced analyses than what is currently possible in research conducted on a real brain.

“It would basically be the key biological components that allow something to process information in a dynamic and responsive way, according to underlying principles,” Kagan explained. “A single neuron can do a lot of stuff, and while it can respond to some degree of dynamic behavior, it can’t, for example, navigate an environment. The smallest working brains we know of have 301 or 302 – depending on who you ask – neurons, and that’s in the C. elegans. But each of those neurons are really highly specified.

Actual human brain cells, living on a silicon chip among an array of input/output electrodes. (Image credit: Cortical Labs)

“And another question is: Is the C. elegans brain the minimal viable brain? Do you need all of those neurons or could you achieve it with, you know, 30 neurons that are all uniquely circuited up?” he continued. (The organism is, of course, the science world’s favorite nematode, Caenorhabditis elegans.) “And if that’s the case, can you build a more complex network of those with 100,000 of the same 30? We don’t know the answer to any of this yet, but with this technology we can uncover it.

“We’re starting to add more and more cell types to this culture as we go, but one thing that’s holding us back is the tools,” he said. “The [CL1] unit didn’t exist until we built it, and you need a tool like that to answer questions like, ‘What is the minimal viable brain?’ If you have 120 units, you can set up really well-controlled experiments to understand exactly what drives the appearance of intelligence. You can break things down to the transcriptomic and genetic level to understand what genes and what proteins are actually driving one to learn and another not to learn. And when you have all those units, you can immediately start to take the drug discovery and disease modeling approach.”

This is particularly important for research into better treatments or even cures for conditions such as epilepsy, Alzheimer’s disease and other brain-related illnesses. In the meantime, the CL1 system is expected to advance research into diseases and therapeutics considerably.

“The large majority of drugs for neurological and psychiatric diseases that enter clinical trial testing fail, because there’s so much more nuance when it comes to the brain – but you can actually see that nuance when you test with these tools,” he explained. “Our hope is that we’re able to replace significant areas of animal testing with this. Animal testing is unfortunately still necessary, but I think there are a lot of cases where it can be replaced and that’s an ethically good thing.”

The ethics of this technology has been front and center for Cortical – that breakthrough 2022 paper sparked plenty of debate around it, particularly in the area of human “consciousness” and “sentience.” However, guardrails are in place, as much as they can be, for the ethical use of the CL1 units and the remote WaaS access.

The cells form an entirely new kind of artificial intelligence
New Atlas

“There are numerous regulatory approvals required, based on location and specific use cases,” the team noted in its launch statement. “Regulatory bodies may include health agencies, bioethics committees, and governmental organisations overseeing biotechnology or medical devices. Compliance with these regulations is essential to ensure responsible and ethical use of biological computing technologies.”

But as a global frontrunner in this ambitious technology, Cortical knows that – much like the rapid advancement of non-biological AI – it’s not easy to predict the broad applications of SBI. And one other challenge the company faces is funding – something that the realization of CL1 as a tangible, usable technology might change.

“The difficulty I keep hearing [from investors] is that we don’t fit into a box,” Kagan told us, as we took off our lab coats, hair nets and masks, and relocated to a couch by the computer room upstairs. “And we don’t – we’re a technology that crosses a number of different boundaries. If you look at the priority sectors, we can cover everything from the enabling capabilities of biotechnology, robotics, medical science, and a range of other things. We’re not quite AI, we’re not quite medicine – we can do both AI and medicine, but we’re not either. So we often get excluded.”

The complex life-support system inside each CL1 unit
New Atlas

As such, the launch of the physical CL1 system and the Cortical Cloud for WaaS remote use is a huge achievement, with Kagan and team excited to see where SBI can go once it’s in people’s hands.

“The CL1 is the first commercialized biological computer, uniquely designed to optimize communication and information processing with in vitro neural cultures,” the team noted. “The CL1, with built-in life support to maintain the health of the cells, holds significant possibilities in the fields of medical science and technology.

“SBI is inherently more natural than AI, as it utilizes the same biological material – neurons – that underpin intelligence in living organisms,” Cortical added. “By leveraging neurons as a computational substrate, SBI has the potential to create systems that exhibit more organic and natural forms of intelligence compared to traditional silicon-based AI.”

Source: Cortical Labs

Source: World’s first “Synthetic Biological Intelligence” runs on living human cells

Lenovo has a convertible T series laptop – with mouse dot

[…] The ThinkPad T14s 2-in-1 is by far the most interesting of the bunch, with a new convertible body that’s similar to Lenovo’s Yoga laptops, and supports the magnetic Yoga Pen stylus. The laptop comes with up to a 14-inch, 400-nit WUXGA touch display, and inside, you can get up to an Intel Core Ultra 7 H or U 200 series chip, 64GB of LPDDR5x RAM and 1TB of storage. If you’re looking for an option without a 360-degree hinge, the ThinkPad T14s Gen 6 and ThinkPad T14 Gen 6 will also now come with either Intel Core Ultra or AMD Ryzen AI Pro chips, up to 32GB of RAM and up to 2TB of storage.

The lightweight ThinkPad X13 Gen 6.
Lenovo

Lenovo describes the new ThinkPad X13 Gen 6 as “one of the lightest ThinkPad designs ever,” at only 2.05 lbs, but that light weight doesn’t mean the laptop misses out on the latest internals. The X13 Gen 6 comes with either an Intel Core Ultra or AMD Ryzen AI Pro chip, up to 64GB of LPDDR5x RAM and your choice of a 41Wh or 54.7Wh battery. The new ThinkPad can also support Wi-Fi 7 and an optional 5G connection, if you want to take it on the go.

[…]

Source: Lenovo is updating its ThinkPad lineup with new chips and form factors at MWC 2025

The Lenovo Solar PC Concept feels like a device whose time has come

You might be surprised to learn that the first laptop with built-in solar panels is nearly 15 years old. But to me, the bigger shock is that with all the recent advancements in photovoltaic cells, manufacturers haven’t revisited this idea more often. But at MWC 2025, Lenovo is changing that with its Yoga Solar PC Concept.

Weighing 2.6 pounds and measuring less than 0.6 inches thick, the Yoga Solar PC Concept is essentially the same size as a standard 14-inch clamshell. And because its underlying design isn’t all that different from Lenovo’s standard Yoga family, it doesn’t skimp on specs either. It features an OLED display, up to 32GB of RAM, a decent-sized 50.2Wh battery and even a 2MP IR webcam for use with Windows Hello.

However, all those components aren’t nearly as important as the solar cells embedded in its lid. Lenovo says the panels use Back Contact Cell technology so that its mounting brackets and gridlines can be placed on the rear of the cells. This allows the panels to offer up to 24 percent solar energy conversion, which is pretty good as that matches the efficiency you get from many high-end home solar systems. Furthermore, the PC also supports Dynamic Solar Tracking to automatically adjust the cells’ settings to maximize the amount of energy they can gather.

Lenovo says this means the Yoga Solar PC can generate enough juice to play an hour of videos after only 20 minutes in the sun. But what might be more impressive is that even when the laptop is indoors, it can still harvest power from as little as 0.3 watts of light to help top off its battery. Finally, to help you understand how much power it’s gathering, Lenovo created a bespoke app to track how much light the panels absorb.
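
To put those claims in rough perspective, here is a quick back-of-the-envelope check in Python. The lid area, sunlight intensity and video playback power draw are our own assumptions, not Lenovo’s figures; only the 24 percent conversion efficiency comes from the article above.

```python
# Rough sanity check of "20 minutes of sun ~= 1 hour of video".
# Panel area, irradiance and playback power are assumptions; only the
# 24% efficiency figure comes from Lenovo.

PANEL_AREA_M2 = 0.055     # assumed usable lid area of a 14-inch clamshell
IRRADIANCE_W_M2 = 1000    # assumed direct midday sunlight
EFFICIENCY = 0.24         # conversion efficiency quoted by Lenovo
PLAYBACK_W = 5.0          # assumed power draw for local video playback

harvest_w = PANEL_AREA_M2 * IRRADIANCE_W_M2 * EFFICIENCY
energy_wh = harvest_w * (20 / 60)              # energy gathered in 20 minutes
playback_min = energy_wh / PLAYBACK_W * 60

print(f"Harvested power: {harvest_w:.1f} W")
print(f"Energy after 20 minutes: {energy_wh:.1f} Wh")
print(f"Estimated playback time: {playback_min:.0f} minutes")
```

Under those assumptions the panel harvests around 13 W, and the 20-minute charge works out to just under an hour of playback, so the claim is at least plausible.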

Unfortunately, Lenovo doesn’t have any plans to turn this concept into a full commercial device

[…]

Source: The Lenovo Solar PC Concept feels like a device whose time has come

This Gesture Sensor Is Precise, Cheap, Well-Hidden

In today’s “futuristic tech you can get for $5”, [RealCorebb] shows us a gesture sensor, one of the sci-fi kind. He was doing a desktop clock build, and wanted to add gesture control to it – without any holes that a typical optical sensor needs. After some searching, he’s found Microchip’s MGC3130, a gesture sensing chip that works with “E-fields”, more precise than the usual ones, almost as cheap, and with a lovely twist.

The coolest part about this chip is that it needs no case openings. The 3130 can work even behind obstructions like a 3D-printed case. You do need a PCB the size of a laptop touchpad, however – unlike the optical sensors that are easy to find on the usual online marketplaces. Still, if you have a spot for it, this is a perfect gesture-sensing solution. [RealCorebb] shows it off to us in the demo video.

This PCB design is available as gerbers, BOM and a schematic PDF. You can still order one from the files in the repo. Also, you need to use Microchip’s tools to program your preferred gestures into the chip. Still, it pays off, thanks to the chip’s reasonably low price and on-chip gesture processing. And, [RealCorebb] provides all the explanations you could need, has Arduino examples for us, links all the software, and even provides some Python scripts! Touch-sensitive technology has been gaining more and more steam in hacker circles – for instance, check out this open-source 3D-printed trackpad.
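
If you would rather poke at the chip from a Linux single-board computer instead of an Arduino, a minimal polling sketch might look something like the one below. The I2C bus number, the 0x42 address and the 32-byte read length are assumptions on our part, and a proper driver would also watch the chip’s transfer-status line before reading; [RealCorebb]’s repo and Microchip’s documentation are the authoritative references.

```python
# Minimal sketch: poll raw GestIC messages from an MGC3130 over I2C and dump
# the bytes. Bus number, address and read length are assumptions; no gesture
# decoding is attempted here.

import time
from smbus2 import SMBus, i2c_msg  # pip install smbus2

I2C_BUS = 1           # assumed: /dev/i2c-1 on a Raspberry Pi
MGC3130_ADDR = 0x42   # assumed default GestIC address
MSG_LEN = 32          # assumed maximum message size per poll

with SMBus(I2C_BUS) as bus:
    while True:
        msg = i2c_msg.read(MGC3130_ADDR, MSG_LEN)
        bus.i2c_rdwr(msg)          # raw read, no register pointer
        data = list(msg)
        print(" ".join(f"{b:02x}" for b in data))
        time.sleep(0.05)
```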

 

Source: This Gesture Sensor Is Precise, Cheap, Well-Hidden

HP buys Humane’s AI Pins, will brick them in 10 days. As with its VR hardware, HP likes turning hardware into sustainable junk.

AI hardware startup Humane has given its users just ten (10!) days’ notice that their Pins will be disconnected. In a note to its customers, the company said AI Pins will “continue to function normally” until 12PM PT on February 28. On that date, users will lose access to essentially all of their device’s features, including but not limited to calling, messaging, AI queries and cloud access. The FAQ does note that you’ll still be able to check on your battery life, though.

Humane is encouraging its users to download any stored data before February 28, as it plans on permanently deleting “all remaining customer data” at the same time as switching its servers off.

[…]

Today’s discontinuation announcement was brought about by the acquisition of Humane by HP, which is buying the company’s intellectual property for $116 million but clearly has no interest in its current hardware business

[…]

Source: All of Humane’s AI pins will stop working in 10 days

Microcomb chips help pave the way for thousand times more accurate GPS systems

Today, our mobile phones, computers and GPS systems can give us very accurate time indications and positioning thanks to the more than 400 atomic clocks in operation worldwide. All clocks – whether mechanical, atomic or the one in a smartwatch – are made of two parts: an oscillator and a counter. The oscillator provides a periodic variation of some known frequency over time, while the counter counts the number of cycles of the oscillator. Atomic clocks count the oscillations of vibrating atoms that switch between two energy states at a very precise frequency.

Most atomic clocks use microwave frequencies to induce these energy oscillations in atoms. In recent years, researchers in the field have explored the possibility of using laser light instead to induce the oscillations optically. Just like a ruler with a greater number of ticks per centimeter, optical atomic clocks make it possible to divide a second into many more time fractions, resulting in thousands of times more accurate time and position indications.
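
The “more ticks” analogy is easy to put into numbers. A caesium microwave clock ticks at roughly 9.19 GHz (this is the SI definition of the second), while a strontium optical clock transition sits around 429 THz; the strontium figure in the sketch below is an approximate published value rather than one taken from this study.

```python
# Why optical clocks slice the second more finely: the oscillator simply
# ticks tens of thousands of times faster than a microwave one.

CS_MICROWAVE_HZ = 9_192_631_770  # caesium-133 hyperfine transition (defines the SI second)
SR_OPTICAL_HZ = 429.228e12       # strontium-87 optical clock transition (approximate)

ratio = SR_OPTICAL_HZ / CS_MICROWAVE_HZ
print(f"Microwave 'ticks' per second: {CS_MICROWAVE_HZ:.3e}")
print(f"Optical 'ticks' per second:   {SR_OPTICAL_HZ:.3e}")
print(f"The optical ruler has about {ratio:,.0f} times more ticks per second")
```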

“Today’s atomic clocks enable GPS systems with a positional accuracy of a few meters. With an optical atomic clock, you may achieve a precision of just a few centimeters.”

[…]

At the core of the new technology, described in a recently published research article in Nature Photonics, are small, chip-based devices called microcombs. Like the teeth of a comb, microcombs generate a spectrum of evenly spaced light frequencies.

“This allows one of the comb frequencies to be locked to a laser frequency that is in turn locked to the atomic clock oscillation,” says Minghao Qi.

[…]

“[T]he minimal size of the microcomb makes it possible to shrink the atomic clock system significantly while maintaining its extraordinary precision.”

[…]

Another major obstacle has been simultaneously achieving the “self-reference” needed for the stability of the overall system and aligning the microcomb’s frequencies exactly with the atomic clock’s signals.

“It turns out that one microcomb is not sufficient, and we managed to solve the problem by pairing two microcombs, whose comb spacings – i.e. the frequency interval between adjacent teeth – are close but with a small offset, e.g. 20 GHz. This 20 GHz offset frequency will serve as the clock signal that is electronically detectable. In this way, we could get the system to transfer the exact time signal from an atomic clock to a more accessible radio frequency.”
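
As a toy illustration of that pairing trick: take two combs whose tooth spacings differ by 20 GHz. Every individual tooth sits at an optical frequency of hundreds of terahertz, far beyond what electronics can count, but neighbouring teeth from the two combs beat against each other at multiples of 20 GHz, which ordinary electronics can handle. The carrier and spacing values below are purely illustrative, not the parameters from the paper.

```python
# Toy model: two combs with tooth spacings offset by 20 GHz produce beat
# notes in the electronically detectable GHz range. All numbers are
# illustrative, not taken from the Nature Photonics paper.

F_CENTER = 429.0e12   # optical carrier near an atomic transition (illustrative)
SPACING_A = 300e9     # comb A tooth spacing (illustrative)
SPACING_B = 320e9     # comb B tooth spacing: close, but offset by 20 GHz

def comb(center, spacing, n_teeth=5):
    half = n_teeth // 2
    return [center + k * spacing for k in range(-half, half + 1)]

pairs = zip(comb(F_CENTER, SPACING_A), comb(F_CENTER, SPACING_B))
for k, (fa, fb) in enumerate(pairs, start=-2):
    beat = abs(fa - fb)
    print(f"tooth {k:+d}: optical line near {fa / 1e12:.2f} THz, "
          f"beat against the other comb: {beat / 1e9:5.1f} GHz")
```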

[…]

“Photonic integration technology makes it possible to integrate the optical components of optical atomic clocks, such as frequency combs, atomic sources and lasers, on tiny photonic chips in micrometer to millimeter sizes, significantly reducing the size and weight of the system,” says Dr. Kaiyi Wu.

The innovation could pave the way for mass production, making optical atomic clocks more affordable and accessible for a range of applications in society and science. The system needed to “count” the cycles of an optical frequency requires many components besides the microcombs, such as modulators, detectors and optical amplifiers. This study solves an important problem and demonstrates a new architecture, but the next step is to bring all of the necessary elements together into a full system on a chip.

[…]

Source: Microcomb chips help pave the way for thousand times more accurate GPS systems | ScienceDaily

Nvidia Drops Support for PhysX on Its RTX 50-Series Cards

Earlier this week, Nvidia confirmed in its official forums that “32-bit CUDA applications are deprecated on GeForce RTX 50 series GPUs.” The company’s support page for its “Support plan for 32-bit CUDA” notes that some 32-bit capabilities were removed from CUDA 12.0 but does not mention PhysX. Effectively, the 50 series cards cannot run any game with PhysX as developers originally intended. That’s ironic, considering Nvidia originally pushed this tech back in the early 2010s to sell its GTX range of GPUs.

PhysX is a GPU-accelerated physics system that allows for more realistic physics simulations in games without putting pressure on the CPU. This includes small particle effects like fog or smoke, as well as cloth movement.

[…]

a game like Batman: Arkham City […] with an Nvidia RTX 5070 Ti, and when you try to enable hardware-accelerated physics in settings, you’ll receive a note reading, “Your hardware does not support Nvidia Hardware Accelerated PhysX. Performance will be reduced without dedicated hardware.” […] The in-game benchmark shows that with the hardware-accelerated physics setting enabled on the RTX 5070 Ti, I saw a hit of 65 average FPS compared to the setting off, from 164 to 99. The difference in ambiance without the setting enabled is striking.

[…]

In other games, like Borderlands 2, it simply grays out the PhysX option in settings. As one Reddit user found, you can force it through editing the game files, but that will result in horrible framerate drops even when shooting a gun at a wall. It’s not what the game makers intended. If you want to play these older games in their prime, your best option is to plug a separate, older GeForce GPU into the system and run 32-bit PhysX games exclusively on that card.

[…]

we see Nvidia deprecating its own hardware capabilities, hurting games that are little more than a decade old

[…]

Source: Nvidia Drops Support for PhysX on Its RTX 50-Series Cards

Pebble Founder Is Bringing the Smartwatch Back as Google Open-Sources Its Software

There’s some good news to share for Pebble fans: The no-frills smartwatch is making a comeback. The Verge spoke to Pebble founder Eric Migicovsky today, who says he was able to convince Google to open-source the smartwatch’s operating system. Migicovsky is in the early stages of prototyping a new watch and spinning up a company again under a to-be-announced new name.

Founded back in 2012, Pebble was initially funded on Kickstarter and created smartwatches with e-ink displays that nailed the basics. They could display notifications, let users control their music, and last 5-7 days on a charge thanks to displays akin to what you find on a Kindle. The watches came in at affordable prices too, and they worked across both iOS and Android.

[…]

Fans of Pebble will be happy to know that whatever new smartwatch Migicovsky releases, it will be almost identical to what came before. “We’re building a spiritual, not successor, but clone of Pebble,” he says, “because there’s not that much I actually want to change.” Migicovsky plans to keep the software open-source and allow anyone to customize it for their watches. “There’s going to be the ability for anyone who wants to, to take Pebble source code, compile it, run it on their Pebbles, build new Pebbles, build new watches. They could even use it in random other hardware. Who knows what people can do with it now?”

And of course, this time around Migicovsky is using his own capital to grow the company in a sustainable way. After leaving Pebble, he started a messaging startup called Beeper, which was acquired by WordPress developer Automattic. Migicovsky has also served as an investor at Y Combinator.

It is unclear when Migicovsky’s first watch may be available, but updates will be shared at rePebble.com.

Source: Pebble Founder Is Bringing the Smartwatch Back as Google Open-Sources Its Software

A new optical memory platform for super fast calculations

[…] photonics, which offers lower energy consumption and latency than electronics.

One of the most promising approaches is in-memory computing, which requires the use of photonic memories. Passing light signals through these memories makes it possible to perform operations nearly instantaneously. But solutions proposed for creating such memories have faced challenges such as low switching speeds and limited programmability.
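
The in-memory computing idea itself can be sketched in a few lines: input data are encoded as light intensities, the memory cells hold “weights” as transmission states, and the multiply-accumulate happens as the light passes through. The toy model below is purely conceptual and is not the magneto-optical device described in the paper.

```python
# Conceptual toy model of photonic in-memory computing: a matrix-vector
# product "happens" as light passes through an array of memory cells whose
# transmission coefficients store the weights.

import numpy as np

rng = np.random.default_rng(0)

inputs = rng.random(4)        # light intensities launched into 4 waveguides
weights = rng.random((4, 3))  # transmission states stored in a 4x3 memory array

# Each output port collects the weighted sum of the transmitted light.
outputs = inputs @ weights

print("input intensities:", np.round(inputs, 3))
print("detected outputs: ", np.round(outputs, 3))

# Reprogramming the memory (e.g. flipping magnetic states) just rewrites
# `weights`; the same pass of light then performs a different computation.
```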

Now, an international team of researchers has developed a groundbreaking photonic platform to overcome those limitations. Their findings were published in the journal Nature Photonics.

[…]

The researchers used a magneto-optical material, cerium-substituted yttrium iron garnet (YIG), the optical properties of which dynamically change in response to external magnetic fields. By employing tiny magnets to store data and control the propagation of light within the material, they pioneered a new class of magneto-optical memories. The innovative platform leverages light to perform calculations at significantly higher speeds and with much greater efficiency than can be achieved using traditional electronics.

This new type of memory has switching speeds 100 times faster than those of state-of-the-art photonic integrated technology. It consumes about one-tenth the power and can be reprogrammed multiple times to perform different tasks. While current state-of-the-art optical memories have a limited lifespan and can be written up to 1,000 times, the team demonstrated that magneto-optical memories can be rewritten more than 2.3 billion times, equating to a potentially unlimited lifespan.
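
To give that endurance figure a human-scale meaning, here is a quick calculation; the one-write-per-second duty cycle is an arbitrary assumption on our part.

```python
# How long 2.3 billion rewrite cycles last at an assumed one write per second.

MAGNETO_OPTICAL_WRITES = 2.3e9   # demonstrated rewrite cycles
CURRENT_OPTICAL_WRITES = 1_000   # limit cited above for today's optical memories
WRITES_PER_SECOND = 1            # assumed duty cycle

years = MAGNETO_OPTICAL_WRITES / WRITES_PER_SECOND / (3600 * 24 * 365)
print(f"At 1 write/s, 2.3e9 cycles last about {years:.0f} years")
print(f"Endurance advantage over 1,000-cycle memories: "
      f"{MAGNETO_OPTICAL_WRITES / CURRENT_OPTICAL_WRITES:,.0f}x")
```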

[…]

Source: A new optical memory platform for super fast calculations | ScienceDaily

Robot arm developed that allows a sense of touch

You can probably complete an amazing number of tasks with your hands without looking at them. But if you put on gloves that muffle your sense of touch, many of those simple tasks become frustrating. Take away proprioception — your ability to sense your body’s relative position and movement — and you might even end up breaking an object or injuring yourself.

[…]

Greenspon and his research collaborators recently published papers in Nature Biomedical Engineering and Science documenting major progress on a technology designed to address precisely this problem: direct, carefully timed electrical stimulation of the brain that can recreate tactile feedback to give nuanced “feeling” to prosthetic hands.

[…]

The researchers’ approach to prosthetic sensation involves placing tiny electrode arrays in the parts of the brain responsible for moving and feeling the hand. On one side, a participant can move a robotic arm by simply thinking about movement, and on the other side, sensors on that robotic limb can trigger pulses of electrical activity called intracortical microstimulation (ICMS) in the part of the brain dedicated to touch.

For about a decade, Greenspon explained, this stimulation of the touch center could only provide a simple sense of contact in different places on the hand.

“We could evoke the feeling that you were touching something, but it was mostly just an on/off signal, and often it was pretty weak and difficult to tell where on the hand contact occurred,” he said.

[…]

By delivering short pulses to individual electrodes in participants’ touch centers and having them report where and how strongly they felt each sensation, the researchers created detailed “maps” of brain areas that corresponded to specific parts of the hand. The testing revealed that when two closely spaced electrodes are stimulated together, participants feel a stronger, clearer touch, which can improve their ability to locate and gauge pressure on the correct part of the hand.

The researchers also conducted exhaustive tests to confirm that the same electrode consistently creates a sensation corresponding to a specific location.

“If I stimulate an electrode on day one and a participant feels it on their thumb, we can test that same electrode on day 100, day 1,000, even many years later, and they still feel it in roughly the same spot,” said Greenspon, who was the lead author on this paper.

[…]

The complementary Science paper went a step further to make artificial touch even more immersive and intuitive. The project was led by first author Giacomo Valle, PhD, a former postdoctoral fellow at UChicago who is now continuing his bionics research at Chalmers University of Technology in Sweden.

“Two electrodes next to each other in the brain don’t create sensations that ’tile’ the hand in neat little patches with one-to-one correspondence; instead, the sensory locations overlap,” explained Greenspon, who shared senior authorship of this paper with Bensmaia.

The researchers decided to test whether they could use this overlapping nature to create sensations that could let users feel the boundaries of an object or the motion of something sliding along their skin. After identifying pairs or clusters of electrodes whose “touch zones” overlapped, the scientists activated them in carefully orchestrated patterns to generate sensations that progressed across the sensory map.

Participants described feeling a gentle gliding touch passing smoothly over their fingers, despite the stimulus being delivered in small, discrete steps. The scientists attribute this result to the brain’s remarkable ability to stitch together sensory inputs and interpret them as coherent, moving experiences by “filling in” gaps in perception.
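
The sequencing idea itself is simple enough to sketch. The electrode names, positions, timings and amplitudes below are invented for illustration; this is not the study’s stimulation protocol, just a schematic of stepping pulses across overlapping “touch zones” so that the percept reads as a glide along the finger.

```python
# Schematic sketch of sequential stimulation across overlapping electrode
# "touch zones" to suggest motion along a finger. All values are invented
# for illustration only.

from dataclasses import dataclass

@dataclass
class Electrode:
    name: str
    zone_center_mm: float  # projected location of the touch zone along the finger

# Overlapping zones from the base of the finger towards the tip (illustrative)
electrodes = [
    Electrode("E01", 0.0),
    Electrode("E07", 4.0),
    Electrode("E12", 8.0),
    Electrode("E15", 12.0),
]

def glide_schedule(electrodes, step_ms=80, amplitude_ua=60):
    """Return (time_ms, electrode_name, amplitude_uA) for a base-to-tip sweep."""
    return [(i * step_ms, e.name, amplitude_ua) for i, e in enumerate(electrodes)]

for t, name, amp in glide_schedule(electrodes):
    print(f"t={t:4d} ms  stimulate {name} at {amp} uA")
```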

The approach of sequentially activating electrodes also significantly improved participants’ ability to distinguish complex tactile shapes and respond to changes in the objects they touched. They could sometimes identify letters of the alphabet electrically “traced” on their fingertips, and they could use a bionic arm to steady a steering wheel when it began to slip through the hand.

These advancements help move bionic feedback closer to the precise, complex, adaptive abilities of natural touch, paving the way for prosthetics that enable confident handling of everyday objects and responses to shifting stimuli.

[…]

“We hope to integrate the results of these two studies into our robotics systems, where we have already shown that even simple stimulation strategies can improve people’s abilities to control robotic arms with their brains,” said co-author Robert Gaunt, PhD, associate professor of physical medicine and rehabilitation and lead of the stimulation work at the University of Pittsburgh.

Greenspon emphasized that the motivation behind this work is to enhance independence and quality of life for people living with limb loss or paralysis.

[…]

Source: Fine-tuned brain-computer interface makes prosthetic limbs feel more real | ScienceDaily

Finally USB decides to use sane terminology to label cables

In 2019, the names used by the USB Implementers Forum’s engineering teams to describe the various speeds of USB got leaked, and the backlash (including our own) was harsh. Names like “USB 3.2 Gen 2” mean nothing to consumers – but neither do marketing-style terms, such as “SuperSpeed USB 10Gbps.”

It’s the latter speed-only designation that became the default standard, where users cared less about numerical gobbledygook and more about just how fast a cable was. (Our reviews simply refer to the port by its shape, such as USB-A, and its speed, such as 5Gbps.) In 2022, the USB world settled upon an updated logo scheme that basically cut out everything but the speed of the device or cable.

Thankfully, the USB-IF has taken the extra step and extended its logo scheme to the latest versions of the USB specification, including USB4. It also removes “USB4v2” from consumer branding.

USB-IF

If you’re buying a USB4 or USB4v2 docking station, you’ll simply see a “USB 80Gbps” or “USB 40Gbps” logo on the side of the box now. While it may be a little disconcerting to see a new logo like this, at least you’ll know exactly what you’re buying.
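
In practice the relabeling amounts to a simple lookup from spec-generation names to speed-based labels. The speeds below are the documented maximums for each generation; the exact consumer strings follow the pattern described above, though USB-IF’s final artwork may word them slightly differently.

```python
# Spec-generation names mapped to speed-based consumer labels.

SPEC_TO_CONSUMER_LABEL = {
    "USB 3.2 Gen 1":    "USB 5Gbps",
    "USB 3.2 Gen 2":    "USB 10Gbps",
    "USB 3.2 Gen 2x2":  "USB 20Gbps",
    "USB4":             "USB 40Gbps",
    "USB4 Version 2.0": "USB 80Gbps",  # "USB4v2" drops out of consumer branding
}

def consumer_label(spec_name: str) -> str:
    """Translate an engineering spec name into the label shown on the box."""
    return SPEC_TO_CONSUMER_LABEL.get(spec_name, spec_name)

print(consumer_label("USB 3.2 Gen 2"))      # -> USB 10Gbps
print(consumer_label("USB4 Version 2.0"))   # -> USB 80Gbps
```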

This is a welcome move on several fronts. For one, USB-C ports typically go unlabeled on PCs, so you can’t be sure whether the USB-C port is an older 10Gbps port or a more modern USB4 or Thunderbolt port. (Thunderbolt 4 and USB4v2 are essentially identical, though Intel has its own certification process. Thunderbolt ports aren’t identified by speed, either.) USB-IF representatives told me that they’d heard a rumor that Dell would begin identifying its ports like the primary image above.

Finally, the updated USB logos will also apply to cables. Jeff Ravencraft, president of the USB-IF, said that was done to clearly communicate the only things consumers cared about: what data speeds the cable supported and how much power it could pass between two devices.

Source: An updated USB logo will now mark the fastest docking stations | PCWorld

FPV Flying In Mixed Reality Is Easier Than You’d Think | Hackaday

Flying a first-person view (FPV) remote-controlled aircraft with goggles is an immersive experience that makes you feel as if you’re really sitting in the cockpit of the plane or quadcopter. Unfortunately, while you’re wearing the goggles, you’re also completely blind to the world around you. That’s why you’re supposed to have a spotter nearby to keep watch on the local meatspace while you’re looping through the air.

But what if you could have the best of both worlds? What if your goggles not only allowed you to see the video stream from your craft’s FPV camera, but also let you see the world around you? That’s precisely the idea behind mixed reality goggles such as the Apple Vision Pro and Meta’s Quest – you just need to put all the pieces together. In a recent video, [Hoarder Sam] shows you exactly how to pull it off, and we have to say, the results look quite compelling.

 

[Sam]’s approach relies on the fact that there are already cheap analog FPV receivers out there that act as standard USB video devices, the idea being that they let you use your laptop, smartphone, or tablet as a monitor. And since the Meta Quest 3 runs a fork of Android, these receivers are conveniently supported out of the box. The only thing you need to do, other than plugging one into the headset, is head over to the software repository for the goggles and download a video player app.
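
If you want to verify that a given receiver really does enumerate as a generic USB (UVC) video device before taping it to a headset, you can open it like any webcam on a laptop. The device index below is an assumption and depends on what other cameras are attached.

```python
# Quick check that an analog FPV receiver shows up as a standard UVC webcam.

import cv2  # pip install opencv-python

cap = cv2.VideoCapture(0)  # assumed device index; adjust if other cameras exist
if not cap.isOpened():
    raise SystemExit("Receiver not detected as a UVC video device")

while True:
    ok, frame = cap.read()
    if not ok:
        break
    cv2.imshow("Analog FPV feed", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press 'q' to quit
        break

cap.release()
cv2.destroyAllWindows()
```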

The FPV receiver can literally be taped to the Meta Quest

With the receiver plugged in and the application running, you’re presented with a virtual display of your FPV feed hovering in front of you that can be moved around and resized. The trick is to get the size and placement of this virtual display down to the point where it doesn’t take up your entire field of vision, allowing you to see the FPV view and the actual aircraft at the same time. Of course, you don’t want to make it too small, or else flying might become difficult.

[Sam] says he didn’t realize just how comfortable this setup would be until he started flying around with it. Obviously being able to see your immediate surroundings is helpful, as it makes it much easier to talk to others and make sure nobody wanders into the flight area. But he says it’s also really nice when bringing your bird in for a landing, as you’ve got multiple viewpoints to work with.

Perhaps the best part of this whole thing is that anyone with a Meta Quest can do this right now. Just buy the appropriate receiver, stick it to your goggles, and go flying. If any readers give this a shot, we’d love to hear how it goes for you in the comments.

Source: FPV Flying In Mixed Reality Is Easier Than You’d Think | Hackaday