Researchers Trained People to Echolocate in Just 10 Weeks

Scientists in the UK say the same sort of echolocation practiced by bats may also help people living with blindness better navigate the world. In a new study, they found that blind and sighted participants who took part in a 10-week training program were able to learn how to perform echolocation, and the blind participants largely reported that it seemed to improve their mobility and ability to live independently afterward.

[…]

In this new research, published in PLOS One, Thaler and her team wanted to test whether inexperienced people, both blind and sighted, could be taught to echolocate in a relatively short period of time, and whether the skill would then actually help people with blindness.

They recruited 14 sighted people and 12 people who became blind early in life for the experiment, which involved 20 training sessions conducted over 10 weeks. The volunteers were between the ages of 21 and 79, and none had regularly used echolocation beforehand (two of the blind participants had some limited prior experience; the rest had none). To validate their tests and set a benchmark, the researchers also enlisted seven people who had been practicing echolocation for at least a decade.

Overall, the team found that all of the participants noticeably improved their performance on tests of echolocation over the 10-week period. The tests involved tasks such as judging the relative location and size of nearby objects, or navigating a natural environment outside the lab without sight. The improvements didn’t seem to be influenced by participants’ age or degree of blindness. A few people even performed as well as expert echolocators on certain tasks, and some sighted participants did better than some blind ones.
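The article doesn’t detail the mechanics, but trained human echolocators typically produce sharp mouth clicks and judge objects from the returning echoes, whose delay encodes distance: sound covers the round trip, so d = v·t/2. A toy illustration of that arithmetic, assuming the standard speed of sound in air (the delay value is made up for the example):

```python
SPEED_OF_SOUND_MPS = 343  # dry air at roughly 20 degrees C

def echo_distance_m(delay_s: float) -> float:
    """Distance to a reflecting object, given the round-trip echo delay."""
    return SPEED_OF_SOUND_MPS * delay_s / 2

# An echo arriving ~11.7 ms after the click puts the object about 2 m away.
print(f"{echo_distance_m(0.0117):.2f} m")  # -> 2.01 m
```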

Blind volunteers were also surveyed three months later about how the training may have affected their lives. They all reported experiencing improvements in their mobility as a result of the training, while 83% also reported feeling more independent. The findings, according to Thaler, suggest that this training can be easily adopted by many people—and that it can help blind people with everyday activities.

[…]

Source: Researchers Trained People to Echolocate in Just 10 Weeks

Simple Slide Coating Gives a Boost to the Resolution of a Microscope

A light microscope has a resolution limit of around 200 nanometers, which makes observing specimens smaller or closer together than that all but impossible. Engineers at the University of California San Diego have found a clever way to improve the resolution of a conventional microscope, but surprisingly it involves no upgrades to the lenses or optics inside it.

According to the Rayleigh criterion, proposed by John William Strutt, 3rd Baron Rayleigh, back in 1896, a traditional light-based microscope’s resolution is limited not only by the optical capabilities of its glass lenses but also by the nature of light itself, a consequence of the diffraction that occurs as light waves pass through an aperture. The limit means that an observer looking through the microscope at two objects closer together than about 200 nanometers will perceive them as a single object.
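For a rough sense of where the ~200-nanometer figure comes from, the Rayleigh criterion gives the smallest resolvable separation as d = 0.61λ/NA, where λ is the wavelength of the light and NA is the objective’s numerical aperture. A quick sketch, using illustrative values rather than anything from the paper:

```python
def rayleigh_limit_nm(wavelength_nm: float, numerical_aperture: float) -> float:
    """Smallest resolvable separation d = 0.61 * wavelength / NA, in nm."""
    return 0.61 * wavelength_nm / numerical_aperture

# Green light (~550 nm) through a good oil-immersion objective (NA ~1.4)
# lands in the same ballpark as the ~200 nm limit quoted above.
print(f"{rayleigh_limit_nm(550, 1.4):.0f} nm")  # -> 240 nm
```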

Electron microscopes, by comparison, blast a sample with a highly focused beam of electrons instead of visible light, and can achieve resolutions of less than a single nanometer. There’s a trade-off, however: samples observed through an electron microscope need to be placed inside a vacuum chamber, which has the unfortunate downside of killing living things, so observing cells and other living processes in action isn’t possible. To date, there hasn’t been an in-between option, but it sounds like that’s exactly what these engineers have created.

Artistic rendering of the new super resolution microscopy technology. Animal cells (red) are mounted on a slide coated with the multilayer hyperbolic metamaterial. Nanoscale structured light (blue) is generated by the metamaterial and then illuminates the animal cells.
Illustration: Yeon Ui Lee – University of California San Diego

To create what’s known as a “super-resolution microscope,” the engineers didn’t actually upgrade the microscope at all. Instead, they developed a hyperbolic metamaterial (a class of engineered materials whose fine structure manipulates light, originally developed to improve optical imaging) that’s applied to a microscope slide, onto which the sample is placed. This particular hyperbolic metamaterial is made from “nanometers-thin alternating layers of silver and silica glass,” which shorten and scatter the wavelengths of visible light that pass through it, producing a series of random speckled patterns.

Those speckled light patterns illuminate the sample sitting on the microscope slide from different angles, allowing a series of low-resolution images to be captured, each highlighting a different part of the sample. Those images are then fed into a reconstruction algorithm that intelligently combines them and spits out a single high-resolution image.
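The paper’s actual reconstruction algorithm isn’t described here, but the general idea (many blurry frames, each taken under a different known speckle pattern, combined into one sharper estimate) can be sketched in a few lines of numpy. Everything below, from the low-pass blur model to the simple demodulate-and-normalize combining step, is an illustrative toy, not the authors’ method:

```python
import numpy as np

rng = np.random.default_rng(0)

def low_pass(img, keep=0.15):
    """Crude model of diffraction-limited imaging: zero out all but the
    lowest spatial frequencies of the image."""
    f = np.fft.fftshift(np.fft.fft2(img))
    n = img.shape[0]
    yy, xx = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
    f[np.hypot(xx, yy) > keep * n] = 0.0
    return np.real(np.fft.ifft2(np.fft.ifftshift(f)))

n, n_frames = 128, 64
obj = np.zeros((n, n))
obj[60:68, 40:42] = obj[60:68, 46:48] = 1.0  # two bars the blur tends to merge

estimate = np.zeros_like(obj)
weight = np.zeros_like(obj)
for _ in range(n_frames):
    pattern = low_pass(rng.random((n, n)), keep=0.3) ** 2  # speckle-like field
    frame = low_pass(obj * pattern)      # one blurry, patterned exposure
    estimate += frame * pattern          # demodulate with the known pattern
    weight += pattern ** 2
result = estimate / np.maximum(weight, 1e-9)  # normalized combined estimate
```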

Comparison of images taken by a light microscope without the hyperbolic metamaterial (left) and with the hyperbolic metamaterial (right): quantum dots.
Image: University of California San Diego

It’s not unlike the sensor-shift approach used in some digital cameras to produce super-resolution photos, in which the image sensor is moved ever so slightly in various directions while multiple images are captured, then combined to merge all of the extra detail. This technology, detailed in a paper recently published in Nature Communications, can boost a conventional light microscope’s resolution to 40 nanometers while still allowing living organisms to be observed. It still can’t compete with what electron microscopes are capable of, but it’s no less remarkable given how easily it can improve the capabilities of more affordable and safer hardware already in use in labs around the world.
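The sensor-shift analogy is easy to make concrete. A minimal sketch of multi-frame shift-and-add, assuming the sub-pixel offsets are known exactly and the upsampling factor is an integer (real camera implementations also align and deblur):

```python
import numpy as np

def shift_and_add(frames, shifts, factor):
    """Place each low-res frame onto a finer grid at its known sub-pixel
    offset, then average wherever samples overlap."""
    h, w = frames[0].shape
    acc = np.zeros((h * factor, w * factor))
    cnt = np.zeros_like(acc)
    for frame, (dy, dx) in zip(frames, shifts):
        ys = int(round(dy * factor)) % factor
        xs = int(round(dx * factor)) % factor
        acc[ys::factor, xs::factor][:h, :w] += frame
        cnt[ys::factor, xs::factor][:h, :w] += 1
    return acc / np.maximum(cnt, 1)

# Four exposures shifted by half a pixel in each direction double the grid.
rng = np.random.default_rng(1)
frames = [rng.random((8, 8)) for _ in range(4)]
shifts = [(0.0, 0.0), (0.0, 0.5), (0.5, 0.0), (0.5, 0.5)]
print(shift_and_add(frames, shifts, factor=2).shape)  # -> (16, 16)
```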

Source: Simple Slide Coating Gives a Boost to the Resolution of a Microscope

A.I. used at sea for first time off coast of Scotland to engage threats to ships

For the first time, Artificial Intelligence (A.I.) is being used by the Royal Navy at sea as part of Exercise Formidable Shield, which is currently taking place off the coast of Scotland.

This Operational Experiment (OpEx), run aboard the Type 45 destroyer HMS Dragon and the Type 23 frigate HMS Lancaster, is using the A.I. applications Startle and Sycoiea, which were tested against a supersonic missile threat.

As part of the Above Water Systems programme, led by Defence Science and Technology Laboratory (Dstl) scientists, the A.I. improves the early detection of lethal threats, accelerates engagement timelines, and provides Royal Navy commanders with a rapid hazard assessment to select the optimum weapon or measure to counter and destroy the target.
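Nothing public describes how Startle or Sycoiea work internally, but the pipeline the release outlines (detect early, assess the hazard, recommend the optimum weapon) maps onto the classic threat-evaluation and weapon-assignment (TEWA) pattern. A deliberately toy sketch of that pattern follows; every class, number, and weapon name here is a made-up illustration, not the Royal Navy’s software:

```python
from dataclasses import dataclass

@dataclass
class Track:
    name: str
    range_km: float
    closing_speed_mps: float

@dataclass
class Weapon:
    name: str
    max_range_km: float
    reaction_s: float  # time needed to cue and launch

def time_to_impact_s(t: Track) -> float:
    return t.range_km * 1000 / t.closing_speed_mps

def recommend(track: Track, weapons: list[Weapon]) -> Weapon | None:
    """Pick the fastest-reacting weapon that can engage before impact."""
    tti = time_to_impact_s(track)
    viable = [w for w in weapons
              if w.max_range_km >= track.range_km and w.reaction_s < tti]
    return min(viable, key=lambda w: w.reaction_s, default=None)

threat = Track("supersonic sea-skimmer", range_km=25, closing_speed_mps=900)
options = [Weapon("area-defence SAM", 120, 12), Weapon("CIWS", 2, 3)]
print(recommend(threat, options))  # the SAM: in range, with ~28 s to impact
```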

[…]

As outlined in the recent Defence Command Paper, the MOD is committed to investing in A.I. and increased automation to transform capabilities as the Armed Forces adapt to meet future threats, which will be supported by the £24bn uplift in defence spending over the next four years.

HMS Lancaster and HMS Dragon are currently trialling the use of A.I. as part of a glimpse into the future of air defence at sea.

HMS Lancaster’s Weapon Engineer Officer, Lieutenant Commander Adam Leveridge said:

Observing Startle and Sycoiea augment the human warfighter in real time against a live supersonic missile threat was truly impressive – a glimpse into our highly-autonomous future.

[…]

Source: A.I. used at sea for first time off coast of Scotland – GOV.UK

This Is What Pilots Actually See Inside Red 6’s Augmented Reality Dogfighting Goggles

Augmented reality systems are on the verge of making a huge impact on how America’s military fights and trains. When it comes to the latter, one company, aptly named Red 6, has identified an inflection point where cost and existing capabilities become problematic for America’s tactical aircraft communities: training for air-to-air combat. Contractor aggressor services have ballooned in recent years, bringing down the cost of providing bad guys for frontline fighter pilots to train against while upping the potential density and complexity of the threats that can be portrayed. Red 6 thinks it can do much of this without any additional jets, pilots, or millions in yearly fuel costs at all, by moving the adversary aircraft into the synthetic realm via augmented reality goggles. Now we finally get to see exactly what pilots see when donning Red 6’s increasingly capable helmet-mounted hardware.

You can read all about Red 6, where the company has been, and where it plans to go, in this in-depth feature interview with its founder and former F-22 Raptor pilot, Daniel Robinson. In it, he talks about how Red 6 started out by creating a huge geometric open-sided cube in the sky to test the original idea and has progressed with better hardware and software ever since. The tech has developed to the point where pilots are actually dogfighting synthetic AI-enabled fighters in augmented reality using Red 6’s gear. And, of course, without any actual flying hardware constraints, any aircraft with any performance capabilities can be accurately represented. So what does this look like from the pilot’s perspective? We can finally share the answer to that question below:

Red 6’s system is called the Airborne Tactical Augmented Reality System (ATARS). The company officially describes ATARS as “the first wide field-of-view, full color, demonstrably proven outdoor Augmented Reality solution that works in dynamic outdoor environments. ATARS allows Virtual and Constructive assets into the real world by allowing pilots and ground operators to see synthetic threats in real-time, outdoors and, critically, in high-speed environments. By blending Augmented Reality and artificial intelligence and using both the indoor and outdoor space around us as a medium, Red 6 has redefined the limits of how the world will experience, share, and interact with its information.”

CEO Daniel Robinson donning ATARS for a test flight.
Photo: Red 6
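Red 6 hasn’t published how ATARS renders its adversaries, but every outdoor see-through AR system has to solve the same core step: re-project a virtual aircraft’s world position into the pilot’s head-tracked display, frame after frame. Here is a bare-bones sketch of that projection under a simple pinhole camera model; the poses, focal length, and coordinates are all invented for illustration:

```python
import numpy as np

def look_at(eye, target, up=(0, 0, 1)):
    """World-to-camera rotation and translation from a viewing direction."""
    f = np.asarray(target, float) - np.asarray(eye, float)
    f /= np.linalg.norm(f)
    r = np.cross(f, up); r /= np.linalg.norm(r)
    u = np.cross(r, f)
    R = np.stack([r, u, -f])  # rows: right, up, backward (camera looks -z)
    return R, -R @ np.asarray(eye, float)

def project(point_world, R, t, f_px=1400, cx=960, cy=540):
    """Pinhole projection of a world point into display pixels."""
    p = R @ np.asarray(point_world, float) + t
    if p[2] >= 0:  # behind the viewer
        return None
    return (cx + f_px * p[0] / -p[2], cy - f_px * p[1] / -p[2])

# Head pose from the helmet tracker; a bandit 1 km ahead and slightly high.
R, t = look_at(eye=(0.0, 0.0, 3000.0), target=(1000.0, 0.0, 3050.0))
print(project((1000.0, 0.0, 3050.0), R, t))  # lands near the display center
```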

Red 6, which just closed a $30M Series A financing round, with the vast majority of those funds coming from Snowpoint Ventures, is on the attack and plans on spreading its innovations into other combat domains in the future, just as we discussed in our big interview piece. Still, the potential for this system to revolutionize one of the most costly aspects of preparing for modern warfare—air-to-air combat training—is becoming very real. The savings from introducing this system, even to a limited degree, for some recurrent air-to-air training would be massive in terms of all the costs involved, including the wear-and-tear these training flights impose on the adversary aircraft, which is usually a similar fighter from the unit’s own squadron.

The company scored another big win last March when Dr. William Roper, who left his previous job as Assistant Secretary of the Air Force for Acquisition, Technology, and Logistics earlier this year, and is considered a highly influential visionary by some, joined Red 6’s advisory board. This vote of confidence from one of the Pentagon’s leading minds on airpower definitely helped the company’s position as a potential major market disruptor.

As for what comes next, Red 6 is about to enter phase three of its Small Business Innovation Research (SBIR) initiative with AFWERX, which will see ATARS deployed aboard T-38 Talon trainers of the 586th Flight Test Squadron at Holloman Air Force Base in New Mexico. There, Air Force pilots will put ATARS through its paces. The next step will be integrating it into an F-16 Viper fighter jet, which will bring another level of challenging performance to the concept.

A 586th Flight Test Squadron T-38 over White Sands Missile Range.
Photo: USAF

At the very least, we can hope that allied air forces will have another tool to train their pilots better and more efficiently where applicable in the not-so-distant future, in essence augmenting the reality of their training capabilities.

Source: This Is What Pilots Actually See Inside Red 6’s Augmented Reality Dogfighting Goggles