*Quest 1, 2 and Pro standalone only at the moment; PCVR coming soon*
Touchly lets you watch any VR180 video in 6DoF and interact with the environment. Standard playback in most VR formats is also supported. And it’s out now for free in the App Lab! https://www.oculus.com/experiences/quest/5564815066942737/
Note: Videos need to be processed with our converter beforehand to be seen in volumetric mode.
It requires both left and right videos to generate the depth map. I’m not sure if that requires an ML model or can be done with regular video filtering algorithms.
The video is preprocessed with the depth map added as a “third view” in an SBS video, so speed isn’t an issue.
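For a rough picture of that layout, here is a minimal sketch (my own illustration, not Touchly’s converter) that stacks a left view, a right view and an already-rendered depth-map video side by side with ffmpeg. The file names, and the assumption that the depth map already exists as its own video, are mine.

```python
# Illustrative only: stack left, right and depth views into one wide SBS-style video.
# Assumes ffmpeg is on the PATH and that left.mp4 / right.mp4 / depth.mp4 already
# exist (Touchly's own converter generates the depth map itself).
import subprocess

subprocess.run([
    "ffmpeg",
    "-i", "left.mp4",    # left-eye view
    "-i", "right.mp4",   # right-eye view
    "-i", "depth.mp4",   # depth map rendered as a greyscale video
    "-filter_complex", "[0:v][1:v][2:v]hstack=inputs=3",  # three views side by side
    "output_sbs_with_depth.mp4",
], check=True)
```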
[…] The standard version of the Omega Seamaster Diver 300M 60 Years Of James Bond watch features a design that aBlogtoWatch describes as “a blend between the original Omega Seamaster Diver 300M that appeared in GoldenEye and the latest edition from No Time To Die.” In other words, it’s not an exact recreation of the piece that Brosnan wore in GoldenEye, but incorporates elements from several watches featured in various Bond films. On the front, the only hint that this watch is in any way Bond themed is the number 60 appearing at the top of the dial, where there is normally a triangle.
It’s only when you flip the watch over that its Bond theming becomes far more apparent. The caseback features a sapphire glass window revealing an animation recreating the iconic opening of Bond films, where the silhouetted character walks on screen as seen through the barrel of a gun. But there are no LCD or OLED screens here. The Seamaster Diver 300M is a purely mechanical timepiece, so to create the animation, Omega leveraged the moiré effect, where interference between spiral patterns on spinning discs reveals a simple four-frame animation of Bond walking in. And because the animation mechanism is tied to the watch’s moving second hand, it plays in a perpetual loop as long as the watch has power and is keeping time.
OMEGA Seamaster Diver 300M 60 Years Of James Bond – Stainless Steel
It’s a fun design element not only because of how subtly it’s executed, but also how it leverages what makes traditional timepieces appealing to many collectors: the complicated mechanics inside that make them work. Unfortunately, with a $7,600 price tag, the Seamaster Diver 300M 60 Years Of James Bond is not really affordable for most Bond fans.
Despite VR having been hyped up for years, not very much has actually happened in the past two. The hardware hasn’t really been refreshed, but this year at least one exciting new entry has arrived and another is promised. Review searches usually turn up the same group of usual suspects but leave out two important companies that are definitely worth a look. Surprisingly, setting up your VR headset is not a question of plug and play. It’s a bit finicky and takes some time. Games need to be optimised, and you will run into strange new terms and things you need to run (Windows Mixed Reality, SteamVR, Windows Mixed Reality for SteamVR, OpenXR) and settings you need to optimise per game. This article offers a primer on that. Despite all this, the experience in games is quite amazing!
First you need to decide how you want to use your VR goggles. They come in two types: tethered (with a cable connected to the PC) or untethered (using wireless communication of some sort to send the image signal).
The biggest advantage of tethered is that the cable’s data throughput is much, much higher, allowing for much more detail and higher framerates, which are important for some games, especially simulators (if you are going to use your VR headset in Flight Simulator 2022, Elite Dangerous, Star Citizen, Star Wars Squadrons, driving sims, etc., you really will need a tethered headset). The disadvantage is that walking around can be a bit more tricky, as there is a cable to mind. Considering the length of the cables (6 m+) this doesn’t have to be a problem, especially if you are sitting down. There are also pulley arrangements available to have the cable come off the ceiling, if you don’t mind how that looks.
The biggest advantage of untethered is that you can wander around easily without tripping on a cable.
Speaking of wandering around, one of the first things you do when you install the headset is set up a boundary with your headset, delineating where you can and can’t walk so you won’t bump into things like your walls, chairs or desk.
Most manufacturers also have a higher-spec “pro” version. As this article is about gaming, I will leave these out.
Options and Specifications
Then come a plethora of options to look at. For the specifications, higher is usually better (unless you are talking about latency and weight). You do pay for the privilege though:
Resolution – be careful, sometimes it’s a per eye resolution, sometimes it’s a total resolution for both eyes. Sometimes there is just one display and sometimes there are two displays (one for each eye). Two is better.
Field of View (FOV) – this can be both vertical and horizontal and is expressed as an angle.
Camera system – some VR sets (the Quest 2 and the Pico 4) have cameras mounted on the headset so you can “see” through the headset when this is turned on (passthrough). The Pico 4’s is colour and very good; the Quest 2’s is black and white. Some VR sets offer eye tracking inside the headset. Some systems use these cameras to track the controllers. (See the video from the 13-minute mark.)
Tracking system – an external tracking system (base stations) is best (but takes up space), and your controllers won’t lose tracking as often. Cameras on the headset can get confused if it is too dark or too bright, or if you swing your controllers out of their field of view.
Controllers – some people prefer certain controllers over others, eg the HP Reverb G2 has a bad reputation for its controllers while the Pico 4’s design is praised. Sometimes you can use another system’s controllers, eg you can use the HTC Vive controllers on the HP Reverb G2 and the Valve Index. Check whether the controllers are in the box you buy (if you want them; if you’re upgrading your headset you may not need them).
Data throughput – is the data throughput sufficient for your needs?
Refresh rate
Pixels Per Degree (PPD) – sharpness of the image on the screen. Some screens are sharper than others.
Glare on the screen
Amount of light bleed – light can get into the headset, which is a distraction. How well does the foam sit around your face?
Comfort of the headband – also a function of the foam and how easy the straps are to adjust.
Weight and balance – a heavier headset can be more comfortable than a lighter one if the headband is more comfortable and better balanced. I haven’t put weight in the table as this is a very subjective experience.
Interpupillary Distance (IPD) or eye separation configuration – is it easy to adjust this to your eyes?
Software in the ecosystem – Meta has spent some time gaining exclusive software for the Quest 2 to entice you to buy their hardware, so if you buy something else you won’t be able to play their games. The PS5 VR system only works on a PlayStation 5.
If you wear glasses, check the size of the glasses spacer – sometimes you can find aftermarket spacers.
Sound quality / Microphone
Ease of setup!
I have a comparison table at the end.
The Headsets
I have divided this into two parts: the standard list you will have seen everywhere, and the extended list, which contains headsets not so frequently surfaced by Google.
The standard list:
Meta Quest 2 for EUR 449,-
Until the coming of the Pico 4 this was the ‘best value’ option. However, you are being tracked in everything you do by Facebook – it requires a Facebook account login, so for me personally, this makes it a no go. It’s a few years old by now and a bit outdated. Enough said.
HTC Vive Cosmos Elite
The affordable option for the low end of the market. Tethered. $449 for the headset only, $749 for the full kit.
HTC Vive Pro 2
The better VR set. This is the high-spec standard unit (but not the highest spec on paper!). Tethered. Its controllers are often used by owners of the Valve Index and the HP Reverb G2. $799 without the kit, $1399 with base stations and 2 controllers. You can buy trackers for your arms and legs separately. Using a wifi kit, it can be turned into an untethered unit.
Valve Index
The upper-midrange unit. Tethered, with base stations. $1079,- for the full kit, $539,- for the headset only.
The extended list
Pico 4
The newest addition to this list – and everyone is raving about it. The new (2022) technology is a step up for everyone. Untethered (unfortunately, as I’m a simmer!). $429,- with 128 GB, $499,- with 256 GB. You only need the extra memory if you want to load games from the ecosystem onto the device; if you game on a PC, this apparently isn’t necessary. Also see the video above if you want to know more about this device.
Note: It’s a Chinese product created by ByteDance – the owner of TikTok. Whilst I have not yet found proof that this is a data-grabbing monster (please correct me if I am wrong), there is plenty of finger-pointing at ByteDance – and TikTok certainly is!
HP Reverb G2v2
Tethered. A very good upper-midrange unit with the sharpest screen and the best audio. A very popular choice for simming. $650,- for the complete set. Make sure you get a v2 version – you can recognise it by the cable having a box on it with an on/off button, and by the headset itself having 2 magnetically removable pieces (glasses spacers) in front of the screen – they also look different.
Left is the G2V2, right is the G2V1
There is a problem with the cable guide which in some cases makes it snap in half. You can contact HP for an RMA for this. There are rumours that HP is getting out of the VR business.
Varjo Aero
The absolute top end, tethered. EUR 1999,-.
Pimax 5K Super
Great specs, but apparently setup is fiddly. EUR 641,- plus EUR 289,- for the controllers. There are optional hand- and eye-tracking modules, and I am unsure whether you need to buy the headphones separately.
Pimax 8K X
Great specs, but apparently setup is fiddly. $1179,- plus EUR 289,- for the controllers. There are optional hand- and eye-tracking modules, and I am unsure whether you need to buy the headphones separately.
Pimax 12K
To be released. Hopefully.
Specifications Table

Resolution
HTC Cosmos Elite – 1440 x 1700 pixels per eye (2880 x 1700 combined)
HTC Vive Pro 2 – 2448 x 2448 pixels per eye (4896 x 2448 combined)
Valve Index – dual 1440 x 1600 RGB LCDs
Pico 4 – 2160 x 2160 pixels per eye
HP Reverb G2V2 – 2160 x 2160 pixels per eye (4320 x 2160 combined), RGB sub-pixels
Varjo Aero – dual mini-LED LCDs, 2880 x 2720 pixels per eye
Pimax 5K Super – 2560 x 1440 pixels per eye (5120 x 1440 combined)
Pimax 8K X – 3840 x 2160 pixels per eye (7680 x 2160 combined)

Field of View
HTC Cosmos Elite – up to 110 degrees
HTC Vive Pro 2 – up to 120 degrees (horizontal)
Valve Index – optimised eye relief adjustment allows a typical user to experience 20° more than the HTC Vive
Pico 4 – 105 degrees
HP Reverb G2V2 – 114 degrees
Varjo Aero – horizontal 115°, diagonal 134° at 12 mm eye relief
Pimax 5K Super – 200 degrees diagonal
Pimax 8K X – 200 degrees diagonal

Refresh Rate
HTC Cosmos Elite – 90 Hz
HTC Vive Pro 2 – 90/120 Hz (only 90 Hz supported via the VIVE Wireless Adapter)
Valve Index – 80/90/120/144 Hz (144 Hz experimental)
Pico 4 – 72/90 Hz
HP Reverb G2V2 – 90 Hz
Varjo Aero – 90 Hz
Pimax 5K Super – 90/120/144/160/180 Hz (higher refresh rates are only available at lower FOV settings)
Pimax 8K X – 60/75/90 Hz (native mode), 110 Hz (upscaling mode)

Tracking system
HTC Cosmos Elite – 6DoF inside-out tracking
HTC Vive Pro 2 – SteamVR Base Station Tracking 2.0
Valve Index – SteamVR 2.0 sensors, compatible with SteamVR 1.0 and 2.0 base stations
Pico 4 – 6DoF positioning system
HP Reverb G2V2 – inside-out 6DoF motion tracking, gyroscope, accelerometer and magnetometer
Varjo Aero – SteamVR 2.0/1.0; eye tracking at 200 Hz with sub-degree accuracy, 1-dot calibration for foveated rendering
Pimax 5K Super – G-sensor, gyroscope, SteamVR 1.0 and 2.0 tracking system
Pimax 8K X – G-sensor, gyroscope, SteamVR 1.0 and 2.0 tracking system

Headphones
HTC Cosmos Elite – stereo headphones
HTC Vive Pro 2 – Hi-Res certified headset and removable Hi-Res certified headphones; high-impedance headphone support (via USB-C analog signal)
Valve Index – built-in 37.5 mm off-ear Balanced Mode Radiators (BMR); frequency response 40 Hz – 24 kHz, impedance 6 Ohm, SPL 98.96 dB at 1 cm

Connections
HTC Cosmos Elite – USB 3.0 (or later), DP 1.2, proprietary connection to faceplates
HTC Vive Pro 2 – Bluetooth, USB-C port for peripherals, DP 1.2 (DP 1.4 required for full resolution)
Valve Index – 5 m tether with 1 m breakaway trident connector; USB 3.0, DisplayPort 1.2, 12 V power, 3.5 mm aux headphone out
HP Reverb G2V2 – DisplayPort 1.3, USB 3.0 Type-C, power adapter
Varjo Aero – headset adapter and 5-metre USB-C cable in-box; PC connections: DisplayPort and USB-A 3.0
Pimax 5K Super – 1 x DisplayPort 1.4, 1 x USB 3.0 Type A, 1 x USB 2.0 Type A
Pimax 8K X – 1 x DisplayPort 1.4, 1 x USB 3.0 Type A, 1 x USB 2.0 Type A

IPD
HTC Cosmos Elite – adjustable Eye Comfort Setting (IPD)
HTC Vive Pro 2 – adjustable IPD range of 57–70 mm
Valve Index – 58–70 mm physical adjustment range
Pico 4 – 62–72 mm (best adjustment system)
HP Reverb G2V2 – 64 mm +/- 4 mm via hardware slider
Varjo Aero – automatic motorised IPD adjustment, supported range 57–73 mm
Pimax 5K Super – 60–70 mm physical adjustment, +/- 2 mm software adjustment
Pimax 8K X – 60–70 mm physical adjustment, +/- 2 mm software adjustment

Camera
Valve Index – stereo 960 x 960 pixel, global shutter, RGB (Bayer)
HP Reverb G2V2 – 2 front-facing cameras and 2 side-facing cameras

PPD (pixels per degree)
Pico 4 – 20.6
Varjo Aero – 35
Software Setup
When you set up a VR headset, you will need to download and install Windows Mixed Reality from the Microsoft Store. After setup, you will most likely need to install SteamVR. SteamVR allows you to play games even if they were not bought in the Steam store (eg in the Epic store). You will also need to install Windows Mixed Reality for SteamVR: https://learn.microsoft.com/en-us/windows/mixed-reality/enthusiast-guide/using-steamvr-with-windows-mixed-reality.
Do you need to install OpenXR? To use OpenXR: from your computer, open the SteamVR app, head to Settings, enable Show Advanced Settings, go to the Developer tab, and set SteamVR as the current OpenXR runtime.
Sign up for betas
This is advised by Microsoft in their guide https://learn.microsoft.com/en-us/windows/mixed-reality/enthusiast-guide/using-steamvr-with-windows-mixed-reality
In Steam, use the drop-down under the Library menu to filter to Tools. In the list, right-click SteamVR and select Properties. Select the Betas tab. Opt in to “beta – public beta” and select Close to confirm. The beta access code field should be left blank.
In Steam, use the drop-down under the Library menu to filter to Software. In the list, right-click Windows Mixed Reality for SteamVR and select Properties. Select the Betas tab. Opt in to “beta – public beta” and select Close to confirm. The beta access code field should be left blank.
Optimising your Graphics settings
Motion Reprojection
With it entirely off there is a bit of stuttering, but detail clarity is very sharp. With it on, motion is fluid.
Disable overlays
Epic: go to C:\Program Files (x86)\Epic Games\Launcher\Portal\Extras\Overlay and rename or move the two files EOSOverlayRenderer-Win64-Shipping.exe and EOSOverlayRenderer-Win32-Shipping.exe.
Steam: Settings > In Game > uncheck “Enable Steam Overlay while in-game”.
Xbox: disable the Xbox Game Bar overlay (yes, on Windows). Open Windows Settings from the Start menu, select Gaming -> Xbox Game Bar, and toggle the overlay to the off position.
https://forums.flightsimulator.com/t/crash-to-desktop-without-error-message/130085 – limit FPS in the Nvidia Control Panel.
https://forums.flightsimulator.com/t/crash-to-desktop-without-error-message/130085/3244 – The HP Reverb G2 goes to sleep after a while despite the registry change and attempts to prevent sleep in Device Manager; switching it back to VR starts it again (the poster reports 4 hours of flight and 0 CTDs). In Device Manager → Universal Serial Bus controllers, go through each device and, in the “Power Management” tab, uncheck “Allow the computer to turn off this device”. Also check the SteamVR Startup/Shutdown settings.
https://www.reddit.com/r/HPReverb/comments/xo5v2z/holographicshell_processwindows_11_performance/ – Run cmd/terminal and paste ‘logman query HolographicShell -ets’ to see if it’s running. If it is, end it using ‘logman stop HolographicShell -ets’ and check again.
Can’t see the SteamVR settings window? Click the icon in the taskbar, right-click on the settings window, select ‘Move’, and use the keyboard arrows to move it to the main display.
If you have a large monitor you can run into the problem that your monitor moves all the icons to the top left when it turns off. To stop this you need an EDID passthrough adapter, but an HDMI EDID passthrough adapter has to support the given resolution as well as the refresh rate – and for >60Hz at 4K (beyond the HDMI 2.0 spec) it must be HDMI 2.1 compatible. There is not much in the >4K@60Hz space, and what is there is expensive.
It’s been nearly a decade since the Pebble smartwatch started shipping to backers of its wildly successful initial Kickstarter campaign, but there’s still life in the ol’ dog yet. The wearables are now compatible with Pixel 7 and Pixel 7 Pro, as well as 64-bit-only Android devices that will arrive later.
As noted by Ars Technica, Katharine Berry, who works on Wear OS and is a prominent member of the Rebble group that’s keeping the Pebble ecosystem alive, wrote that the latest Pebble update comes four years after the previous one. The last update allowed for many of the Pebble app’s functions to run on independent servers. Fitbit, which Google has since bought, shut down Pebble’s servers in 2018, two years after buying some of the smartwatch maker’s assets.
Along with Pixel 7 compatibility, the latest update also improves Caller ID reliability on recent versions of Android. While the app isn’t available on the Google Play Store, the APK is signed with official Pebble keys and retains Google Fit integration, Berry noted.
It’s amazing how amazed the writer of this article is that there are still updates for 10 year old hardware. Shouldn’t it be the norm that hardware is supported for as long as it works – and that should be in the 30/40 year range instead of the 2/3 year range?
The study, published today in Science, was led by Finland’s Aalto University and resulted in a powerful, ultra-tiny spectrometer that fits on a microchip and is operated using artificial intelligence.
The research involved a comparatively new class of super-thin materials known as two-dimensional semiconductors, and the upshot is a proof of concept for a spectrometer that could be readily incorporated into a variety of technologies—including quality inspection platforms, security sensors, biomedical analyzers and space telescopes.
[…]
Traditional spectrometers require bulky optical and mechanical components, whereas the new device could fit on the end of a human hair, Minot said. The new research suggests those components can be replaced with novel semiconductor materials and AI, allowing spectrometers to be dramatically scaled down in size from the current smallest ones, which are about the size of a grape.
[…]
The device is 100% electrically controllable regarding the colors of light it absorbs, which gives it massive potential for scalability and widespread usability
[…]
In medicine, for example, spectrometers are already being tested for their ability to identify subtle changes in human tissue such as the difference between tumors and healthy tissue.
For environmental monitoring, Minot added, spectrometers can detect exactly what kind of pollution is in the air, water or ground, and how much of it is there.
[…]
“If you’re into astronomy, you might be interested in measuring the spectrum of light that you collect with your telescope and having that information identify a star or planet,” he said. “If geology is your hobby, you could identify gemstones by measuring the spectrum of light they absorb.”
Scientists with the University of Chicago have discovered a way to create a material that can be made like a plastic, but conducts electricity more like a metal.
The research, published Oct. 26 in Nature, shows how to make a kind of material in which the molecular fragments are jumbled and disordered, but can still conduct electricity extremely well.
[…]
fundamentally, both of these organic and traditional metallic conductors share a common characteristic. They are made up of straight, closely packed rows of atoms or molecules. This means that electrons can easily flow through the material, much like cars on a highway. In fact, scientists thought a material had to have these straight, orderly rows in order to conduct electricity efficiently.
Then Xie began experimenting with some materials discovered years ago, but largely ignored. He strung nickel atoms like pearls into a string of molecular beads made of carbon and sulfur, and began testing.
To the scientists’ astonishment, the material easily and strongly conducted electricity. What’s more, it was very stable. “We heated it, chilled it, exposed it to air and humidity, and even dripped acid and base on it, and nothing happened,” said Xie. That is enormously helpful for a device that has to function in the real world.
But to the scientists, the most striking thing was that the molecular structure of the material was disordered. “From a fundamental picture, that should not be able to be a metal,” said Anderson. “There isn’t a solid theory to explain this.”
Xie, Anderson, and their lab worked with other scientists around the university to try to understand how the material can conduct electricity. After tests, simulations, and theoretical work, they think that the material forms layers, like sheets in a lasagna. Even if the sheets rotate sideways, no longer forming a neat lasagna stack, electrons can still move horizontally or vertically—as long as the pieces touch.
The end result is unprecedented for a conductive material. “It’s almost like conductive Play-Doh—you can smush it into place and it conducts electricity,” Anderson said.
The scientists are excited because the discovery suggests a fundamentally new design principle for electronics technology. Conductors are so important that virtually any new development opens up new lines for technology, they explained.
One of the material’s attractive characteristics is new options for processing. For example, metals usually have to be melted in order to be made into the right shape for a chip or device, which limits what you can make with them, since other components of the device have to be able to withstand the heat needed to process these materials.
The new material has no such restriction because it can be made at room temperatures. It can also be used where the need for a device or pieces of the device to withstand heat, acid or alkalinity, or humidity has previously limited engineers’ options to develop new technology.
Lenovo has staged its annual Tech World gabfest and teased devices with rollable OLED screens that shrink or expand as applications demand.
The company emitted the video below to show off its rollables. We’ve embedded the vid and set it to start at the moment the rollable phone is demoed. The rollable laptop demo starts at the 53-second mark.
Lenovo has offered no explanation of how the rollables work, and the video above does not show the rear of the prototype rollable smartphone and laptop.
Social media is abuzz lately over the prospect of cheating in tournament strategy games. Is it happening? How is that possible with officials watching? Could there be a hidden receiver somewhere? What can be done to rectify this? These are probing questions!
We’ll get to the bottom of this by making a simple one-way hidden communicator using Adafruit parts and the Adafruit IO service. Not for actual cheating of course, that would be asinine…in brief, a stain on the sport…but to record for posterity whether this sort of backdoor intrusion is even plausible or just an internet myth.
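The guide itself walks through the actual build; as a flavour of how little code the sending side needs, here is a hedged sketch using the Adafruit IO Python client to publish a move to a feed. The feed name, credentials and move encoding are placeholders of mine, not the guide’s.

```python
# Illustrative sketch only (not the guide's exact code): publish a move to an
# Adafruit IO feed so a second hidden device can poll for it.
# Feed name, credentials and the move encoding below are placeholders.
from Adafruit_IO import Client  # pip install adafruit-io

ADAFRUIT_IO_USERNAME = "your-username"  # placeholder
ADAFRUIT_IO_KEY = "your-aio-key"        # placeholder

aio = Client(ADAFRUIT_IO_USERNAME, ADAFRUIT_IO_KEY)

def send_move(move: str) -> None:
    """Publish a move (e.g. 'e2e4') to the shared feed."""
    aio.send_data("secret-moves", move)

def latest_move() -> str:
    """What the receiving gadget would poll for."""
    return aio.receive("secret-moves").value

if __name__ == "__main__":
    send_move("e2e4")
    print(latest_move())
```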
The Nokia T10 tablet has been officially launched by the company via a press release. It is the second tablet by Nokia’s new home, HMD Global, on the market. The device is being touted as a sturdy and portable Android slate with multiple years of software upgrades. The Nokia T10 has arrived as a mid-range Android tablet for global markets.
Specifications, Features
The Nokia T10 tablet comes with an 8-inch HD display. The slate boots Android 12 out-of-the-box. It will be getting two years of major Android OS updates and at least three years of monthly security updates for Android. The slate is powered by the Unisoc T606 processor, which is accompanied by up to 4GB of RAM and 64GB of internal storage. There also are dual stereo speakers with OZO playback to provide an immersive media experience.
[…]
The device has an 8MP primary shooter and a 2MP selfie camera, which supports face unlock functionality. In the connectivity department, the Nokia T10 comes with 4G LTE, dual-band Wi-Fi, Bluetooth, GPS with GLONASS, and a built-in FM radio receiver.
Lastly, the slate is fuelled by a beefy 5,250 mAh battery, which supports 10W charging technology.
Price, Availability
The Nokia T10 Android tablet’s base variant will be available from $159
[…] At the moment, robots are sometimes coated in silicone rubber to give them a fleshy appearance, but the rubber lacks the texture of human skin, he says.
To make more realistic-looking skin, Takeuchi and his colleagues bathed a plastic robot finger in a soup of collagen and human skin cells called fibroblasts for three days. The collagen and fibroblasts adhered to the finger and formed a layer similar to the dermis, which is the second-from-top layer of human skin.
Next, they gently poured other human skin cells called keratinocytes onto the finger to recreate the upper layer of human skin, called the epidermis.
The resulting 1.5-millimetre-thick skin was able to stretch and contract as the finger bent backwards and forwards. As it did this, it wrinkled like normal skin, says Takeuchi. “It is much more realistic than silicone.”
The robot skin could also be healed when it was cut by grafting a collagen sheet onto the wound.
However, the skin began to dry out after a while since it didn’t have blood vessels to replenish it with moisture.
In the future, it may be possible to incorporate artificial blood vessels into the skin to keep it hydrated, as well as sweat glands and hair follicles to make it more realistic, says Takeuchi.
It should also be possible to make different skin colours by adding melanocytes, he says.
The researchers now plan to try coating a whole robot in the living skin. “But since this research field has the potential to build a new relationship between humans and robots, we need to carefully consider the risks and benefits of making it too realistic,” says Takeuchi.
Over the last day or two, there have been a growing number of reports by people who own certain Canon Pixma printers that the devices either won’t turn on at all or, once turned on, get stuck in a reboot loop, cycling on and off as long as they’re plugged in. Verge reader Jamie pointed us to posts on Reddit about the problem and Canon’s own support forum, citing problems with models including the MX490, MX492, MB2010, and MG7520.
Some believe their problem is due to a software update Canon pushed to the printers, but that hasn’t been confirmed yet. In response to an inquiry from The Verge, corporate communications senior director and general manager Christine Sedlacek said, “We are currently investigating this issue and hope to bring resolution shortly as customer satisfaction is our highest priority.”
Until there is an official update or fix, some people in the forums have found that disconnecting the printers from the internet is enough to keep them from rebooting, with control still possible via USB.
To get the printers to work while maintaining your connection to the internet and their connection to local network devices, one reply from a customer on Canon’s support forum suggests a method that many people report has worked for them. If you’re experienced with network setups, DNS servers, and IP addresses, it could be worth trying, but for most people, I’d recommend waiting for an official solution.
To follow their steps, then, after taking your internet offline, turn on the printer, go into its network settings, and, under web service setup, select DNS server setup and choose manual setup. In that section, input an internal network address (192.168.X.X, with numbers replacing X that aren’t in use by any other devices on your local network), press “OK,” and then press “no” for a secondary DNS server. This keeps the printer connected to your router without accessing the wider internet, and, for some reason, has been enough to stop the devices from rebooting.
Water scarcity is a major problem for much of the world’s population, but with the right equipment drinking water can be wrung out of thin air. Researchers at the University of Texas at Austin have now demonstrated a low-cost gel film that can pull many liters of water per day out of even very dry air.
The gel is made up of two main ingredients that are cheap and common – cellulose, which comes from the cell walls of plants, and konjac gum, a widely used food additive. Those two components work together to make a gel film that can absorb water from the air and then release it on demand, without requiring much energy.
First, the porous structure of the gum attracts water to condense out of the air around it. The cellulose, meanwhile, is designed to respond to a gentle heat by turning hydrophobic, releasing the captured water.
Making the gel is also fairly simple, the team says. The basic ingredients are mixed together then poured into a mold, where the mixture sets in two minutes. After that it’s freeze-dried, then peeled out of the mold and ready to get to work. It can be made into basically any shape needed, and scaled up fairly easily and at low cost.
The gel film can be cut and molded into whatever shape is needed
University of Texas at Austin
In tests, the gel film was able to wring an astonishing amount of water out of the air. At a relative humidity of 30 percent, it could produce 13 L (3.4 gal) of water per day per kilogram of gel, and even when the humidity dropped to just 15 percent – which is low, even for desert air – it could still produce more than 6 L (1.6 gal) a day per kilogram.
Without adding any hardware that actually makes contact with the wearer’s face, researchers from Carnegie Mellon University’s Future Interfaces Group have modified an off-the-shelf virtual reality headset so that it recreates the sensation of touch in and around a user’s mouth, finally fulfilling virtual reality’s inevitable one true purpose.
Aside from handheld controllers that occasionally vibrate, most consumer-ready virtual reality devices ignore senses like taste, smell, and touch, and instead focus on visuals and sounds. It’s enough to make virtual reality experiences far more compelling than they were decades ago, but not enough to truly fool the brain into thinking that what your eyes are seeing is possibly a real-life experience.
Researchers working to evolve and improve virtual reality hardware have come up with some truly unique hardware and accessories over the years to make virtual reality feel as real as it looks, but none truly reflect where virtual reality is inevitably going like the research being done at Carnegie Mellon University in regards to mouth haptics. You might not be able to reach out and feel realistic fur on a virtual dog just yet, but experiencing the sensation of drinking from a virtual drinking fountain could be just around the corner—in addition to other experiences that don’t require too much imagination.
The researchers upgraded what appears to be a Meta Quest 2 headset with an array of ultrasonic transducers that are all focused on the user’s mouth, and it works without the need for additional accessories, or other hardware set up around the wearer. We’ve seen ultrasonic transducers used to levitate and move around tiny particles by blasting them with powerful sound waves before, but in this application, they create the feeling of touch on the user’s lips, teeth, and even their tongue while their mouth is open.
A giant virtual spider rains down a flood of poison on the user which they can feel splashing across their lips.
The transducers can do more than just simulate a gentle touch. By pulsing them in specific patterns, they can recreate the feeling of an object sliding or swiping across the lips, or persistent vibrations, such as the continuous splashing of water when leaning down to sip from a virtual drinking fountain.
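The researchers’ firmware isn’t reproduced here, but the core trick behind focusing an ultrasonic array on one spot is easy to sketch: drive each transducer with a delay that compensates for its extra path length to the focal point, so all the wavefronts add up at that spot. The 40 kHz carrier and the geometry below are generic assumptions for illustration, not the CMU team’s parameters.

```python
# Generic phased-array focusing sketch (not the CMU implementation):
# delay each transducer so every wavefront arrives at the focal point in phase.
import math

SPEED_OF_SOUND = 343.0   # m/s in air
FREQ = 40_000.0          # Hz, typical ultrasonic transducer (assumed)

# Example transducer positions on a headset rim and a focal point near the lips,
# all in metres; purely illustrative numbers.
transducers = [(-0.03, 0.0, 0.0), (0.0, 0.0, 0.0), (0.03, 0.0, 0.0)]
focus = (0.0, -0.05, 0.08)

# Delay each element relative to the farthest one so no delay is negative.
dists = [math.dist(t, focus) for t in transducers]
max_d = max(dists)
delays = [(max_d - d) / SPEED_OF_SOUND for d in dists]              # seconds
phases = [(2 * math.pi * FREQ * t) % (2 * math.pi) for t in delays]  # radians

for i, (t, p) in enumerate(zip(delays, phases)):
    print(f"element {i}: delay {t * 1e6:.2f} us, phase {math.degrees(p):.1f} deg")
```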
The researchers have come up with other custom virtual reality experiences that demonstrate how their mouth haptics hardware can introduce more realism, including a hike through a spooky forest where spider webs can be felt across the face, a race where the user can feel the wind in their face, and even virtual eating experiences where food and drinks can be felt inside the mouth. But if and when someone runs with this idea and commercializes the mouth haptics hardware, we’re undoubtedly going to see the world’s first virtual reality kissing booth realized, among other experiences the researchers are probably wisely tip-toeing around.
[…] MIT have developed a paper-thin speaker that can be applied to almost any surface like wallpaper, turning objects like walls into giant noise-cancelling speakers.
[…]
Researchers at MIT’s Organic and Nanostructured Electronics Laboratory have created a new kind of thin-film speaker that’s as thin and flexible as a sheet of paper, but is also able to generate clear, high-quality sound, even when bonded to a rigid surface like a wall. This is not the first time researchers have created ultra-thin lightweight speakers, but previous attempts have resulted in a film that needs to be freestanding and unencumbered to produce sound. When mounted to a rigid surface, past thin speakers’ ability to vibrate and move air is greatly reduced, which limits where and how they can be used. But MIT’s researchers have now come up with a new manufacturing process that solves that problem.
Instead of designing a thin-film speaker that requires the entire panel to vibrate, the researchers started with a sheet of lightweight PET plastic that they perforated with tiny holes using a laser. A layer of thin piezoelectric material called PVDF was then laminated to the underside of the sheet, and then the researchers subjected both layers to a vacuum and 80 degrees Celsius heat, which caused the piezoelectric layer to bulge and push through the laser-cut holes in the top layer. This created a series of tiny domes that are able to pulse and vibrate when an electric current is applied, regardless of whether or not the panel is bonded to a rigid surface. The researchers also added a few extra layers of the durable PET plastic to create a spacer to ensure that the domes can vibrate freely, and to protect them from abrasion damage.
The domes are just “one-sixth the thickness of a human hair” in height and move a mere half micron up and down when they vibrate. Thousands are needed to produce audible sounds, but the researchers also discovered that changing the size of the laser-cut holes, which also alters the size of the domes produced, allows the sound produced by the thin-film panel to be tuned to be louder. Because the domes have such minute movement, just 100 milliwatts of electricity were needed to power a single square meter of the material, compared to more than a full watt of electricity needed to power a standard speaker to create a comparable level of sound pressure.
Addressable LED strings have made it easier than ever to build fun glowable projects with all kinds of exciting animations. However, if you’re not going with a simple grid layout, it can be a little difficult to map your strings out in code. Fear not, for [Jason Coon] has provided a tool to help out with just that!
[Jason]’s web app, accessible here, is used for mapping out irregular layouts when working with addressable LED strings like the WS2812B and others that work with libraries like FastLED and Pixelblaze. If you’re making some kind of LED globe, crazy LED tree, or other non-gridular shape, this tool can help.
The first step is to create a layout of your LEDs in a Google Sheets table, which can then be pasted into the web app. Then, the app handles generating the necessary code to address the LEDs in an order corresponding to the physical layout.
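To make the idea concrete, here is a small sketch of my own (not the output of [Jason]’s tool) showing what such a mapping boils down to: a table of physical coordinates per LED index, so an animation can address pixels by position even though the string is wired in a serpentine.

```python
# Illustrative only: the kind of coordinate map a layout tool generates for a
# serpentine LED string, so animations can address LEDs by physical position.
WIDTH, HEIGHT = 8, 4  # example dimensions

def serpentine_index(x: int, y: int) -> int:
    """Position in the wired string for grid coordinate (x, y), assuming the
    string snakes back and forth row by row."""
    return y * WIDTH + (x if y % 2 == 0 else WIDTH - 1 - x)

# coords[i] is the physical location of the i-th LED on the string.
coords = [None] * (WIDTH * HEIGHT)
for y in range(HEIGHT):
    for x in range(WIDTH):
        coords[serpentine_index(x, y)] = (x, y)

# Example: a horizontal rainbow, mapping x position to hue per LED index.
for i, (x, y) in enumerate(coords):
    hue = int(255 * x / (WIDTH - 1))
    # In an Arduino/FastLED sketch this would become leds[i] = CHSV(hue, 255, 255);
    print(f"LED {i} at {(x, y)} -> hue {hue}")
```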
[Jason] does a great job of explaining how the tool works, and demonstrates it working with a bowtie-like serpentine layout with rainbow animations. The tool can even provide visual previews of the layout so you can verify what you’ve typed in makes sense.
The Mictic One are two Bluetooth bracelets equipped with movement sensors. The bracelets connect to a mobile device (only iOS at the moment, but the Android version is under development). From the Mictic application, we can select different musical instruments and control the sound they produce by moving our hands and arms. Think of an Air Guitar on steroids and you’ll get an idea of how they work. This video helps too.
The fact is that calling the Mictic One an Air Guitar simulator is an understatement, because the app from this Zurich-based startup does much more than that. To begin with, the range of musical instruments we can imitate is quite wide and ranges from the cello to percussion or a DJ’s mixing desk. Each instrument requires you to make different movements with your arms and hands that mimic (to some extent) the actual movements you would make with that instrument.
The app allows you to add (and control) background tracks, and even mix various instruments and record the results. In fact, up to four pairs of bracelets can be connected in case you want to form an augmented reality band. There are also a handful of actual songs, and the company is already making deals with different record labels to add many more. In fact, the device is being sponsored by Moby.
[…]
Wearing the Mictic One is an experience that is as frustrating as it is exciting. It’s frustrating because getting something out of it that sounds good is harder than it looks. It is not enough to wave your arms like a crazed ape; you have to move with precision and smoothness. Luckily, each instrument has a video tutorial in which we can learn the basic movements. It’s exciting because when you learn to make them sound right, the feeling is extremely satisfying.
Soon we will be able to offer you an in-depth review of the device, but the first impression is that they are incredibly fun. The Mictic One (sold as a pair and with a double USB-C cable to charge them both at the same time) are already on sale from the company’s website at a price of 139 Swiss francs (about 135 euros). In the future, the company plans to extend the platform so that it can be used with other devices that do not have the necessary motion sensors, such as mobile phones or smart watches.
We now have data on over 21,000 broken items and what was done to fix them. This information comes from volunteers at our own events and others who use our community repair platform, restarters.net.
Thanks to our partners in the Open Repair Alliance who also collect this kind of data, we were able to include extra data from other networks around the world.
Together, this brought the total to nearly 50,000 broken items.
Want to see this data for yourself? Download the full dataset here
(Note: Links to the datasets that contain fault types are further down this page)
That’s a lot of data. So to analyse it, we focused on three types of products that the European Commission would be investigating:
Printers
Tablets
The batteries that power many of our gadgets.
[…]
Thanks to this collective effort, we were able to identify the most common reasons printers, tablets and batteries become unusable.
These findings are based on the analysis of problems in 647 tablets brought to community repair events; they don’t include a further 131 tablets where poor data quality made it impossible to confirm the main fault.
In addition, many of the items we looked at were fairly old, demonstrating that people really want to keep using their devices for longer.
But we also found that there are lots of barriers to repair that make this tricky. Some of the biggest are the lack of spare parts and repair documentation as well as designs that make opening the product difficult without causing extra damage.
You can see our full results and download the data for yourself here:
We want rules that make products easier to fix. And we’re already using data to push for a real Right to Repair. Just recently, we used previous findings to undermine an industry lobbyist’s anti-repair arguments in an EU policy meeting about upcoming regulations for smartphone and tablet repairability.
As a follow up, we also contributed our findings on common fault types in tablets, making the case for the need for better access to spare parts and repair information for this product category as well.
Next, we hope to increase the pressure on European policymakers for regulating printer repairability and battery-related issues in consumer products. For printers, the European Commission is considering rejecting a “voluntary agreement” proposed by industry, which ignores repairability for consumer printers.
And as for batteries, European institutions are working towards a Batteries Regulation, which must prioritise user-replaceability as well as the availability of spare parts.
The Securities and Exchange Commission has launched an investigation into whether Tesla failed to tell investors and customers about the fire risks of its faulty solar panels.
Whistleblower and ex-employee Steven Henkes accused the company of flouting safety issues in a complaint with the SEC in 2019. He filed a freedom of information request with regulators in September this year, asking to see records relating to the case. An SEC official declined to hand over documents, and confirmed its probe into the company is still in progress.
[…]
Tesla started selling and installing solar panels after it acquired SolarCity for $2.6bn in 2016. But its goal of becoming a renewable energy company hasn’t been smooth. Several fires have erupted from Tesla’s solar panels installed on the roofs of Walmart stores, Amazon warehouses, and people’s homes.
In fact, Walmart sued the company in 2019 after seven of its supermarkets in the US caught fire. The lawsuit accused Tesla of “utter incompetence or callousness, or both.” Walmart later dropped its claims, and settled the matter privately.
Before Walmart’s lawsuit, however, Steven Henkes, who was employed as a field quality manager by Tesla after the acquisition, said he attempted to raise concerns about fire risks with managers. He claimed in a lawsuit [PDF] filed in November last year that he was wrongfully terminated when he was fired in August of that year. Henkes claimed his concerns about defects in the company’s solar panels and electrical connectors were repeatedly ignored, and that he was dismissed after he filed initial whistleblower complaints with the SEC and the US Consumer Product Safety Commission (CPSC).
Over 60,000 people as well as over 500 commercial consumers could have been potentially affected by fire risks from Tesla’s faulty solar panels, the lawsuit said. Tesla started replacing and reimbursing defective components in 2019, Business Insider reported. The CPSC has been investigating the company, too. Tesla did not respond to The Register’s questions.
This could have dramatic consequences for SiP (Silicon Photonics) – a hot topic for those working in the field of integrated optics. Integrated optics is a critical technology in advanced telecommunications networks, and is showing increasing importance in quantum research and devices, such as QKD (Quantum Key Distribution) and various entanglement-type experiments (used in quantum computing).
“This is the holy grail of photonics,” says Jonathan Bradley, an assistant professor in the Department of Engineering Physics (and the student’s co-supervisor) in an announcement from McMaster University. “Fabricating a laser on silicon has been a longstanding challenge.” Bradley notes that Miarabbas Kiani’s achievement is remarkable not only for demonstrating a working laser on a silicon chip, but also for doing so in a simple, cost-effective way that’s compatible with existing global manufacturing facilities. This compatibility is essential, as it allows for volume manufacturing at low cost. “If it costs too much, you can’t mass produce it,” says Bradley.
Apple, having long stood in the way of customers who want to fix their own devices, now says it wants to help those who feel they have the right to repair their own products.
On Wednesday the iBiz announced Self Service Repair, “which will allow customers who are comfortable with completing their own repairs access to Apple genuine parts and tools.”
This may be something of a mixed blessing as Apple hardware is notoriously difficult to mend, due to the fact that special tools are often required, parts may be glued together, and components like Apple’s TouchID sensor and T2 security chip can complicate getting devices to work again once reassembled.
Kyle Wiens, CEO of DIY repair community iFixit, told The Register in an email that Apple’s reputation for making difficult to repair products is deserved, particularly for things like AirPods, Apple Pencil, and their keyboards which iFixit has rated 0 out of 10 for repairability.
“Some products that get a 1 are fixable, but it’s really really hard,” said Wiens. “And some like the new MacBook Pro get a 4. Not great but certainly fixable.”
The recently released iPhone 13 received a repairability rating of 5 out of 10. As it happens, Apple last week promised an iOS update to facilitate iPhone 13 screen repair without breaking FaceID.
Initially, Apple will provide more than 200 parts and tools for those determined to conduct common iPhone repairs, such as replacing the display screen, battery, and camera. The program will focus first on iPhone 12 and 13 devices, and will expand later to include M1-based Macs.
Starting early next year, DIY-inclined customers in the US will be able to order Apple-approved parts and tools from the Apple Self Service Repair Online Store – at Apple prices – instead of scouring eBay, Alibaba, and various grey market tool and parts sources. The program is expected to expand internationally at a later date.
A victory for the right to repair
Apple’s about-face follows years of lobbying, advocacy, and regulatory pressure by those who support the right to repair purchased products. Previously, the company said such fiddling represented a security risk. In 2017, the iGiant argued that a right to repair bill under consideration in Nebraska would make the state a Mecca for hackers if it passed.
“This is the clear result of tireless advocacy from the repair community and policy proposals on three continents,” said Wiens. “Right to repair investigations at the FTC and the Australian Productivity Commission are ongoing.
“Consumers deserve the right to repair their own products. Repair manuals should not be secret. We’ve been saying this for a long time, and it’s great to see that Apple finally agrees. We still need to pass legislation and guarantee a level playing field for the entire industry. Apple’s announcement shows that it’s possible to do the right thing. Hopefully Samsung will be next.”
Throughout history, people have devised ways to send information across long distances. For centuries we relied on smoke signals, semaphores, and similar physical devices. Electricity changed everything. First the telegraph and then radio transformed communications. Now researchers at the University of Lancaster have demonstrated another way to send wireless data without using electromagnetic radiation. They’ve harnessed fast neutrons from californium-252 and modulated them with information with 100% success.
The setup was interesting. The radioactive material was encased in a cubic-meter steel tank filled with water. A pneumatic system can move the material to one edge of the tank, which allows fast neutrons to escape. A scintillating detector can pick up the increased neutron activity. It seems akin to using what hams call CW and college professors call OOK (on-off keying). You can do that with just about anything you can detect: a flashlight, knocking on wood, or – we suppose – neutrons.
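On-off keying really is that simple. Here is a toy encoder/decoder of my own (not the Lancaster team’s code): a ‘1’ symbol corresponds to the source being moved to the edge of the tank for one symbol period, a ‘0’ leaves it shielded, and the receiver just thresholds the neutron count in each period. The count rates are made-up numbers.

```python
# Toy on-off keying (OOK) model: not the Lancaster setup, just the idea.
# '1' = source moved to the tank edge (high count rate at the detector),
# '0' = source kept shielded (background counts only).
import random

BACKGROUND = 5      # assumed mean counts per symbol period, source shielded
SIGNAL = 50         # assumed mean counts per symbol period, source exposed
THRESHOLD = (BACKGROUND + SIGNAL) // 2

def transmit(bits):
    """Simulated detector counts for each symbol period."""
    return [random.gauss(SIGNAL if b else BACKGROUND, 3) for b in bits]

def receive(counts):
    """Recover bits by thresholding the count in each period."""
    return [1 if c > THRESHOLD else 0 for c in counts]

message = [1, 0, 1, 1, 0, 0, 1, 0]
print(receive(transmit(message)) == message)  # True in almost every run
```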
We wondered what the practical application of this might be. The paper suggests that the technique could send data through metal containment structures like those of a nuclear reactor or, perhaps, a spacecraft where you don’t want anything unnecessarily breaching the containment. After all, neutrons cut through things that would stop a conventional radio wave cold.
It seems like you only have to prove you can detect something to make this work — it really doesn’t matter what it is you are detecting. It seems like it would be much harder to do more advanced types of modulation using neutrons. Maybe this is why we don’t hear aliens. They are all Morse code operators with neutron-based telegraphs.
Controlling your computer with a wave of the hand seems like something from science fiction, and for good reason. From Minority Report to Iron Man, we’ve seen plenty of famous actors controlling their high-tech computer systems by wildly gesticulating in the air. Meanwhile, we’re all stuck using keyboards and mice like a bunch of chumps.
But it doesn’t have to be that way. As [Norbert Zare] demonstrates in his latest project, you can actually achieve some fairly impressive gesture control on your computer using a $10 USD PAJ7620U2 sensor. Well not just the sensor, of course. You need some way to convert the output from the I2C-enabled sensor into something your computer will understand, which is where the microcontroller comes in.
Looking through the provided source code, you can see just how easy it is to talk to the PAJ7620U2. With nothing more exotic than a switch case statement, [Norbert] is able to pick up on the gesture flags coming from the sensor. From there, it’s just a matter of using the Arduino Keyboard library to fire off the appropriate keycodes. If you’re looking to recreate this we’d go with a microcontroller that supports native USB, but technically this could be done on pretty much any Arduino. In fact, in this case he’s actually using the ATtiny85-based Digispark.
This actually isn’t the first time we’ve seen somebody use a similar sensor to pull off low-cost gesture control, but so far, none of these projects have really taken off. It seems like it works well enough in the video after the break, but looks can be deceiving. Have any Hackaday readers actually tried to use one of these modules for their day-to-day futuristic computing?
Ultraleap’s fifth-generation hand tracking platform, known as Gemini, is fully available on Windows. The most robust, flexible hand tracking ever, it’s already powering amazing experiences from Varjo, has been integrated into Qualcomm’s Snapdragon XR2 platform, and is bringing touchless technology to self-service solutions around the world.
The Gemini Windows release is the first step in making the world’s best hand tracking easier to access and more flexible for multiple platforms, camera systems and third-party hardware.
Ultraleap have rebuilt their tracking engine from the ground up to improve hand tracking across various aspects, including:
Improved two-handed interaction
Faster initialization and hand detection
Improved robustness to challenging environmental conditions
Better adaptation to hand anatomy
Ultraleap have also made significant changes to the tracking platform to be able to extend hand tracking to different platforms and hardware. Varjo’s XR-3 and VR-3 headsets and Qualcomm’s XR2 chipset are two variations already announced, with more in the pipeline.
[…]
Meet Ultraleap Gemini – the best hand tracking ever. Fast initialization, interaction with two hands together, tracks diverse hand sizes, works in challenging environments.
Hand tracking will be to XR what touchscreens were to mobile
The Windows release is the first time full Gemini has been made available to all and the first full hand tracking release from the company in three years. Since a preview of the release went out earlier in the year, Ultraleap have been further refining the software ahead of full launch.
The OAK-D is an open-source, full-color depth sensing camera with embedded AI capabilities, and there is now a crowdfunding campaign for a newer, lighter version called the OAK-D Lite. The new model does everything the previous one could do, combining machine vision with stereo depth sensing and an ability to run highly complex image processing tasks all on-board, freeing the host from any of the overhead involved.
The OAK-D Lite camera is actually several elements together in one package: a full-color 4K camera, two greyscale cameras for stereo depth sensing, and onboard AI machine vision processing with Intel’s Movidius Myriad X processor. Tying it all together is an open-source software platform called DepthAI that wraps the camera’s functions and capabilities together into a unified whole.
The goal is to give embedded systems access to human-like visual perception in real-time, which at its core means detecting things, and identifying where they are in physical space. It does this with a combination of traditional machine vision functions (like edge detection and perspective correction), depth sensing, and the ability to plug in pre-trained convolutional neural network (CNN) models for complex tasks like object classification, pose estimation, or hand tracking in real-time.
So how is it used? Practically speaking, the OAK-D Lite is a USB device intended to be plugged into a host (running any OS), and the team has put a lot of work into making it as easy as possible. With the help of a downloadable application, the hardware can be up and running with examples in about half a minute. Integrating the device into other projects or products can be done in Python with the help of the DepthAI SDK, which provides functionality with minimal coding and configuration (and for more advanced users, there is also a full API for low-level access). Since the vision processing is all done on-board, even a Raspberry Pi Zero can be used effectively as a host.
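To give a sense of what “minimal coding and configuration” looks like, here is a rough sketch of a basic colour-preview pipeline with the DepthAI Python API, based on the library’s usual hello-world pattern; exact class and method names may vary between DepthAI releases, so treat it as an outline rather than copy-paste code.

```python
# Minimal DepthAI-style pipeline sketch: stream the colour camera preview to the host.
# Based on the SDK's common hello-world pattern; check the DepthAI docs for the exact
# API of the version you install (pip install depthai opencv-python).
import cv2
import depthai as dai

pipeline = dai.Pipeline()

cam = pipeline.create(dai.node.ColorCamera)   # the OAK-D Lite's colour camera
cam.setPreviewSize(300, 300)
cam.setInterleaved(False)

xout = pipeline.create(dai.node.XLinkOut)     # stream results back over USB
xout.setStreamName("rgb")
cam.preview.link(xout.input)

with dai.Device(pipeline) as device:
    queue = device.getOutputQueue(name="rgb", maxSize=4, blocking=False)
    while True:
        frame = queue.get().getCvFrame()      # OpenCV-compatible frame from the device
        cv2.imshow("OAK-D Lite preview", frame)
        if cv2.waitKey(1) == ord("q"):
            break
```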
There’s one more thing that improves the ease-of-use situation, and that’s the fact that support for the OAK-D Lite (as well as the previous OAK-D) has been added to a software suite called the Cortic Edge Platform (CEP). CEP is a block-based visual coding system that runs on a Raspberry Pi, and is aimed at anyone who wants to rapidly prototype with AI tools in a primarily visual interface, providing yet another way to glue a project together.
With the addition of features like a 120Hz display on some models, Apple’s iPhone 13 lineup is in many ways a step above the phones the company shipped last year. But when it comes to the question of repairability, the story is more complicated. Conducting a teardown of the device, iFixit found it couldn’t get the iPhone 13’s Face ID feature to work if it replaced the phone’s display. No matter what workaround it tried, iFixit could not get Face ID to work again. By its estimation, the display on the iPhone 13 lineup is serial-locked to the device. “Right now, if you replace your screen, Apple kills your Face ID, unless they control the repair,” the company warns.
While obviously not a good look for Apple, there may be a simple explanation for what’s happening. iFixit says it spoke to a licensed repair technician who said they were told by Apple support that the issue is a bug the company plans to fix in a future iOS release. We’ve reached out to Apple for more information. If it turns out that the limitation is not a mistake, it would be a brazen move on Apple’s part, given that the FTC, at the behest of President Joe Biden, recently voted unanimously to tackle unlawful repair restrictions.