A quintet of bit boffins has devised a way to integrate electronic objects into augmented reality applications using their existing visible light sources, like power lights and signal strength indicators, to transmit data.
In a recent research paper, “LightAnchors: Appropriating Point Lights for Spatially-Anchored Augmented Reality Interfaces,” Carnegie Mellon computer scientists Karan Ahuja, Sujeath Pareddy, Robert Xiao, Mayank Goel, and Chris Harrison describe a technique for fetching data from device LEDs and then using those lights as anchor points for overlaid augmented reality graphics.
As depicted in a video published earlier this week on YouTube, LightAnchors allow an augmented reality scene, displayed on a mobile phone, to incorporate data derived from an LED embedded in the real-world object being shown on screen.
Unlike various visual tagging schemes that have been employed for this purpose, like using stickers or QR codes to hold information, LightAnchors rely on existing object features (device LEDs) and can be dynamic, reading live information from LED modulations.
The reason to do so is that device LEDs can serve not only as points to affix AR interface elements, but also as output ports for binary data that gets translated into human-readable form in the on-screen UI.
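On the receiving side, the idea reduces to sampling the LED's brightness once per camera frame and thresholding those samples back into bits. The sketch below illustrates that decode step under assumed framing (a fixed `1010` preamble followed by one byte, one bit per frame); the function names and the protocol details are illustrative assumptions, not the paper's exact scheme.

```python
# Hypothetical receive-side sketch: threshold per-frame brightness samples
# of a tracked LED into bits, find an assumed sync preamble, and decode
# the byte that follows. The preamble and framing are assumptions for
# illustration, not the LightAnchors protocol itself.

PREAMBLE = [1, 0, 1, 0]  # assumed sync pattern sent before each byte

def decode_frames(brightness, threshold=0.5):
    """Threshold per-frame brightness samples and decode one byte, MSB first."""
    bits = [1 if b > threshold else 0 for b in brightness]
    # locate the preamble, then read the 8 data bits that follow it
    for i in range(len(bits) - len(PREAMBLE) - 8 + 1):
        if bits[i:i + len(PREAMBLE)] == PREAMBLE:
            payload = bits[i + len(PREAMBLE):i + len(PREAMBLE) + 8]
            return sum(bit << (7 - k) for k, bit in enumerate(payload))
    return None  # no valid preamble seen in this window

# samples for preamble 1010 followed by 0b01000001 (ASCII 'A')
samples = [0.9, 0.1, 0.8, 0.2, 0.1, 0.9, 0.1, 0.1, 0.1, 0.1, 0.1, 0.9]
print(decode_frames(samples))  # 65
```

A real decoder would also need to track the light's position across frames and tolerate noise and exposure changes; the thresholding above is the minimal core of the idea.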
“Many devices such as routers, thermostats, security cameras already have LEDs that are addressable,” Karan Ahuja, a doctoral student at the Human-Computer Interaction Institute in the School of Computer Science at Carnegie Mellon University, told The Register.
“For devices such as glue guns and power strips, their LED can be co-opted with a very cheap micro-controller (less than US$1) to blink it at high frame rates.”
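The transmit side that such a microcontroller would run is correspondingly simple: map each payload byte to a per-frame on/off schedule for the LED. The sketch below models that in Python (firmware on a real device would toggle a GPIO pin on the same schedule); the `1010` preamble and one-bit-per-frame framing are illustrative assumptions.

```python
# Hypothetical device-side sketch: turn one payload byte into the on/off
# state of the LED for each frame slot. A sub-$1 microcontroller would run
# the equivalent loop in firmware, blinking the existing status LED.
# Preamble and framing are assumptions, not the exact LightAnchors scheme.

PREAMBLE = [1, 0, 1, 0]  # assumed sync pattern so the phone can find the start

def blink_schedule(byte):
    """Return the LED state (1 = on, 0 = off) for each frame slot, MSB first."""
    data_bits = [(byte >> (7 - k)) & 1 for k in range(8)]
    return PREAMBLE + data_bits

print(blink_schedule(0x41))  # [1, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1]
```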