Building on the technology that debuted with Kinect and became a core part of HoloLens, Project Kinect for Azure combines Microsoft’s next-generation depth camera with the power of Azure services to enable new scenarios for developers working with ambient intelligence. This technology will transform AI on the edge with spatial, human, and object understanding, increasing efficiency and unlocking new possibilities.
Developers can leverage capabilities like spatial mapping, segmentation, and human and object recognition.
But the process for redefining “planet” was deeply flawed and widely criticized, even by those who accepted the outcome. At the 2006 IAU conference, held in Prague, the few scientists remaining at the very end of the week-long meeting (less than 4 percent of the world’s astronomers, and an even smaller percentage of the world’s planetary scientists) ratified a hastily drawn definition that contains obvious flaws. For one thing, it defines a planet as an object orbiting our sun – thereby disqualifying the planets around other stars, ignoring the exoplanet revolution, and decreeing that essentially all the planets in the universe are not, in fact, planets.
Even within our solar system, the IAU scientists defined “planet” in a strange way, declaring that if an orbiting world has “cleared its zone,” or thrown its weight around enough to eject all other nearby objects, it is a planet. Otherwise it is not. This criterion is imprecise and leaves many borderline cases, but what’s worse is that they chose a definition that discounts the actual physical properties of a potential planet, electing instead to define “planet” in terms of the other objects that are – or are not – orbiting nearby. This leads to many bizarre and absurd conclusions. For example, it would mean that Earth was not a planet for its first 500 million years of history, because it orbited among a swarm of debris until that time, and also that if you took Earth today and moved it somewhere else, say out to the asteroid belt, it would cease being a planet.
To add insult to injury, they amended their convoluted definition with the vindictive and linguistically paradoxical statement that “a dwarf planet is not a planet.” This seemingly served no purpose but to satisfy those motivated by a desire – for whatever reason – to ensure that Pluto was “demoted” by the new definition.
By and large, astronomers ignore the new definition of “planet” every time they discuss all of the exciting discoveries of planets orbiting other stars. And those of us who actually study planets for a living also discuss dwarf planets without adding an asterisk. But it gets old having to address the misconceptions among the public who think that because Pluto was “demoted” (not exactly a neutral term), it must be more like a lumpy little asteroid than the complex and vibrant planet it is. It is this confusion among students and the public – fostered by journalists and textbook authors who mistakenly accepted the authority of the IAU as the final word – that makes this worth addressing.
Now that DeepMind has solved Go, the company is turning its deep-learning techniques to navigation. Navigation relies on knowing where you are in space relative to your surroundings and continually updating that knowledge as you move. DeepMind scientists trained neural networks to navigate like this in a square arena, mimicking the paths that foraging rats took as they explored the space. The networks got information about the rat’s speed, head direction, distance from the walls, and other details. To the researchers’ surprise, the networks that learned to navigate the space successfully had developed a layer akin to grid cells, the very system that mammalian brains use to navigate.
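To make the setup concrete, here is a minimal sketch of this kind of path-integration training, assuming PyTorch. It is not DeepMind’s architecture: the arena size, trajectory statistics, and layer widths are all invented, and the target is net displacement rather than the place-cell and head-direction-cell activations used in the published work.

```python
# A minimal path-integration sketch (assumes PyTorch; all sizes invented).
import math
import torch
import torch.nn as nn

ARENA = 2.2   # side length of the square arena (metres, arbitrary)
STEPS = 50    # timesteps per simulated foraging trajectory

def random_trajectory(batch):
    """Simulate smooth random foraging paths; return (self-motion inputs, net displacement)."""
    start = torch.rand(batch, 2) * ARENA
    pos = start.clone()
    heading = torch.rand(batch) * 2 * math.pi
    inputs = []
    for _ in range(STEPS):
        heading = heading + 0.3 * torch.randn(batch)          # small random turns
        speed = 0.05 * torch.rand(batch)                      # metres per step
        step = torch.stack([speed * torch.cos(heading),
                            speed * torch.sin(heading)], dim=1)
        pos = (pos + step).clamp(0.0, ARENA)                  # stay inside the walls
        inputs.append(torch.stack([speed,
                                   torch.sin(heading),
                                   torch.cos(heading)], dim=1))
    return torch.stack(inputs, dim=1), pos - start            # (B, T, 3), (B, 2)

class PathIntegrator(nn.Module):
    def __init__(self, hidden=128):
        super().__init__()
        self.rnn = nn.LSTM(3, hidden, batch_first=True)       # integrates self-motion over time
        self.bottleneck = nn.Linear(hidden, 64)                # units whose spatial tuning can be mapped
        self.readout = nn.Linear(64, 2)                        # predicted net displacement (dx, dy)

    def forward(self, x):
        h, _ = self.rnn(x)
        code = torch.relu(self.bottleneck(h[:, -1]))
        return self.readout(code), code

model = PathIntegrator()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(200):                                          # toy training loop
    x, target = random_trajectory(batch=64)
    pred, _ = model(x)
    loss = ((pred - target) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```

In the real experiments, it was hidden units like those in the intermediate layer above that spontaneously developed grid-like spatial firing once the network learned to integrate its own motion.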
A few different cell populations in our brains help us make our way through space. Place cells are so named because they fire when we pass through a particular place in our environment relative to familiar external objects. They are located in the hippocampus—a brain region responsible for memory formation and storage—and are thus thought to provide a cellular place for our memories. Grid cells got their name because they superimpose a hypothetical hexagonal grid upon our surroundings, as if the whole world were overlaid with vintage tiles from the floor of a New York City bathroom. They fire whenever we pass through a node on that grid.
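The hexagonal pattern is easy to see with an idealized model that is standard in the neuroscience literature (it is not taken from the article or the DeepMind paper): summing three cosine gratings whose orientations differ by 60 degrees produces firing bumps arranged on a hexagonal lattice.

```python
# Idealized grid-cell firing pattern: three cosine gratings, 60 degrees apart.
import numpy as np

spacing = 0.5                                  # distance between firing fields (metres, arbitrary)
k = 4 * np.pi / (np.sqrt(3) * spacing)         # wave number that yields that spacing
angles = np.deg2rad([0, 60, 120])              # grating orientations

xs = np.linspace(0, 2.0, 120)
X, Y = np.meshgrid(xs, xs)                     # positions in a 2 m x 2 m arena

rate = sum(np.cos(k * (X * np.cos(a) + Y * np.sin(a))) for a in angles)
rate = np.maximum(rate, 0)                     # rectify: the cell fires only near grid nodes

# 'rate' now peaks on a hexagonal lattice; plotting it (e.g. with matplotlib's
# plt.imshow(rate)) shows the tiled, honeycomb-like firing fields.
```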
More DeepMind experiments showed that only the neural networks that developed layers that “resembled grid cells, exhibiting significant hexagonal periodicity (gridness),” could navigate more complicated environments than the initial square arena, like setups with multiple rooms. And only these networks could adjust their routes based on changes in the environment, recognizing and using shortcuts to get to preassigned goals after previously closed doors were opened to them.
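“Gridness” has a conventional quantitative definition borrowed from the rodent literature, which the DeepMind work adapts: correlate a unit’s spatial autocorrelogram with rotated copies of itself, and check that the correlation is high at 60 and 120 degrees but low at 30, 90, and 150. The sketch below is a simplified version (it skips the annular masking around the central peak used in the full measure); `rate_map` stands for any 2-D map of a unit’s activity over arena positions, and NumPy and SciPy are assumed.

```python
# Simplified gridness score: rotational symmetry of the spatial autocorrelogram.
import numpy as np
from scipy.signal import correlate
from scipy.ndimage import rotate

def gridness(rate_map):
    centered = rate_map - rate_map.mean()
    auto = correlate(centered, centered, mode="full", method="fft")  # spatial autocorrelogram
    auto /= np.abs(auto).max()

    def corr_at(angle):
        rotated = rotate(auto, angle, reshape=False)
        return np.corrcoef(auto.ravel(), rotated.ravel())[0, 1]

    on_peaks = min(corr_at(60), corr_at(120))                  # high for a hexagonal pattern
    off_peaks = max(corr_at(30), corr_at(90), corr_at(150))    # low for a hexagonal pattern
    return on_peaks - off_peaks                                # clearly positive suggests gridness
```

Applied to the idealized hexagonal map from the earlier sketch, this score should come out clearly positive; applied to a unit with no spatial structure, it should hover near zero.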
Implications
These results have a couple of interesting ramifications. One is the suggestion that grid cells are the optimal way to navigate. They didn’t have to emerge here—there was nothing dictating their formation—yet this computer system hit upon them as the best solution, just like our biological system did. Since the evolution of any system, cell type, or protein can proceed along multiple parallel paths, it is very much not a given that the system we end up with is in any way inevitable or optimized. This report suggests that, in the case of grid cells, the solution we ended up with may in fact be the optimal one.
Another implication is the support for the idea that grid cells function to impose a Euclidean framework upon our surroundings, allowing us to find and follow the most direct route to a (remembered) destination. This function had been posited since the discovery of grid cells in 2005, but it had not yet been demonstrated empirically. DeepMind’s findings also lend support to the idea floated by Kant in the 18th century that our perception of place is an innate ability, independent of experience.
Ultimately, AI systems are only useful and safe as long as the goals they’ve learned actually mesh with what humans want them to do, and it can often be hard to know if they’ve subtly learned to solve the wrong problems or make bad decisions in certain conditions.
To make AI easier for humans to understand and trust, researchers at the nonprofit research organization OpenAI have proposed training algorithms to not only classify data or make decisions, but to justify their decisions in debates with other AI programs in front of a human or AI judge.
“Given a question or proposed action, two agents take turns making short statements up to a limit, then a human judges which of the agents gave the most true, useful information,” write OpenAI researchers Geoffrey Irving, Paul Christiano and Dario Amodei in a new research paper. The San Francisco-based AI lab is funded by Silicon Valley luminaries including Y Combinator President Sam Altman and Tesla CEO Elon Musk, with a goal of building safe, useful AI to benefit humanity.
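Stripped of the learning machinery, the protocol in that quote is essentially a turn-taking loop. Below is a bare-bones sketch in Python; every name and interface is invented for illustration, and OpenAI’s paper defines the game more formally, including how the agents are trained.

```python
# Bare-bones debate protocol: two agents alternate statements, then a judge decides.
from typing import Callable, List, Tuple

Agent = Callable[[str, List[str]], str]   # (question, transcript so far) -> next statement
Judge = Callable[[str, List[str]], int]   # (question, full transcript)   -> winning agent (0 or 1)

def run_debate(question: str, agents: Tuple[Agent, Agent], judge: Judge,
               max_turns: int = 6) -> int:
    transcript: List[str] = []
    for turn in range(max_turns):
        speaker = turn % 2                                    # agents take turns
        statement = agents[speaker](question, transcript)
        transcript.append(f"Agent {speaker}: {statement}")
    return judge(question, transcript)                        # index of the agent judged more convincing
```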
Since human time is valuable and usually limited, the researchers say the AI systems can effectively train themselves in part by debating in front of an AI judge designed to mimic human decision making, similar to how software that plays games like Go or chess often trains in part by playing against itself.
In an experiment described in their paper, the researchers set up a debate where two software agents work with a standard set of handwritten numerals, attempting to convince an automated judge that a particular image is one digit rather than another digit, by taking turns revealing one pixel of the digit at a time. One bot is programmed to tell the truth, while another is programmed to lie about what number is in the image, and they reveal pixels to support their contentions that the digit is, say, a five rather than a six.
[Image: Microsoft’s computer vision API incorrectly determined this image contains sheep. Courtesy Janelle Shane / aiweirdness.com]
The truth-telling bots tend to reveal pixels from distinctive parts of the digit, like the horizontal line at the top of the numeral “5,” while the lying bots, in an attempt to deceive the judge, point out what amount to the most ambiguous areas, like the curve at the bottom of both a “5” and a “6.” The judge ultimately “guesses” which bot is telling the truth based on the pixels that have been revealed.

The image classification task, where most of the image is invisible to the judge, is a sort of stand-in for complex problems where it wouldn’t be possible for a human judge to analyze the entire dataset to judge bot performance. The judge would have to rely on the facets of the data highlighted by debating robots, the researchers say.
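To make the mechanics concrete, here is a toy, numpy-only re-creation of that pixel-revealing game. It is deliberately simplified and is not OpenAI’s setup: the judge here is a template matcher over the revealed pixels rather than a classifier pre-trained on sparse images, and both agents simply make the greedy reveal that most helps their own claim rather than learning a policy. The caller supplies `image` (a 2-D grayscale array) and `templates` (per-class mean images of the same shape, e.g. computed from MNIST).

```python
# Toy pixel-revealing debate: two agents reveal pixels for rival claims; a crude judge decides.
import numpy as np

def judge_score(image, mask, label, templates):
    """Judge's score for `label`: negative squared error between revealed pixels and the class template."""
    diff = (image - templates[label]) * mask
    return -np.sum(diff ** 2)

def greedy_reveal(image, mask, label, templates):
    """Reveal the hidden pixel that most improves the judge's score for `label`."""
    best, best_pixel = -np.inf, None
    for idx in zip(*np.where(mask == 0)):
        trial = mask.copy()
        trial[idx] = 1
        score = judge_score(image, trial, label, templates)
        if score > best:
            best, best_pixel = score, idx
    mask = mask.copy()
    mask[best_pixel] = 1
    return mask

def debate(image, claim_a, claim_b, templates, turns=6):
    mask = np.zeros_like(image, dtype=float)             # 1 = revealed, 0 = hidden
    for t in range(turns):
        claim = claim_a if t % 2 == 0 else claim_b       # agents take turns revealing pixels
        mask = greedy_reveal(image, mask, claim, templates)
    scores = {c: judge_score(image, mask, c, templates) for c in (claim_a, claim_b)}
    return max(scores, key=scores.get)                   # judge's verdict: the better-supported claim
```

Even in this crude version, the same tension appears: each reveal strengthens one claim while constraining the other, and the judge must decide from partial evidence only.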
“The goal here is to model situations where we have something that’s beyond human scale,” says Irving, a member of the AI safety team at OpenAI. “The best we can do there is replace something a human couldn’t possibly do with something a human can’t do because they’re not seeing an image.”
[…]
To test their hypothesis—that two debaters can lead to honest behavior even if the debaters know much more than the judge—the researchers have also devised an interactive demonstration of their approach, played entirely by humans and now available online. In the game, two human players are shown an image of either a dog or a cat and argue before a judge as to which species is represented. The contestants are allowed to highlight rectangular sections of the image to make their arguments—pointing out, for instance, a dog’s ears or cat’s paws—but the judge can “see” only the shapes and positions of the rectangles, not the actual image. While the honest player is required to tell the truth about what animal is shown, he or she is allowed to tell other lies in the course of the debate. “It is an interesting question whether lies by the honest player are useful,” the researchers write.
[…]
The researchers emphasize that it’s still early days, and the debate-based method still requires plenty of testing before AI developers will know exactly when it’s an effective strategy or how best to implement it. For instance, they may find that it’s better to use single judges or a panel of voting judges, or that some people are better equipped to judge certain debates.
It also remains to be seen whether humans will be accurate judges of sophisticated bots working on harder problems. People might be biased to rule in a certain way based on their own beliefs, and there could be problems that are hard to reduce enough to have a simple debate about, like the soundness of a mathematical proof, the researchers write.
Other less subtle errors may be easier to spot, like the sheep that Shane noticed had been erroneously labeled by Microsoft’s algorithms. “The agent would claim there’s sheep and point to the nonexistent sheep, and the human would say no,” Irving writes in an email to Fast Company.
But deceitful bots might also learn to appeal to human judges in sophisticated ways that don’t involve offering rigorous arguments, Shane suggested. “I wonder if we’d get kind of demagogue algorithms that would learn to exploit human emotions to argue their point,” she says.