EU sets up High-Level Group on Artificial Intelligence

Following an open selection process, the Commission has appointed 52 experts to a new High-Level Expert Group on Artificial Intelligence, comprising representatives from academia, civil society and industry.

The High-Level Expert Group on Artificial Intelligence (AI HLG) will have the general objective of supporting the implementation of the European strategy on AI. This will include elaborating recommendations on future AI-related policy development and on ethical, legal and societal issues related to AI, including socio-economic challenges.

Moreover, the AI HLG will serve as the steering group for the European AI Alliance’s work, interact with other initiatives, help stimulate a multi-stakeholder dialogue, gather participants’ views and reflect them in its analysis and reports.

In particular, the group will be tasked to:

  1. Advise the Commission on next steps addressing AI-related mid to long-term challenges and opportunities through recommendations which will feed into the policy development process, the legislative evaluation process and the development of a next-generation digital strategy.
  2. Propose to the Commission draft AI ethics guidelines, covering issues such as fairness, safety, transparency, the future of work, democracy and more broadly the impact on the application of the Charter of Fundamental Rights, including privacy and personal data protection, dignity, consumer protection and non-discrimination.
  3. Support the Commission on further engagement and outreach mechanisms to interact with a broader set of stakeholders in the context of the AI Alliance, share information and gather their input on the group’s and the Commission’s work.

Source: High-Level Group on Artificial Intelligence | Digital Single Market

Transforming Standard Video Into Slow Motion with AI

Researchers from NVIDIA developed a deep learning-based system that can produce high-quality slow-motion videos from a 30-frame-per-second video, outperforming various state-of-the-art methods that aim to do the same. The researchers will present their work at the annual Computer Vision and Pattern Recognition (CVPR) conference in Salt Lake City, Utah this week. 

“There are many memorable moments in your life that you might want to record with a camera in slow-motion because they are hard to see clearly with your eyes: the first time a baby walks, a difficult skateboard trick, a dog catching a ball,” the researchers wrote in the research paper.  “While it is possible to take 240-frame-per-second videos with a cell phone, recording everything at high frame rates is impractical, as it requires large memories and is power-intensive for mobile devices,” the team explained.

With this new research, users can slow down their recordings after taking them.

Using NVIDIA Tesla V100 GPUs and the cuDNN-accelerated PyTorch deep learning framework, the team trained their system on over 11,000 videos of everyday and sports activities shot at 240 frames per second. Once trained, the convolutional neural network predicted the extra frames.
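The general recipe is easy to sketch: feed a network two neighbouring frames plus a target time t, ask it to predict the in-between frame, and use 240-fps footage as free ground truth by holding out the real middle frames during training. The toy PyTorch sketch below only illustrates that setup; the `InterpNet` architecture, shapes and loss are placeholders, and NVIDIA's actual model is more sophisticated (it reasons about motion between the frames).

```python
import torch
import torch.nn as nn

class InterpNet(nn.Module):
    """Toy frame-interpolation CNN: predicts the frame at time t between two inputs."""
    def __init__(self):
        super().__init__()
        # 2 RGB frames (6 channels) + 1 channel broadcasting the target time t
        self.net = nn.Sequential(
            nn.Conv2d(7, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),
        )

    def forward(self, frame0, frame1, t):
        # t is a scalar in (0, 1); tile it into an extra image channel
        t_map = t.view(-1, 1, 1, 1).expand(-1, 1, *frame0.shape[2:])
        return self.net(torch.cat([frame0, frame1, t_map], dim=1))

model = InterpNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.L1Loss()

# Dummy batch standing in for triplets cut from 240 fps clips:
# frame0 and frame2 are the inputs, frame1 (the held-out middle frame) is the target.
frame0, frame1, frame2 = (torch.rand(4, 3, 64, 64) for _ in range(3))
t = torch.full((4,), 0.5)

pred = model(frame0, frame2, t)
loss = loss_fn(pred, frame1)
loss.backward()
opt.step()
print(f"reconstruction loss: {loss.item():.4f}")
```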

The team used a separate dataset to validate the accuracy of their system.

The result can make videos shot at a lower frame rate look more fluid and less blurry.

“Our method can generate multiple intermediate frames that are spatially and temporally coherent,” the researchers said. “Our multi-frame approach consistently outperforms state-of-the-art single frame methods.”

To help demonstrate the research, the team took a series of clips from The Slow Mo Guys, a popular slow-motion based science and technology entertainment YouTube series created by Gavin Free, starring himself and his friend Daniel Gruchy, and made their videos even slower.

The method can take everyday videos of life’s most precious moments and slow them down to look like your favorite cinematic slow-motion scenes, adding suspense, emphasis, and anticipation.

Source: Transforming Standard Video Into Slow Motion with AI – NVIDIA Developer News CenterNVIDIA Developer News Center

A.I. Can Track Human Bodies Through Walls Now, With Just a Wifi Signal

A new piece of software has been trained to use wifi signals — which pass through walls, but bounce off living tissue — to monitor the movements, breathing, and heartbeats of humans on the other side of those walls. The researchers say this new tech’s promise lies in areas like remote healthcare, particularly elder care, but it’s hard to ignore slightly more dystopian applications.

[…]

“We actually are tracking 14 different joints on the body … the head, the neck, the shoulders, the elbows, the wrists, the hips, the knees, and the feet,” Katabi said. “So you can get the full stick-figure that is dynamically moving with the individuals that are obstructed from you — and that’s something new that was not possible before.”

An animation created by the RF-Pose software as it translates a wifi signal into a visual of human motion behind a wall.

The technology works a little bit like radar, but to teach their neural network how to interpret these granular bits of human activity, the team at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) had to create two separate A.I.s: a student and a teacher.

[…]

The team developed one A.I. program that monitored human movements with a camera on one side of a wall, and fed that information to their wifi X-ray A.I., called RF-Pose, as it struggled to make sense of the radio waves passing through that wall from the other side.

Source: A.I. Can Track Human Bodies Through Walls Now, With Just a Wifi Signal | Inverse

A machine has figured out Rubik’s Cube all by itself – using a reverse technique called autodidactic iteration

In these scenarios, a deep-learning machine is given the rules of the game and then plays against itself. Crucially, it is rewarded at each step according to how it performs. This reward process is hugely important because it helps the machine to distinguish good play from bad play. In other words, it helps the machine learn.

But this doesn’t work in many real-world situations, because rewards are often rare or hard to determine.

For example, random turns of a Rubik’s Cube cannot easily be rewarded, since it is hard to judge whether the new configuration is any closer to a solution. And a sequence of random turns can go on for a long time without reaching a solution, so the end-state reward can only be offered rarely.

In chess, by contrast, there is a relatively large search space but each move can be evaluated and rewarded accordingly. That just isn’t the case for the Rubik’s Cube.

Enter Stephen McAleer and colleagues from the University of California, Irvine. These guys have pioneered a new kind of deep-learning technique, called “autodidactic iteration,” that can teach itself to solve a Rubik’s Cube with no human assistance. The trick that McAleer and co have mastered is to find a way for the machine to create its own system of rewards.

Here’s how it works. Given an unsolved cube, the machine must decide whether a specific move is an improvement on the existing configuration. To do this, it must be able to evaluate the move.

Autodidactic iteration does this by starting with the finished cube and working backwards to find a configuration that is similar to the proposed move. This process is not perfect, but deep learning helps the system figure out which moves are generally better than others.

Having been trained, the network then uses a standard search tree to hunt for suggested moves for each configuration.
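A rough, heavily simplified sketch of that loop is below. To keep it runnable without a full cube simulator, a tiny permutation puzzle stands in for the Rubik's Cube; the `apply_move`, `encode` and network definitions are illustrative assumptions, and the real system pairs the learned value with Monte Carlo tree search rather than the bare value-iteration update shown here.

```python
import random
import numpy as np
import torch
import torch.nn as nn

# Toy stand-in for the cube: states are permutations of 8 stickers and "moves" are a
# few fixed generator permutations. This is NOT a real Rubik's Cube model; it only
# shares the relevant structure (a solved state reached by inverting random scrambles).
SOLVED = tuple(range(8))
MOVES = [(1, 0, 2, 3, 4, 5, 6, 7),       # swap the first two positions
         (0, 2, 3, 1, 4, 5, 6, 7),       # 3-cycle
         (0, 1, 2, 3, 5, 6, 7, 4)]       # 4-cycle

def apply_move(state, move):
    return tuple(state[i] for i in move)

def encode(state):
    # one-hot encode which sticker sits at each position
    x = np.zeros((8, 8), dtype=np.float32)
    x[np.arange(8), state] = 1.0
    return torch.from_numpy(x.flatten())

value_net = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 1))
opt = torch.optim.Adam(value_net.parameters(), lr=1e-3)

for step in range(200):
    # 1. Generate training states by scrambling backwards from the solved state.
    states, targets = [], []
    for _ in range(32):
        state, k = SOLVED, random.randint(1, 6)
        for _ in range(k):
            state = apply_move(state, random.choice(MOVES))
        # 2. Value-iteration target: best (reward + estimated value) over one move.
        best = -1e9
        for move in MOVES:
            child = apply_move(state, move)
            reward = 1.0 if child == SOLVED else -1.0
            with torch.no_grad():
                v_child = 0.0 if child == SOLVED else value_net(encode(child)).item()
            best = max(best, reward + v_child)
        states.append(encode(state))
        targets.append(best)
    # 3. Regress the value network towards those self-generated targets.
    pred = value_net(torch.stack(states)).squeeze(1)
    loss = nn.functional.mse_loss(pred, torch.tensor(targets))
    opt.zero_grad(); loss.backward(); opt.step()

print("trained value of a one-move-from-solved state:",
      value_net(encode(apply_move(SOLVED, MOVES[0]))).item())
```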

The result is an algorithm that performs remarkably well. “Our algorithm is able to solve 100% of randomly scrambled cubes while achieving a median solve length of 30 moves—less than or equal to solvers that employ human domain knowledge,” say McAleer and co.

That’s interesting because it has implications for a variety of other tasks that deep learning has struggled with, including puzzles like Sokoban, games like Montezuma’s Revenge, and problems like prime number factorization.

Indeed, McAleer and co have other goals in their sights: “We are working on extending this method to find approximate solutions to other combinatorial optimization problems such as prediction of protein tertiary structure.”

Source: A machine has figured out Rubik’s Cube all by itself – MIT Technology Review

Uh oh! Here’s yet more AI that creates creepy fake talking heads

Video Machine-learning experts have built a neural network that can manipulate facial movements in videos to create fake footage – in which people appear to say something they never actually said.

It could be used to create convincing yet faked announcements and confessions seemingly uttered by the rich and powerful as well as the average and mediocre, producing a new class of fake news and further separating us all from reality… if it works well enough, naturally.

It’s not quite like Deepfakes, which perversely superimposed the faces of famous actresses and models onto the bodies of raunchy X-rated movie stars.

Instead of mapping faces onto different bodies, though, this latest AI technology controls the target’s face, and manipulates it into copying the head movements and facial expressions of a source. In one of the examples, Barack Obama acts as the source and Vladimir Putin as the target. So it looks as though a speech given by Obama was instead given by Putin.

Obama’s facial expressions are mapped onto Putin’s face using this latest AI technique … Image credit: Hyeongwoo Kim et al

A paper describing the technique, which popped up online at the end of last month, claims to produce realistic results. The method was developed by Hyeongwoo Kim, Pablo Garrido, Ayush Tewari, Weipeng Xu, Justus Thies, Matthias Nießner, Patrick Pérez, Christian Richardt, Michael Zollhöfer, and Christian Theobalt.

The Deepfakes Reddit forum, which has since been shut down, was flooded with people posting tragically bad computer-generated videos of celebs’ blurry and twitchy faces pasted onto porno babes using machine-learning software, with mismatched eyebrows and skittish movements. You could, after a few seconds, tell they were bogus, basically.

A previous similar project created a video of someone pretending to say something he or she hadn’t through lip-synching and an audio clip. Again, the researchers used Barack Obama as an example. But the results weren’t completely convincing since the lip movements didn’t always align properly.

That’s less of a problem with this new approach, however. It’s, supposedly, the first model that can transfer the full three-dimensional head position, head rotation, facial expression, eye gaze and blinking from a source onto a portrait video of a target, according to the paper.

Controlling the target head

It uses a series of facial landmarks to reconstruct the face and track head movements and facial expressions in every frame of both the input source video and the output target video. A facial representation method computes the parameters of the face for both videos.

Next, these parameters are slightly modified and copied from the source to the target face for a realistic mapping. Synthetic images of the target’s face are rendered using an Nvidia GeForce GTX Titan X GPU.

The rendering part is where the generative adversarial network comes in. The training data comes from the tracked video frames of the target video sequence. The goal is to generate fake images good enough to pass for the target video frames and trick a discriminator network.

Only about two thousand frames – which amounts to a minute of footage – are enough to train the network. At the moment, only the facial expressions can be modified realistically. The method doesn’t copy the upper body, and cannot deal with backgrounds that change too much.
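Stripped to its essentials, that rendering stage is a conditional GAN: the generator turns a coarse synthetic rendering of the target's face into a photoreal-looking frame, and the discriminator tries to tell those frames apart from real tracked frames of the target video. The sketch below is a generic conditional-GAN training step under that reading; the tiny networks, the added L1 term and all shapes are placeholders rather than the paper's setup.

```python
import torch
import torch.nn as nn

# Minimal conditional-GAN training step for the rendering stage. The generator turns
# a coarse synthetic face rendering into a frame meant to pass for a real tracked
# frame of the target video; the discriminator tries to tell the two apart.
G = nn.Sequential(nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(64, 3, 3, padding=1), nn.Tanh())
D = nn.Sequential(nn.Conv2d(6, 64, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
                  nn.Conv2d(64, 1, 3, stride=2, padding=1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

# Dummy batch: coarse renderings (conditioning input) and real target-video frames,
# both scaled to the generator's Tanh output range.
rendering = torch.rand(4, 3, 64, 64) * 2 - 1
real_frame = torch.rand(4, 3, 64, 64) * 2 - 1

# Discriminator step: real (rendering, frame) pairs -> 1, generated pairs -> 0.
fake_frame = G(rendering).detach()
d_real = D(torch.cat([rendering, real_frame], dim=1))
d_fake = D(torch.cat([rendering, fake_frame], dim=1))
d_loss = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# Generator step: fool the discriminator while staying close to the real frame.
fake_frame = G(rendering)
d_out = D(torch.cat([rendering, fake_frame], dim=1))
g_loss = bce(d_out, torch.ones_like(d_out)) + 10.0 * nn.functional.l1_loss(fake_frame, real_frame)
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
print(f"D loss {d_loss.item():.3f}, G loss {g_loss.item():.3f}")
```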

Source: Uh oh! Here’s yet more AI that creates creepy fake talking heads • The Register

AI learns to copy human gaming behaviour by watching YouTube

Deep reinforcement learning methods traditionally struggle with tasks where environment rewards are particularly sparse. One successful method of guiding exploration in these domains is to imitate trajectories provided by a human demonstrator. However, these demonstrations are typically collected under artificial conditions, i.e. with access to the agent’s exact environment setup and the demonstrator’s action and reward trajectories. Here we propose a two-stage method that overcomes these limitations by relying on noisy, unaligned footage without access to such data. First, we learn to map unaligned videos from multiple sources to a common representation using self-supervised objectives constructed over both time and modality (i.e. vision and sound). Second, we embed a single YouTube video in this representation to construct a reward function that encourages an agent to imitate human gameplay. This method of one-shot imitation allows our agent to convincingly exceed human-level performance on the infamously hard exploration games Montezuma’s Revenge, Pitfall! and Private Eye for the first time, even if the agent is not presented with any environment rewards.
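In code, the second stage boils down to handing the agent a bonus whenever its own observations land close to successive checkpoints taken from the demonstration video, as measured in the learned embedding space. The sketch below assumes such an embedding network already exists (here it is an untrained placeholder), and the checkpoint spacing, similarity threshold and shapes are made up for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Sketch of turning one demonstration video into a reward signal. The encoder is an
# untrained placeholder; in the paper it is learned with self-supervised objectives
# over time and modality so that YouTube footage and the agent's own observations
# land in a common embedding space.
encoder = nn.Sequential(nn.Conv2d(3, 16, 5, stride=4), nn.ReLU(),
                        nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 32))

def embed(frames):                                 # frames: (N, 3, H, W) in [0, 1]
    with torch.no_grad():
        return F.normalize(encoder(frames), dim=1)

demo_video = torch.rand(256, 3, 84, 84)            # stand-in for the YouTube footage
checkpoints = embed(demo_video[::16])              # one checkpoint embedding every 16 frames

def imitation_reward(observation, next_checkpoint, threshold=0.92):
    """Reward 1.0 (and advance the checkpoint) when the embedded observation is
    close enough, by cosine similarity, to the next unreached demo checkpoint."""
    sim = (embed(observation.unsqueeze(0)) @ checkpoints[next_checkpoint].unsqueeze(1)).item()
    if sim > threshold:
        return 1.0, next_checkpoint + 1
    return 0.0, next_checkpoint

# Toy rollout: the game's own reward is ignored entirely, only imitation reward is used.
# (With an untrained encoder the threshold is rarely crossed; this only shows the shape
# of the reward function.)
next_cp, total = 0, 0.0
for t in range(100):
    obs = torch.rand(3, 84, 84)                    # stand-in for the agent's observation
    reward, next_cp = imitation_reward(obs, min(next_cp, len(checkpoints) - 1))
    total += reward
print("imitation reward collected:", total)
```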

Source: [1805.11592] Playing hard exploration games by watching YouTube

AI better than dermatologists at detecting skin cancer, study finds

For the first time, new research suggests artificial intelligence may be better than highly trained humans at detecting skin cancer. A study conducted by an international team of researchers pitted experienced dermatologists against a machine learning system, known as a deep learning convolutional neural network, or CNN, to see which was more effective at detecting malignant melanomas.

[…]

Fifty-eight dermatologists from 17 countries around the world participated in the study. More than half of the doctors were considered expert level with more than five years’ experience. Nineteen percent said they had between two and five years’ experience, and 29 percent had less than two years’ experience.

[…]

At first look, dermatologists correctly detected an average of 87 percent of melanomas, and accurately identified an average of 73 percent of lesions that were not malignant. By comparison, the CNN correctly detected 95 percent of melanomas.

Things improved a bit for the dermatologists when they were given additional information about the patients along with the photos; then they accurately diagnosed 89 percent of malignant melanomas and 76 percent of benign moles. Still, they were outperformed by the artificial intelligence system, which was working solely from the images.

“The CNN missed fewer melanomas, meaning it had a higher sensitivity than the dermatologists, and it misdiagnosed fewer benign moles as malignant melanoma, which means it had a higher specificity; this would result in less unnecessary surgery,” study author Professor Holger Haenssle, senior managing physician in the Department of Dermatology at the University of Heidelberg in Germany, said in a statement.

The expert dermatologists performed better in the initial round of diagnoses than the less-experienced doctors at identifying malignant melanomas. But their average of correct diagnoses was still worse than the AI system’s.

Source: AI better than dermatologists at detecting skin cancer, study finds – CBS News

AI can tell who you are by your gait using only floor sensors

Human footsteps can provide a unique behavioural pattern for robust biometric systems. We propose spatio-temporal footstep representations from floor-only sensor data in advanced computational models for automatic biometric verification. Our models deliver an artificial intelligence capable of effectively differentiating the fine-grained variability of footsteps between legitimate users (clients) and impostor users of the biometric system. The methodology is validated in the largest to date footstep database, containing nearly 20,000 footstep signals from more than 120 users. The database is organized by considering a large cohort of impostors and a small set of clients to verify the reliability of biometric systems. We provide experimental results in 3 critical data-driven security scenarios, according to the amount of footstep data made available for model training: at airports security checkpoints (smallest training set), workspace environments (medium training set) and home environments (largest training set). We report state-of-the-art footstep recognition rates with an optimal equal false acceptance and false rejection rate of 0.7% (equal error rate), an improvement ratio of 371% from previous state-of-the-art. We perform a feature analysis of deep residual neural networks showing effective clustering of client’s footstep data and provide insights of the feature learning process.

Source: Analysis of Spatio-temporal Representations for Robust Footstep Recognition with Deep Residual Neural Networks – IEEE Journals & Magazine

Robots fight weeds in challenge to agrochemical giants

In a field of sugar beet in Switzerland, a solar-powered robot that looks like a table on wheels scans the rows of crops with its camera, identifies weeds and zaps them with jets of blue liquid from its mechanical tentacles.

Undergoing final tests before the liquid is replaced with weedkiller, the Swiss robot is one of a new breed of AI weeders that investors say could disrupt the $100 billion pesticides and seeds industry by reducing the need for universal herbicides and the genetically modified (GM) crops that tolerate them.

[…]

While still in its infancy, the plant-by-plant approach heralds a marked shift from standard methods of crop production.

Now, non-selective weedkillers such as Monsanto’s Roundup are sprayed on vast tracts of land planted with tolerant GM seeds, driving one of the most lucrative business models in the industry.

‘SEE AND SPRAY’

But ecoRobotix www.ecorobotix.com/en, developer of the Swiss weeder, believes its design could reduce the amount of herbicide farmers use by 20 times. The company said it is close to signing a financing round with investors and the robot is due to go on the market by early 2019.

Blue River, a Silicon Valley startup bought by U.S. tractor company Deere & Co. for $305 million last year, has also developed a machine using on-board cameras to distinguish weeds from crops and only squirt herbicides where necessary.

Its “See and Spray” weed control machine, which has been tested in U.S. cotton fields, is towed by a tractor and the developers estimate it could cut herbicide use by 90 percent once crops have started growing.

German engineering company Robert Bosch is also working on similar precision spraying kits, as are other startups such as Denmark’s Agrointelli.

ROBO Global www.roboglobal.com/about-us, an advisory firm that runs a robotics and automation investment index tracked by funds worth a combined $4 billion, believes plant-by-plant precision spraying will only gain in importance.

“A lot of the technology is already available. It’s just a question of packaging it together at the right cost for the farmers,” said Richard Lightbound, Robo’s CEO for Europe, the Middle East and Africa.

“If you can reduce herbicides by a factor of 10 it becomes very compelling for the farmer in terms of productivity. It’s also eco-friendly and that’s clearly going to be very popular, if not compulsory, at some stage,” he said.

Source: Robots fight weeds in challenge to agrochemical giants | Reuters

Using generative models to make dental crowns better than humans can

Computer vision has advanced so significantly that many discriminative approaches such as object recognition are now widely used in real applications. We present another exciting development that utilizes generative models for the mass customization of medical products such as dental crowns. In the dental industry, it takes a technician years of training to design synthetic crowns that restore the function and integrity of missing teeth. Each crown must be customized to individual patients, and it requires human expertise in a time-consuming and labor-intensive process, even with computer-assisted design software. We develop a fully automatic approach that learns not only from human designs of dental crowns, but also from natural spatial profiles between opposing teeth. The latter is hard to account for by technicians but important for proper biting and chewing functions. Built upon a Generative Adversarial Network (GAN) architecture, our deep learning model predicts the customized crown-filled depth scan from the crown-missing depth scan and opposing depth scan. We propose to incorporate additional space constraints and statistical compatibility into learning. Our automatic designs exceed human technicians’ standards for good morphology and functionality, and our algorithm is being tested for production use.

Source: [1804.00064] Learning Beyond Human Expertise with Generative Models for Dental Restorations

New Artificial Intelligence Beats Tactical Experts in Aerial Combat Simulation

ALPHA is currently viewed as a research tool for manned and unmanned teaming in a simulation environment. In its earliest iterations, ALPHA consistently outperformed a baseline computer program previously used by the Air Force Research Lab for research.  In other words, it defeated other AI opponents.

In fact, it was only after early iterations of ALPHA bested other computer program opponents that Lee then took to manual controls against a more mature version of ALPHA last October. Not only was Lee not able to score a kill against ALPHA after repeated attempts, he was shot out of the air every time during protracted engagements in the simulator.

Since that first human vs. ALPHA encounter in the simulator, this AI has repeatedly bested other experts as well, and is even able to win out against these human experts when its (the ALPHA-controlled) aircraft are deliberately handicapped in terms of speed, turning, missile capability and sensors.

Lee, who has been flying in simulators against AI opponents since the early 1980s, said of that first encounter against ALPHA, “I was surprised at how aware and reactive it was. It seemed to be aware of my intentions and reacting instantly to my changes in flight and my missile deployment. It knew how to defeat the shot I was taking. It moved instantly between defensive and offensive actions as needed.”

He added that with most AIs, “an experienced pilot can beat up on it (the AI) if you know what you’re doing. Sure, you might have gotten shot down once in a while by an AI program when you, as a pilot, were trying something new, but, until now, an AI opponent simply could not keep up with anything like the real pressure and pace of combat-like scenarios.”

[…]

Eventually, ALPHA aims to lessen the likelihood of mistakes, since its operations already occur significantly faster than those of other language-based consumer-product programs. In fact, ALPHA can take in the entirety of sensor data, organize it, create a complete mapping of a combat scenario and make or change combat decisions for a flight of four fighter aircraft in less than a millisecond. Basically, the AI is so fast that it could consider and coordinate the best tactical plan and precise responses, within a dynamic environment, over 250 times faster than ALPHA’s human opponents could blink.

[…]

It would normally be expected that an artificial intelligence with the learning and performance capabilities of ALPHA, applicable to incredibly complex problems, would require a super computer in order to operate.

However, ALPHA and its algorithms require no more than the computing power available in a low-budget PC in order to run in real time and quickly react and respond to uncertainty and random events or scenarios.

[…]

To reach its current performance level, ALPHA’s training has occurred on a $500 consumer-grade PC. This training process started with numerous and random versions of ALPHA. These automatically generated versions of ALPHA proved themselves against a manually tuned version of ALPHA. The successful strings of code are then “bred” with each other, favoring the stronger, or highest-performing, versions. In other words, only the best-performing code is used in subsequent generations. Eventually, one version of ALPHA rises to the top in terms of performance, and that’s the one that is utilized.
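That description is a plain evolutionary loop: score random variants against a baseline, keep the strongest, breed and mutate them, repeat. The sketch below shows such a loop in generic form; the real ALPHA evolves genetic fuzzy tree controllers scored in combat simulations, whereas here the "genome" is just a parameter vector and the fitness function is an arbitrary stand-in.

```python
import random

GENOME_LEN, POP_SIZE, GENERATIONS = 16, 30, 40

def fitness(genome):
    # Placeholder for "fly simulated engagements against the baseline ALPHA and
    # count the outcomes"; here the target is just an arbitrary fixed vector.
    target = [0.5] * GENOME_LEN
    return -sum((g - t) ** 2 for g, t in zip(genome, target))

def crossover(a, b):
    cut = random.randrange(1, GENOME_LEN)
    return a[:cut] + b[cut:]

def mutate(genome, rate=0.1):
    return [g + random.gauss(0, 0.2) if random.random() < rate else g for g in genome]

population = [[random.uniform(0, 1) for _ in range(GENOME_LEN)] for _ in range(POP_SIZE)]

for gen in range(GENERATIONS):
    scored = sorted(population, key=fitness, reverse=True)
    parents = scored[:POP_SIZE // 4]                 # keep only the strongest variants
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children                  # next generation

best = max(population, key=fitness)
print("best fitness after evolution:", round(fitness(best), 4))
```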

[…]

ALPHA is developed by Psibernetix Inc., serving as a contractor to the United States Air Force Research Laboratory.

Support for Ernest’s doctoral research, $200,000 in total, was provided over three years by the Dayton Area Graduate Studies Institute and the U.S. Air Force Research Laboratory.

Source: New Artificial Intelligence Beats Tactical Experts in Combat Simulation, University of Cincinnati

Human-Machine Teaming Joint Concept Note by UK MoD

Joint Concept Note (JCN) 1/18, Human-Machine Teaming articulates the challenges and opportunities that robotic and artificial intelligence (AI) technologies offer, and identifies how we achieve military advantage through human-machine teams. Its purpose is to guide coherent future force development and help frame defence strategy and policy.

The JCN examines:

  • economic and technological trends and the likely impacts of AI and robotic systems on defence
  • potential evolutionary paths that robotic and AI systems may take in conflict
  • the effects of AI and robotics development on conflict across the observe, orient, decide and act (OODA) loop
  • why optimised human-machine teams will be essential to developing military advantage

JCN 1/18 should be read by everyone who needs to understand how AI, robotics and data can change the future character of conflict, for us and our adversaries.

Source: Human-Machine Teaming (JCN 1/18) – GOV.UK

Flagship AI Lab announced as Defence Secretary hosts first meet between British and American defence innovators

As part of the MOD’s commitment to pursue and deliver future capabilities, the Defence Secretary announced the launch of AI Lab – a single flagship for Artificial Intelligence, machine learning and data science in defence based at Dstl in Porton Down. AI Lab will enhance and accelerate the UK’s world-class capability in the application of AI-related technologies to defence and security challenges. Dstl currently delivers more than £20 million of research related to AI and this is forecast to grow significantly.

AI Lab will engage in high-level research on areas from autonomous vehicles to intelligent systems; from countering fake news to using information to deter and de-escalate conflicts; and from enhanced computer network defences to improved decision aids for commanders. AI Lab provides tremendous opportunities to help keep the British public safe from a range of defence and security threats. This new creation will help Dstl contribute more fully to this vital challenge.

Source: Flagship AI Lab announced as Defence Secretary hosts first meet between British and American defence innovators

Boffins build smallest drone to fly itself with AI

A team of computer scientists have built the smallest completely autonomous nano-drone that can control itself without the need for human guidance.

Although computer vision has improved rapidly thanks to machine learning and AI, it remains difficult to deploy algorithms on devices like drones due to memory, bandwidth and power constraints.

But researchers from ETH Zurich in Switzerland and the University of Bologna in Italy have managed to build a hand-sized drone that can fly autonomously while consuming only about 94 milliwatts (0.094 W) of power. Their efforts were published in a paper on arXiv earlier this month.

At the heart of it all is DroNet, a convolutional neural network that processes incoming images from a camera at 20 frames per second. It works out the steering angle, so that it can control the direction of the drone, and the probability of a collision, so that it knows whether to keep going or stop. Training was conducted using thousands of images taken from bicycles and cars driving along different roads and streets.
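Functionally, that is a single network with two outputs per camera frame: a steering angle and a collision probability. The sketch below mimics that interface with a toy model; the layers, input size and the speed rule at the end are assumptions for illustration, not the published DroNet architecture.

```python
import torch
import torch.nn as nn

class TinyDroNet(nn.Module):
    """Toy DroNet-style network: a shared convolutional trunk feeding two heads,
    one regressing a steering angle and one estimating collision probability."""
    def __init__(self):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Conv2d(1, 16, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.steer_head = nn.Linear(32, 1)       # steering angle (regression head)
        self.collision_head = nn.Linear(32, 1)   # collision logit (classification head)

    def forward(self, grey_frame):
        features = self.trunk(grey_frame)
        return self.steer_head(features), torch.sigmoid(self.collision_head(features))

net = TinyDroNet()
frame = torch.rand(1, 1, 200, 200)               # one greyscale camera frame (dummy)
steer, p_collision = net(frame)

# Illustrative control rule only: slow down as the collision probability rises
# and turn by the predicted angle.
speed = 1.0 - p_collision.item()
print(f"steering angle: {steer.item():+.3f} rad, forward speed: {speed:.2f}")
```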

[…]

But it suffers from some of the same setbacks as the older model. Since it was trained with images from a single plane, the drone can only move horizontally and cannot fly up or down.

Autonomous drones are desirable because if we’re going to use drones to do things like deliver packages, it would be grand if they could avoid obstacles instead of flying on known-safe routes. Autonomy will also help drones to monitor environments, spy on people and develop swarm intelligence for military use.

Source: Boffins build smallest drone to fly itself with AI • The Register

AI trained to navigate develops brain-like location tracking

Now that DeepMind has solved Go, the company is applying its deep-learning techniques to navigation. Navigation relies on knowing where you are in space relative to your surroundings and continually updating that knowledge as you move. DeepMind scientists trained neural networks to navigate like this in a square arena, mimicking the paths that foraging rats took as they explored the space. The networks got information about the rat’s speed, head direction, distance from the walls, and other details. To the researchers’ surprise, the networks that learned to successfully navigate this space had developed a layer akin to grid cells. This was surprising because it is the exact same system that mammalian brains use to navigate.

A few different cell populations in our brains help us make our way through space. Place cells are so named because they fire when we pass through a particular place in our environment relative to familiar external objects. They are located in the hippocampus—a brain region responsible for memory formation and storage—and are thus thought to provide a cellular place for our memories. Grid cells got their name because they superimpose a hypothetical hexagonal grid upon our surroundings, as if the whole world were overlaid with vintage tiles from the floor of a New York City bathroom. They fire whenever we pass through a node on that grid.

More DeepMind experiments showed that only the neural networks that developed layers that “resembled grid cells, exhibiting significant hexagonal periodicity (gridness),” could navigate more complicated environments than the initial square arena, like setups with multiple rooms. And only these networks could adjust their routes based on changes in the environment, recognizing and using shortcuts to get to preassigned goals after previously closed doors were opened to them.
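The setup behind those results can be caricatured as supervised path integration: a recurrent network receives the simulated rat's velocity at each step, is trained to predict the activations of place cells (and head-direction cells) that encode the true position, and grid-like units emerge in an intermediate layer. The sketch below shows one such training step with dummy data; the layer sizes, the ReLU bottleneck and the soft targets are simplifications, not DeepMind's exact configuration.

```python
import torch
import torch.nn as nn

# Toy supervised path-integration setup: an LSTM receives the simulated rat's
# velocity at each step and must predict place-cell activations encoding where the
# animal really is. The intermediate linear layer is the one whose units would be
# inspected for grid-like firing.
N_PLACE_CELLS, HIDDEN, BOTTLENECK = 64, 128, 32

class PathIntegrator(nn.Module):
    def __init__(self):
        super().__init__()
        self.rnn = nn.LSTM(input_size=3, hidden_size=HIDDEN, batch_first=True)
        self.bottleneck = nn.Linear(HIDDEN, BOTTLENECK)   # layer examined for grid-like units
        self.place_readout = nn.Linear(BOTTLENECK, N_PLACE_CELLS)

    def forward(self, velocities):
        h, _ = self.rnn(velocities)
        g = torch.relu(self.bottleneck(h))
        return self.place_readout(g), g

model = PathIntegrator()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Dummy batch: 8 trajectories of 100 steps. Inputs are (speed, sin(heading), cos(heading));
# targets are softmax-normalised place-cell activations for the true positions.
velocities = torch.rand(8, 100, 3)
place_targets = torch.softmax(torch.rand(8, 100, N_PLACE_CELLS), dim=-1)

pred, grid_layer = model(velocities)
loss = -(place_targets * torch.log_softmax(pred, dim=-1)).sum(dim=-1).mean()
opt.zero_grad(); loss.backward(); opt.step()
print("grid-layer activations shape:", tuple(grid_layer.shape))
```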

Implications

These results have a couple of interesting ramifications. One is the suggestion that grid cells are the optimal way to navigate. They didn’t have to emerge here—there was nothing dictating their formation—yet this computer system hit upon them as the best solution, just like our biological system did. Since the evolution of any system, cell type, or protein can proceed along multiple parallel paths, it is very much not a given that the system we end up with is in any way inevitable or optimized. This report seems to imply that, with grid cells, that might actually be the case.

Another implication is the support for the idea that grid cells function to impose a Euclidean framework upon our surroundings, allowing us to find and follow the most direct route to a (remembered) destination. This function had been posited since the discovery of grid cells in 2005, but it had not yet been proven empirically. DeepMind’s findings provide a biological bolster for the idea floated by Kant in the 18th century that our perception of place is an innate ability, independent of experience.

Source: AI trained to navigate develops brain-like location tracking | Ars Technica

Why Scientists Think AI Systems Should Debate Each Other

Ultimately, AI systems are only useful and safe as long as the goals they’ve learned actually mesh with what humans want them to do, and it can often be hard to know if they’ve subtly learned to solve the wrong problems or make bad decisions in certain conditions.

To make AI easier for humans to understand and trust, researchers at the nonprofit research organization OpenAI have proposed training algorithms to not only classify data or make decisions, but to justify their decisions in debates with other AI programs in front of a human or AI judge.

“Given a question or proposed action, two agents take turns making short statements up to a limit, then a human judges which of the agents gave the most true, useful information,” write OpenAI researchers Geoffrey Irving, Paul Christiano and Dario Amodei in a new research paper. The San Francisco-based AI lab is funded by Silicon Valley luminaries including Y Combinator President Sam Altman and Tesla CEO Elon Musk, with a goal of building safe, useful AI to benefit humanity.

Since human time is valuable and usually limited, the researchers say the AI systems can effectively train themselves in part by debating in front of an AI judge designed to mimic human decision making, similar to how software that plays games like Go or chess often trains in part by playing against itself.

In an experiment described in their paper, the researchers set up a debate where two software agents work with a standard set of handwritten numerals, attempting to convince an automated judge that a particular image is one digit rather than another digit, by taking turns revealing one pixel of the digit at a time. One bot is programmed to tell the truth, while another is programmed to lie about what number is in the image, and they reveal pixels to support their contentions that the digit is, say, a five rather than a six.

Microsoft’s computer vision API incorrectly determined this image contains sheep [Image: courtesy Janelle Shane / aiweirdness.com]

The truth-telling bots tend to reveal pixels from distinctive parts of the digit, like the horizontal line at the top of the numeral “5,” while the lying bots, in an attempt to deceive the judge, point out what amount to the most ambiguous areas, like the curve at the bottom of both a “5” and a “6.” The judge ultimately “guesses” which bot is telling the truth based on the pixels that have been revealed.

The image classification task, where most of the image is invisible to the judge, is a sort of stand-in for complex problems where it wouldn’t be possible for a human judge to analyze the entire dataset to judge bot performance. The judge would have to rely on the facets of the data highlighted by debating robots, the researchers say.
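A toy version of that game fits in a few lines: two debaters each commit to a digit and take turns revealing whichever hidden pixel most boosts the judge's belief in their own claim, and the judge rules from the revealed pixels alone. In the sketch below the judge is an untrained placeholder network and the image is random noise; in the actual experiment the judge is pre-trained to classify digits from a handful of visible pixels.

```python
import torch
import torch.nn as nn

# Toy pixel-reveal debate on a single 28x28 "digit". Each debater claims a digit and,
# on its turn, reveals the hidden pixel that most boosts the judge's belief in its claim.
judge = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 64), nn.ReLU(), nn.Linear(64, 10))

def judge_probs(image, mask):
    with torch.no_grad():
        return torch.softmax(judge((image * mask).unsqueeze(0)), dim=1)[0]

def best_reveal(image, mask, claimed_digit):
    """Greedily pick the still-hidden pixel whose reveal most increases P(claimed digit)."""
    best_idx, best_p = None, -1.0
    for idx in torch.nonzero(mask.flatten() == 0).flatten().tolist():
        trial = mask.clone().flatten()
        trial[idx] = 1.0
        p = judge_probs(image, trial.view(28, 28))[claimed_digit].item()
        if p > best_p:
            best_idx, best_p = idx, p
    return best_idx

image = torch.rand(28, 28)            # stand-in for an MNIST digit
mask = torch.zeros(28, 28)            # 1 = pixel revealed to the judge
claims = {"honest": 5, "liar": 6}     # each debater argues for a different digit

for turn in range(6):                 # six alternating reveals
    player = "honest" if turn % 2 == 0 else "liar"
    idx = best_reveal(image, mask, claims[player])
    mask.view(-1)[idx] = 1.0

final = judge_probs(image, mask)
verdict = max(claims, key=lambda p: final[claims[p]].item())
print("judge sides with the", verdict, "debater")
```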

“The goal here is to model situations where we have something that’s beyond human scale,” says Irving, a member of the AI safety team at OpenAI. “The best we can do there is replace something a human couldn’t possibly do with something a human can’t do because they’re not seeing an image.”

[…]

To test their hypothesis—that two debaters can lead to honest behavior even if the debaters know much more than the judge—the researchers have also devised an interactive demonstration of their approach, played entirely by humans and now available online. In the game, two human players are shown an image of either a dog or a cat and argue before a judge as to which species is represented. The contestants are allowed to highlight rectangular sections of the image to make their arguments—pointing out, for instance, a dog’s ears or cat’s paws—but the judge can “see” only the shapes and positions of the rectangles, not the actual image. While the honest player is required to tell the truth about what animal is shown, he or she is allowed to tell other lies in the course of the debate. “It is an interesting question whether lies by the honest player are useful,” the researchers write.

[…]

The researchers emphasize that it’s still early days, and the debate-based method still requires plenty of testing before AI developers will know exactly when it’s an effective strategy or how best to implement it. For instance, they may find that it is better to use single judges or a panel of voting judges, or that some people are better equipped to judge certain debates.

It also remains to be seen whether humans will be accurate judges of sophisticated robots working on more sophisticated problems. People might be biased to rule in a certain way based on their own beliefs, and there could be problems that are hard to reduce enough to have a simple debate about, like the soundness of a mathematical proof, the researchers write.

Other less subtle errors may be easier to spot, like the sheep that Shane noticed had been erroneously labeled by Microsoft’s algorithms. “The agent would claim there’s sheep and point to the nonexistent sheep, and the human would say no,” Irving writes in an email to Fast Company.

But deceitful bots might also learn to appeal to human judges in sophisticated ways that don’t involve offering rigorous arguments, Shane suggested. “I wonder if we’d get kind of demagogue algorithms that would learn to exploit human emotions to argue their point,” she says.

Source: Why Scientists Think AI Systems Should Debate Each Other

Infosec brainiacs release public dataset to classify new malware using AI

Researchers at Endgame, a cyber-security biz based in Virginia, have published what they believe is the first large open-source dataset for machine-learning malware detection, known as EMBER.

EMBER contains metadata describing 1.1 million Windows portable executable files: 900,000 training samples evenly split into malicious, benign and unlabelled categories, and 200,000 test samples labelled as malicious or benign.

“We’re trying to push the dark arts of infosec research into an open light. EMBER will make AI research more transparent and reproducible,” Hyrum Anderson, co-author of the study to be presented at the RSA conference this week in San Francisco, told The Register.

Progress in AI is driven by data. Researchers compete with one another by building models and training them on benchmark datasets to reach ever increasing accuracies.

Computer vision is flooded with numerous datasets containing millions of annotated pictures for image recognition tasks, and natural language processing has various text-based datasets to test machine reading and comprehension skills.

Although there is a strong interest in using AI for information security – look at DARPA’s Cyber Grand Challenge where academics developed software capable of hunting for security bugs autonomously – it’s an area that doesn’t really have any public datasets.
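Once the PE metadata has been vectorized into fixed-length feature vectors (the EMBER release includes code for this), the baseline task is ordinary supervised learning. The sketch below substitutes small synthetic arrays for the real feature matrices so it runs standalone, and uses scikit-learn's gradient boosting purely for convenience; the published EMBER baseline is a gradient-boosted-trees (LightGBM) model trained on the full labelled set.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score

# Stand-in for EMBER's vectorized features so the sketch runs without the dataset.
# In the real benchmark each PE file's metadata becomes a fixed-length feature vector,
# with 600,000 labelled training samples (the 300,000 unlabelled ones are skipped here)
# and 200,000 labelled test samples.
rng = np.random.default_rng(0)
n_features = 64                                   # the real feature vectors are much longer
X_train = rng.normal(size=(2000, n_features))
y_train = rng.integers(0, 2, size=2000)           # 0 = benign, 1 = malicious
X_test = rng.normal(size=(500, n_features))
y_test = rng.integers(0, 2, size=500)

# The published EMBER baseline uses gradient-boosted trees (LightGBM); sklearn's
# implementation is used here only to keep the sketch dependency-light.
clf = GradientBoostingClassifier(n_estimators=100, max_depth=3)
clf.fit(X_train, y_train)

scores = clf.predict_proba(X_test)[:, 1]
print("ROC AUC on the (synthetic) test split:", round(roc_auc_score(y_test, scores), 3))
```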

Source: Infosec brainiacs release public dataset to classify new malware using AI • The Register

Artificial intelligence can scour code to find accidentally public passwords

Researchers at software infrastructure firm Pivotal have taught AI to locate this accidentally public sensitive information in a surprising way: by looking at the code as if it were a picture. Since modern artificial intelligence is arguably better than humans at identifying minute differences in images, for a computer, telling the difference between a password and normal code is just like telling a dog from a cat.

The best way to check whether private passwords or sensitive information has been left public today is to use hand-coded rules called “regular expressions.” These rules tell a computer to find any string of characters that meets specific criteria, like length and included characters. But passwords are all different, and this method means that the security engineer has to anticipate every kind of private data they want to guard against.

To automate the process, the Pivotal team first turned the text of passwords and code into matrices, or lists of numbers describing each string of characters. This is the same process used when AI interprets images—similar to how the images reflected into our eyes are turned into electrical signals for the brain, images and text need to be in a simpler form for computers to process.

When the team visualized the matrices, private data looked different from the standard code. Since passwords or keys are often randomized strings of numbers, letters, and symbols—called “high entropy”—they stand out against non-random strings of letters.
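The "high entropy" observation is easy to reproduce: randomized keys use their character set far more uniformly than prose-like code, so their Shannon entropy per character is higher, and laying character codes out as a small matrix gives the image-like input described above. The sketch below shows both steps; the example strings are fabricated (the key follows the AWS documentation's fake example), and the 10x10 matrix size is arbitrary.

```python
import math
from collections import Counter

def shannon_entropy(text: str) -> float:
    """Bits per character of the string; randomised keys score high, prose-like code low."""
    counts = Counter(text)
    total = len(text)
    return -sum(c / total * math.log2(c / total) for c in counts.values())

def char_matrix(text: str, width: int = 10):
    """Lay a snippet out as a small matrix of character codes, the kind of image-like
    representation described above (size chosen arbitrarily here)."""
    codes = [ord(ch) for ch in text.ljust(width * width)[: width * width]]
    return [codes[i:i + width] for i in range(0, len(codes), width)]

secret = "AKIAIOSFODNN7EXAMPLEwJalrXUtnFEMI/K7MDENG"   # fabricated, documentation-style key
code = "for item in items:\n    total += item.price\n"

print("entropy of the secret-looking string:", round(shannon_entropy(secret), 2))
print("entropy of the ordinary code snippet:", round(shannon_entropy(code), 2))
print("first matrix row for the secret:", char_matrix(secret)[0])
```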

Below you can see the matrix for 100 characters of simulated secret information:

A matrix with confidential information.

And here is another for 100 characters of normal, non-secret code:

A matrix with non-secret code. (Pivotal)

The two patterns are completely different, with patches of higher entropy appearing lighter in the first example of “secret” data.

Pivotal then trained a deep learning algorithm typically used for images on the matrices, and, according to Pivotal chief security officer Justin Smith, the end result performed better than the regular expressions the firm typically uses.

Source: Artificial intelligence can scour code to find accidentally public passwords — Quartz

This AI-Controlled Roach Breeding Site Is a Nightmare Factory

In the city of Xichang, located in the southwestern Sichuan province, there is a massive, artificial intelligence-powered roach breeding farm that is producing more than six billion cockroaches per year.

The facility, which is described by the South China Morning Post as a multi-story building about the size of two sports fields, is being operated by Chengdu-based medicine maker Gooddoctor Pharmaceutical Group. Its existence raises a number of questions like, “Oh god, why?” and “Who asked for this monstrosity?”

Inside the breeding site, the environment is described as “warm, humid, and dark” all year round. The layout is wide open, allowing the roaches to roam around freely, find food and water, and reproduce whenever and wherever the right mood strikes.

The insect sex pit is managed by what the South China Morning Post describes as a “smart manufacturing system” that is controlled primarily by algorithms. The system is in charge of analyzing more than 80 categories of data collected from throughout the facility. Everything from the temperature to the level of food consumption is monitored by AI, which is programmed to learn from historical data to determine the best conditions for peak roach fornication.

The billions of roaches that pass through the facility each year never get to see the light of day. From their birth inside the building until their death months or years later, they are locked within the walls of the moist coitus cabin.

Each and every one of the insects is eventually fed into machines and crushed up to be used in a “healing potion” manufactured by the pharmaceutical company responsible for the facility.

The potion—which is described as having a tea-like color, a slightly sweet taste, and a fishy smell—sells for about $8 for two 100ml bottles. While it is used primarily as a fix for stomach issues, the medicine can be prescribed by doctors for just about anything.

Source: This AI-Controlled Roach Breeding Site Is a Nightmare Factory

‘Forget the Facebook leak’: China is mining data directly from workers’ brains on an industrial scale

The workers wear caps to monitor their brainwaves, data that management then uses to adjust the pace of production and redesign workflows, according to the company.

The company said it could increase the overall efficiency of the workers by manipulating the frequency and length of break times to reduce mental stress.

Hangzhou Zhongheng Electric is just one example of the large-scale application of brain surveillance devices to monitor people’s emotions and other mental activities in the workplace, according to scientists and companies involved in the government-backed projects.

Concealed in regular safety helmets or uniform hats, these lightweight, wireless sensors constantly monitor the wearer’s brainwaves and stream the data to computers that use artificial intelligence algorithms to detect emotional spikes such as depression, anxiety or rage.

The technology is in widespread use around the world but China has applied it on an unprecedented scale in factories, public transport, state-owned companies and the military to increase the competitiveness of its manufacturing industry and to maintain social stability.

It has also raised concerns about the need for regulation to prevent abuses in the workplace.

The technology is also in use in Hangzhou at State Grid Zhejiang Electric Power, where it has boosted company profits by about 2 billion yuan (US$315 million) since it was rolled out in 2014, according to Cheng Jingzhou, an official overseeing the company’s emotional surveillance programme.

“There is no doubt about its effect,” Cheng said.

Source: ‘Forget the Facebook leak’: China is mining data directly from workers’ brains on an industrial scale | South China Morning Post

Revealed: how bookies use AI to keep gamblers hooked | Technology | The Guardian

The gambling industry is increasingly using artificial intelligence to predict consumer habits and personalise promotions to keep gamblers hooked, industry insiders have revealed. Current and former gambling industry employees have described how people’s betting habits are scrutinised and modelled to manipulate their future behaviour.

“The industry is using AI to profile customers and predict their behaviour in frightening new ways,” said Asif, a digital marketer who previously worked for a gambling company. “Every click is scrutinised in order to optimise profit, not to enhance a user’s experience.”

“I’ve often heard people wonder about how they are targeted so accurately and it’s no wonder because it’s all hidden in the small print.”

Publicly, gambling executives boast of increasingly sophisticated advertising keeping people betting, while privately conceding that some are more susceptible to gambling addiction when bombarded with these types of bespoke ads and incentives. Gamblers’ every click, page view and transaction is scientifically examined so that ads statistically more likely to work can be pushed through Google, Facebook and other platforms.

[…]

Last August, the Guardian revealed the gambling industry uses third-party companies to harvest people’s data, helping bookmakers and online casinos target people on low incomes and those who have stopped gambling.

Despite condemnation from MPs, experts and campaigners, such practices remain an industry norm.

“You can buy email lists with more than 100,000 people’s emails and phone numbers from data warehouses who regularly sell data to help market gambling promotions,” said Brian. “They say it’s all opted in but people haven’t opted in at all.”

In this way, among others, gambling companies and advertisers create detailed customer profiles including masses of information about their interests, earnings, personal details and credit history.

[…]

Elsewhere, there are plans to geolocate customers in order to identify when they arrive at stadiums so they can be prompted via texts to bet on the game they are about to watch.

The gambling industry earned £14bn in 2016, £4.5bn of which came from online betting, and it is pumping some of that money into making its products more sophisticated and, in effect, more addictive.

Source: Revealed: how bookies use AI to keep gamblers hooked | Technology | The Guardian

Europe divided over robot ‘personhood’

While autonomous robots with humanlike, all-encompassing capabilities are still decades away, European lawmakers, legal experts and manufacturers are already locked in a high-stakes debate about their legal status: whether it’s these machines or human beings who should bear ultimate responsibility for their actions.

The battle goes back to a paragraph of text, buried deep in a European Parliament report from early 2017, which suggests that self-learning robots could be granted “electronic personalities.” Such a status could allow robots to be insured individually and be held liable for damages if they go rogue and start hurting people or damaging property.

Those pushing for such a legal change, including some manufacturers and their affiliates, say the proposal is common sense. Legal personhood would not make robots virtual people who can get married and benefit from human rights, they say; it would merely put them on par with corporations, which already have status as “legal persons,” and are treated as such by courts around the world.

Source: Europe divided over robot ‘personhood’ – POLITICO

Google uses AI to separate out audio from a single person in a noisy video

People are remarkably good at focusing their attention on a particular person in a noisy environment, mentally “muting” all other voices and sounds. Known as the cocktail party effect, this capability comes naturally to us humans. However, automatic speech separation — separating an audio signal into its individual speech sources — while a well-studied problem, remains a significant challenge for computers. In “Looking to Listen at the Cocktail Party”, we present a deep learning audio-visual model for isolating a single speech signal from a mixture of sounds such as other voices and background noise. In this work, we are able to computationally produce videos in which speech of specific people is enhanced while all other sounds are suppressed. Our method works on ordinary videos with a single audio track, and all that is required from the user is to select the face of the person in the video they want to hear, or to have such a person be selected algorithmically based on context. We believe this capability can have a wide range of applications, from speech enhancement and recognition in videos, through video conferencing, to improved hearing aids, especially in situations where there are multiple people speaking.

A unique aspect of our technique is in combining both the auditory and visual signals of an input video to separate the speech. Intuitively, movements of a person’s mouth, for example, should correlate with the sounds produced as that person is speaking, which in turn can help identify which parts of the audio correspond to that person. The visual signal not only improves the speech separation quality significantly in cases of mixed speech (compared to speech separation using audio alone, as we demonstrate in our paper), but, importantly, it also associates the separated, clean speech tracks with the visible speakers in the video.

The input to our method is a video with one or more people speaking, where the speech of interest is interfered by other speakers and/or background noise. The output is a decomposition of the input audio track into clean speech tracks, one for each person detected in the video.

An Audio-Visual Speech Separation Model

To generate training examples, we started by gathering a large collection of 100,000 high-quality videos of lectures and talks from YouTube. From these videos, we extracted segments with clean speech (e.g. no mixed music, audience sounds or other speakers) and with a single speaker visible in the video frames. This resulted in roughly 2000 hours of video clips, each of a single person visible to the camera and talking with no background interference. We then used this clean data to generate “synthetic cocktail parties” — mixtures of face videos and their corresponding speech from separate video sources, along with non-speech background noise we obtained from AudioSet.

Using this data, we were able to train a multi-stream convolutional neural network-based model to split the synthetic cocktail mixture into separate audio streams for each speaker in the video. The inputs to the network are visual features extracted from the face thumbnails of detected speakers in each frame, and a spectrogram representation of the video’s soundtrack. During training, the network learns (separate) encodings for the visual and auditory signals, then it fuses them together to form a joint audio-visual representation. With that joint representation, the network learns to output a time-frequency mask for each speaker. The output masks are multiplied by the noisy input spectrogram and converted back to a time-domain waveform to obtain an isolated, clean speech signal for each speaker. For full details, see our paper.

Our multi-stream, neural network-based model architecture.
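The separation step itself is compact: the network emits a time-frequency mask per visible speaker, the mixture spectrogram is multiplied by each mask, and the masked spectrograms are inverted back to waveforms. The sketch below walks through that masking and inversion with a placeholder mask network and dummy visual features; the real model's multi-stream architecture and training losses are not reproduced here.

```python
import torch
import torch.nn as nn

# Masking-and-inversion step with a placeholder mask network and dummy visual features.
n_fft, hop = 512, 128
window = torch.hann_window(n_fft)
mixture = torch.randn(16000)                                  # 1 s of audio standing in for the soundtrack
spec = torch.stft(mixture, n_fft, hop_length=hop, window=window, return_complex=True)  # (freq, time)

n_speakers = 2
visual_features = torch.rand(n_speakers, 64, spec.shape[1])   # per-speaker face embeddings per frame (dummy)

# Placeholder for the fused audio-visual network: takes the magnitude spectrogram plus
# one speaker's visual features and outputs a [0, 1] mask of the same freq x time shape.
mask_net = nn.Sequential(nn.Linear(spec.shape[0] + 64, 256), nn.ReLU(),
                         nn.Linear(256, spec.shape[0]), nn.Sigmoid())

magnitude = spec.abs().T                                      # (time, freq)
separated = []
for s in range(n_speakers):
    net_in = torch.cat([magnitude, visual_features[s].T], dim=1)   # (time, freq + 64)
    mask = mask_net(net_in).T                                 # back to (freq, time)
    masked_spec = spec * mask                                 # complex spectrogram * real mask
    separated.append(torch.istft(masked_spec, n_fft, hop_length=hop,
                                 window=window, length=len(mixture)))

print("isolated track shapes:", [tuple(track.shape) for track in separated])
```

Picking which track to keep then amounts to choosing which speaker's mask to apply, which corresponds to the "select the face of the person you want to hear" interaction described above.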

Here are some more speech separation and enhancement results by our method, playing first the input video with mixed or noisy speech, then our results. Sound by others than the selected speakers can be entirely suppressed or suppressed to the desired level.

Application to Speech Recognition

Our method can also potentially be used as a pre-process for speech recognition and automatic video captioning. Handling overlapping speakers is a known challenge for automatic captioning systems, and separating the audio into the different sources could help in presenting more accurate and easy-to-read captions.

You can similarly see and compare the captions before and after speech separation in all the other videos in this post and on our website, by turning on closed captions in the YouTube player when playing the videos (“cc” button at the lower right corner of the player). On our project web page you can find more results, as well as comparisons with state-of-the-art audio-only speech separation and with other recent audio-visual speech separation work.

Indeed, with recent advances in deep learning, there is a clear growing interest in the academic community in audio-visual analysis. For example, independently and concurrently to our work, this work from UC Berkeley explored a self-supervised approach for separating speech of on/off-screen speakers, and this work from MIT addressed the problem of separating the sound of multiple on-screen objects (e.g., musical instruments), while locating the image regions from which the sound originates.

We envision a wide range of applications for this technology. We are currently exploring opportunities for incorporating it into various Google products. Stay tuned!

Source: Research Blog: Looking to Listen: Audio-Visual Speech Separation

Watch artificial intelligence create a 3D model of a person—from just a few seconds of video

Transporting yourself into a video game, body and all, just got easier. Artificial intelligence has been used to create 3D models of people’s bodies for virtual reality avatars, surveillance, visualizing fashion, or movies. But it typically requires special camera equipment to detect depth or to view someone from multiple angles. A new algorithm creates 3D models using standard video footage from one angle.

The system has three stages. First, it analyzes a video a few seconds long of someone moving—preferably turning 360° to show all sides—and for each frame creates a silhouette separating the person from the background. Based on machine learning techniques—in which computers learn a task from many examples—it roughly estimates the 3D body shape and location of joints. In the second stage, it “unposes” the virtual human created from each frame, making them all stand with arms out in a T shape, and combines information about the T-posed people into one, more accurate model. Finally, in the third stage, it applies color and texture to the model based on recorded hair, clothing, and skin.

The researchers tested the method with a variety of body shapes, clothing, and backgrounds and found that it had an average accuracy within 5 millimeters, they will report in June at the Computer Vision and Pattern Recognition conference in Salt Lake City. The system can also reproduce the folding and wrinkles of fabric, but it struggles with skirts and long hair. With a model of you, the researchers can change your weight, clothing, and pose—and even make you perform a perfect pirouette. No practice necessary.

Source: Watch artificial intelligence create a 3D model of a person—from just a few seconds of video | Science | AAAS

This AI Can Automatically Animate New Flintstones Cartoons

Researchers have successfully trained artificial intelligence to generate new clips of the prehistoric animated series based on nothing but random text descriptions of what’s happening in a scene.

A team of researchers from the Allen Institute for Artificial Intelligence and the University of Illinois Urbana-Champaign trained an AI by feeding it over 25,000 three-second clips of the cartoon, which hasn’t seen any new episodes in over 50 years. Most AI experiments of late have involved generating freaky images based on what was learned, but this time the researchers included detailed descriptions and annotations of what appeared, and what was happening, in every clip the AI ingested.

As a result, the new Flintstones animations generated by the Allen Institute’s AI aren’t just random collages of chopped up cartoons. Instead, the researchers are able to feed the AI a very specific description of a scene, and it outputs a short clip featuring the characters, props, and locations specified—most of the time.

The quality of the animations that are generated is awful at best; no one’s going to be fooled into thinking these are the Hanna-Barbera originals. But seeing an AI generate a cartoon, featuring iconic characters, all by itself, is a fascinating sneak peek at how some films and TV shows might be made one day.

Source: This AI Can Automatically Animate New Flintstones Cartoons