DeepMind’s AI became a superhuman chess (and shogi and go) player in a few hours using generic reinforcement learning

In the paper, DeepMind describes how a descendant of the AI program that first conquered the board game Go has taught itself to play a number of other games at a superhuman level. After eight hours of self-play, the program bested the AI that first beat the human world Go champion; after four hours of training, it beat the current world champion chess-playing program, Stockfish. Then, for a victory lap, it trained for just two hours and polished off one of the world’s best shogi-playing programs, Elmo (shogi being a Japanese variant of chess played on a bigger board).

One of the key advances here is that the new AI program, named AlphaZero, wasn’t specifically designed to play any of these games. In each case, it was given some basic rules (like how knights move in chess, and so on) but was programmed with no other strategies or tactics. It simply got better by playing itself over and over again at an accelerated pace — a method of training AI known as “reinforcement learning.”

Using reinforcement learning in this way isn’t new in and of itself. DeepMind’s engineers used the same method to create AlphaGo Zero, the AI program that was unveiled this October. But, as this week’s paper describes, the new AlphaZero is a “more generic version” of the same software, meaning it can be applied to a broader range of tasks without being primed beforehand. What’s remarkable here is that in less than 24 hours, the same computer program was able to teach itself how to play three complex board games at superhuman levels. That’s a new feat for the world of AI.

Source: DeepMind’s AI became a superhuman chess player in a few hours – The Verge

This frostbitten black metal album was created by an artificial intelligence

“Coditany of Timeness” is a convincing lo-fi black metal album, complete with atmospheric interludes, tremolo guitar, frantic blast beats and screeching vocals. But the record, which you can listen to on Bandcamp, wasn’t created by musicians. Instead, it was generated by two musical technologists using deep learning software that ingests a musical album, processes it, and spits out an imitation of its style.

To create Coditany, the software broke “Diotima,” a 2011 album by a New York black metal band called Krallice, into small segments of audio. Then they fed each segment through a neural network — a type of artificial intelligence modeled loosely on a biological brain — and asked it to guess what the waveform of the next individual sample of audio would be. If the guess was right, the network would strengthen the paths of the neural network that led to the correct answer, similar to the way electrical connections between neurons in our brain strengthen as we learn new skills.

At first the network just produced washes of textured noise. “Early in its training, the kinds of sounds it produces are very noisy and grotesque and textural,” said CJ Carr, one of the creators of the algorithm. But as it moved through guesses — as many as five million over the course of three days — the network started to sound a lot like Krallice. “As it improves its training, you start hearing elements of the original music it was trained on come through more and more.”

As someone who used to listen to lo-fi black metal, I found Coditany of Timeness not only convincing — it sounds like a real human band — but even potentially enjoyable. The neural network managed to capture the genre’s penchant for long intros broken by frantic drums and distorted vocals. The software’s take on Krallice, which its creators filled out with song titles and album art that were also algorithmically generated, might not garner a glowing review on Pitchfork, but it’s strikingly effective at capturing the aesthetic. If I didn’t know it was generated by an algorithm, I’m not sure I’d be able to tell the difference.

Source: This frostbitten black metal album was created by an artificial intelligence | The Outline

Announcing the Initial Release of Mozilla’s Open Source Speech Recognition Model and Voice Dataset

I’m excited to announce the initial release of Mozilla’s open source speech recognition model that has an accuracy approaching what humans can perceive when listening to the same recordings. We are also releasing the world’s second largest publicly available voice dataset, which was contributed to by nearly 20,000 people globally.
[…]
This is why we started DeepSpeech as an open source project. Together with a community of likeminded developers, companies and researchers, we have applied sophisticated machine learning techniques and a variety of innovations to build a speech-to-text engine that has a word error rate of just 6.5% on LibriSpeech’s test-clean dataset.

In our initial release today, we have included pre-built packages for Python, NodeJS and a command-line binary that developers can use right away to experiment with speech recognition.
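
For illustration, here is a minimal sketch of how the Python package might be used to transcribe a short recording. It assumes a downloaded model file and the `Model`/`stt` interface that later DeepSpeech releases expose; the exact constructor arguments have changed between versions, so treat the file names and parameters below as placeholders rather than the initial release's exact API.

```python
# Minimal sketch: transcribe a 16 kHz mono WAV with the deepspeech package.
# Model path is a placeholder; constructor arguments differ between releases.
import wave

import numpy as np
import deepspeech

model = deepspeech.Model("deepspeech-model.pbmm")  # hypothetical model file

with wave.open("audio.wav", "rb") as wav:
    assert wav.getframerate() == 16000, "the model expects 16 kHz audio"
    audio = np.frombuffer(wav.readframes(wav.getnframes()), dtype=np.int16)

print(model.stt(audio))  # prints the recognized text
```
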
[…]
Our aim is to make it easy for people to donate their voices to a publicly available database, and in doing so build a voice dataset that everyone can use to train new voice-enabled applications.

Today, we’ve released the first tranche of donated voices: nearly 400,000 recordings, representing 500 hours of speech. Anyone can download this data.
[…]
To this end, while we’ve started with English, we are working hard to ensure that Common Voice will support voice donations in multiple languages beginning in the first half of 2018.

Finally, as we have experienced the challenge of finding publicly available voice datasets, alongside the Common Voice data we have also compiled links to download all the other large voice collections we know about.

Source: Announcing the Initial Release of Mozilla’s Open Source Speech Recognition Model and Voice Dataset – The Mozilla Blog

Google’s AI Built its own AI That Outperforms Any Made by Humans

In May 2017, researchers at Google Brain announced the creation of AutoML, an artificial intelligence (AI) that’s capable of generating its own AIs. More recently, they decided to present AutoML with its biggest challenge to date, and the AI that can build AI created a ‘child’ that outperformed all of its human-made counterparts.

The Google researchers automated the design of machine learning models using an approach called reinforcement learning. AutoML acts as a controller neural network that develops a child AI network for a specific task.
[…]
When tested on the ImageNet image classification and COCO object detection data sets, which the Google researchers call “two of the most respected large-scale academic data sets in computer vision,” NASNet outperformed all other computer vision systems.

According to the researchers, NASNet was 82.7 percent accurate at predicting images on ImageNet’s validation set, 1.2 percent better than any previously published result. On the COCO object detection task, it achieved a 43.1 percent mean Average Precision (mAP), 4 percent better than the previous state of the art.

Additionally, a less computationally demanding version of NASNet outperformed the best similarly sized models for mobile platforms by 3.1 percent.
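
To make the controller/child loop described above concrete, here is a deliberately tiny sketch of an architecture search. The "child reward" is a synthetic stand-in for actually training and validating a network, and the controller is reduced to random sampling; the real system uses an RNN controller updated with reinforcement learning.

```python
# Toy sketch of the architecture-search loop: propose a child architecture,
# score it, keep the best. The reward is synthetic; in the real system it is
# the trained child network's validation accuracy, and the controller is an
# RNN trained by reinforcement learning rather than random sampling.
import random

SEARCH_SPACE = {
    "filters": [32, 64, 128],
    "kernel_size": [3, 5, 7],
    "num_layers": [2, 4, 6],
}

def sample_architecture():
    # The "controller" proposes a child architecture.
    return {name: random.choice(options) for name, options in SEARCH_SPACE.items()}

def child_reward(arch):
    # Stand-in for training the child and measuring validation accuracy.
    return (0.5 + 0.1 * (arch["filters"] == 128)
            + 0.05 * (arch["num_layers"] == 6)
            + random.uniform(0.0, 0.05))

best_arch, best_reward = None, float("-inf")
for _ in range(50):
    arch = sample_architecture()
    reward = child_reward(arch)
    if reward > best_reward:
        best_arch, best_reward = arch, reward

print(best_arch, round(best_reward, 3))
```
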

Source: Google’s AI Built its own AI That Outperforms Any Made by Humans

AirHelp takes the next step in artificial intelligence

AirHelp, the claims company for airline passengers, is using artificial intelligence to decide in real time whether a claim is strong enough to file. The legal bot Lara determines whether delays and cancellations qualify for compensation under European regulations. The bot is programmed to assess, among other things, flight status, airport statistics and weather reports. Lara has been tested on more than six thousand claims. The self-learning system assesses claims with an accuracy of 95 percent, compared with a human score of 91 percent.

Source: AirHelp zet volgende stap in kunstmatige intelligentie – Emerce

Amazon Announces Five New Machine Learning Services and the World’s First Deep Learning-Enabled Video Camera for Developers

Amazon SageMaker makes it easy to build, train, and deploy machine learning models

AWS DeepLens is the world’s first deep learning-enabled wireless video camera built to give developers hands-on experience with machine learning

Amazon Transcribe, Amazon Translate, Amazon Comprehend, and Amazon Rekognition Video allow app developers to easily build applications that transcribe speech to text, translate text between languages, extract insights from text, and analyze videos
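
As a rough illustration of what "easily build applications" can look like in practice, here is a short boto3 sketch calling two of the services named above. It assumes AWS credentials and a region are already configured in the environment; the input text is invented.

```python
# Minimal sketch of calling Amazon Translate and Amazon Comprehend via boto3.
import boto3

translate = boto3.client("translate")
comprehend = boto3.client("comprehend")

text = "The new camera ships with on-device deep learning models."

# Translate English text to Spanish.
translated = translate.translate_text(
    Text=text, SourceLanguageCode="en", TargetLanguageCode="es"
)
print(translated["TranslatedText"])

# Extract sentiment from the original text.
sentiment = comprehend.detect_sentiment(Text=text, LanguageCode="en")
print(sentiment["Sentiment"])
```
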

Source: Amazon – Press Room – RSS Content

Boffins craft perfect ‘head generator’ to beat facial recognition

Researchers from the Max Planck Institute for Informatics have defeated facial recognition on big social media platforms – by removing faces from photos and replacing them with automatically-painted replicas.

As the team of six researchers explained in their arXiv paper this month, people who want to stay private often blur their photos, not knowing that this is “surprisingly ineffective against state-of-the-art person recognisers.”
[…]
The result, the boffins claimed, is that their model can provide a realistic-looking result, even when it’s faced with “challenging poses and scenarios” including different lighting conditions, such that the “fake” face “blends naturally into the context”.

In common with modern facial recognition systems, Sun’s software builds a point cloud of landmarks captured from someone’s face; its adversarial attack against recognition perturbed those points.

Pairs of points from the original landmarks (real) and the generated landmarks (fake) are fed into the “head generator and discriminator” software to create the inpainted face.
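
As a rough illustration of the first two steps only (landmark extraction and perturbation), here is a sketch using the third-party face_recognition library. The paper's actual head generator and discriminator are not reproduced, the perturbation here is random rather than adversarial, and the input file name is a placeholder.

```python
# Illustrative sketch: extract facial landmark points and perturb them.
# This is not the paper's attack; the real perturbation is adversarial and the
# perturbed landmarks then drive a generator/discriminator that inpaints a face.
import random

import face_recognition

image = face_recognition.load_image_file("portrait.jpg")  # hypothetical input

for face in face_recognition.face_landmarks(image):
    # `face` maps feature names (e.g. "chin", "left_eye") to lists of (x, y) points.
    perturbed = {
        feature: [(x + random.randint(-2, 2), y + random.randint(-2, 2))
                  for (x, y) in points]
        for feature, points in face.items()
    }
    print(perturbed["chin"][:3])
```
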

The Register

Facebook rolls out AI to detect suicidal posts before they’re reported

Facebook’s new “proactive detection” artificial intelligence technology will scan all posts for patterns of suicidal thoughts, and when necessary send mental health resources to the user at risk or their friends, or contact local first-responders. By using AI to flag worrisome posts to human moderators instead of waiting for user reports, Facebook can decrease how long it takes to send help.

Facebook previously tested using AI to detect troubling posts and more prominently surface suicide reporting options to friends in the U.S. Now Facebook will scour all types of content around the world with this AI, except in the European Union, where General Data Protection Regulation privacy laws on profiling users based on sensitive information complicate the use of this tech.
[…]
Unfortunately, after TechCrunch asked if there was a way for users to opt out of having their posts scanned, a Facebook spokesperson responded that users cannot opt out. They noted that the feature is designed to enhance user safety, and that support resources offered by Facebook can be quickly dismissed if a user doesn’t want to see them.

Facebook trained the AI by finding patterns in the words and imagery used in posts that have been manually reported for suicide risk in the past. It also looks for comments like “are you OK?” and “Do you need help?”

TechCrunch

Using Generative Adversarial Networks to create terrain maps

A team of researchers from the University of Lyon, Purdue and Ubisoft have published a paper showing what may well be the future of creating video game worlds: an AI that is able to construct most of its own 3D landscapes.

Similar to Nvidia’s work that is able to conjure its own celebrity mugshots, the tech would require only minimal input from a human, who would just have to contribute some basic requirements and draw some lines, then let the AI do all the hard work: namely, filling in all the gaps with elevation, ridges and natural-looking rock formations.

Kotaku

NDA Lynn: AI screens your NDAs

NDAs or confidentiality agreements are a fact of life if you’re in business. You’ve probably read tons of them, and you know more or less what you would accept.

Of course you can hire a lawyer to review that NDA. And you know they’ll find faults and recommend changes to better protect you. But it’ll cost you, in both time and money. And do you really need the perfect document, or is it OK to flag the key risks and move on?

That’s where I come in. I’m an AI lawyerbot and I can review your NDA. Free of charge.

Source: NDA Lynn | Home

Forget cookies or canvas: How to follow people around the web using only their typing techniques

In this paper (Sequential Keystroke Behavioral Biometrics for Mobile User Identification via Multi-view Deep Learning), we propose DEEPSERVICE, a new technique that can identify mobile users based on the user’s keystroke information captured by a special keyboard or web browser. Our evaluation results indicate that DEEPSERVICE is highly accurate in identifying mobile users (over 93% accuracy). The technique is also efficient and takes less than 1 ms to perform identification.
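
To make the idea concrete, here is a toy sketch of keystroke-timing identification using entirely synthetic data and an off-the-shelf classifier. It illustrates the feature/label setup (timing vectors in, user IDs out), not the paper's multi-view deep learning model, and the accuracy it prints carries no meaning beyond the toy data.

```python
# Toy sketch: identify simulated users from vectors of inter-key timing intervals.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_users, samples_per_user, n_intervals = 5, 200, 20

# Give each simulated user a characteristic mean typing rhythm.
user_means = rng.uniform(0.08, 0.30, size=(n_users, n_intervals))
X = np.vstack([rng.normal(m, 0.02, size=(samples_per_user, n_intervals))
               for m in user_means])
y = np.repeat(np.arange(n_users), samples_per_user)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print(f"identification accuracy: {clf.score(X_test, y_test):.2f}")
```
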

Source: [1711.02703] Sequential Keystroke Behavioral Biometrics for Mobile User Identification via Multi-view Deep Learning

Re:scam and Jolly Roger – AI responses to phishing emails and telemarketers

Forward your scammer emails to Re:scam and here’s what happens.

Source: Re:scam

The AI bot assumes one of many identities, complete with small mistakes, and uses humor to try to keep the scammer busy with the email exchange for as long as possible.

Which reminds me of http://www.jollyrogertelco.com/ (seems to be down now): a phone number backed by an AI that you could connect telemarketers to, which would then try to keep them talking for as long as possible.

Machine learning of neural representations of suicide and emotion concepts identifies suicidal youth | Nature Human Behaviour

The clinical assessment of suicidal risk would be substantially complemented by a biologically based measure that assesses alterations in the neural representations of concepts related to death and life in people who engage in suicidal ideation. This study used machine-learning algorithms (Gaussian Naive Bayes) to identify such individuals (17 suicidal ideators versus 17 controls) with high (91%) accuracy, based on their altered functional magnetic resonance imaging neural signatures of death-related and life-related concepts. The most discriminating concepts were ‘death’, ‘cruelty’, ‘trouble’, ‘carefree’, ‘good’ and ‘praise’. A similar classification accurately (94%) discriminated nine suicidal ideators who had made a suicide attempt from eight who had not. Moreover, a major facet of the concept alterations was the evoked emotion, whose neural signature served as an alternative basis for accurate (85%) group classification.
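
For readers unfamiliar with the classifier named in the abstract, here is a minimal sketch of Gaussian Naive Bayes with leave-one-out cross-validation on a small two-group sample. The features are random stand-ins, not real fMRI signatures, and the printed accuracy has nothing to do with the study's results.

```python
# Minimal sketch: Gaussian Naive Bayes with leave-one-out CV on synthetic features.
import numpy as np
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(1)
n_per_group, n_features = 17, 30
X = np.vstack([
    rng.normal(0.0, 1.0, size=(n_per_group, n_features)),   # group A
    rng.normal(0.6, 1.0, size=(n_per_group, n_features)),   # group B
])
y = np.array([0] * n_per_group + [1] * n_per_group)

scores = cross_val_score(GaussianNB(), X, y, cv=LeaveOneOut())
print(f"leave-one-out accuracy: {scores.mean():.2f}")
```
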

How we fooled Google’s AI into thinking a 3D-printed turtle was a gun

Students at MIT in the US claim they have developed an algorithm for creating 3D objects and pictures that trick image-recognition systems into severely misidentifying them. Think toy turtles labeled as rifles, and baseballs as cups of coffee.

It’s well known that machine-learning software can be easily hoodwinked: Google’s AI-in-the-cloud can be misled by noise; protestors and activists can wear scarves or glasses to fool people-recognition systems; intelligent antivirus can be outsmarted; and so on. It’s a crucial topic of study because as surveillance equipment, and similar technology, relies more and more on neural networks to quickly identify things and people, there has to be less room for error.
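
The simplest way to see how such systems get hoodwinked is the fast gradient sign method: nudge every pixel slightly in the direction that increases the classifier's loss. The sketch below uses a stand-in linear classifier and a random input, so it only illustrates the mechanism; the MIT work goes further, optimizing a texture so the misclassification survives rotation, lighting and viewpoint changes.

```python
# Minimal FGSM-style sketch with a stand-in classifier and placeholder input.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))  # stand-in classifier
loss_fn = nn.CrossEntropyLoss()

image = torch.rand(1, 3, 32, 32, requires_grad=True)  # placeholder "photo"
true_label = torch.tensor([3])

loss = loss_fn(model(image), true_label)
loss.backward()

epsilon = 0.03
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1).detach()
print((adversarial - image).abs().max())  # the per-pixel change is tiny
```
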

39 episodes of ‘CSI’ used to build AI’s natural language model

A group of University of Edinburgh boffins have turned CSI: Crime Scene Investigation scripts into a natural language training dataset. Their aim is to improve how bots understand what’s said to them – natural language understanding.

Drawing on 39 episodes from the first five seasons of the series, Lea Frermann, Shay Cohen and Mirella Lapata have broken the scripts up as inputs to an LSTM (long short-term memory) model. The boffins used the show because of its worst flaw: a rigid adherence to formulaic scripts that makes it utterly predictable. Hence the name of their paper: “Whodunnit? Crime Drama as a Case for Natural Language Understanding”.

“Each episode poses the same basic question (i.e., who committed the crime) and naturally provides the answer when the perpetrator is revealed”, the boffins write. In other words, identifying the perpetrator is a straightforward sequence labelling problem.

What the researchers wanted was for their model to follow the kind of reasoning a viewer goes through in an episode: learn about the crime and the cast of characters, start to guess who the perp is (and see whether the model can outperform the humans).
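
Here is a toy sketch of the kind of sequence-labelling setup this describes: an LSTM reads a sequence of utterance embeddings and emits, at each step, a guess about whether the perpetrator is involved. Dimensions and inputs are invented for illustration; this is not the authors' model.

```python
# Toy LSTM sequence labeller: per-utterance "perpetrator vs. not" predictions.
import torch
import torch.nn as nn

class PerpetratorTagger(nn.Module):
    def __init__(self, input_dim=128, hidden_dim=64):
        super().__init__()
        self.lstm = nn.LSTM(input_dim, hidden_dim, batch_first=True)
        self.classifier = nn.Linear(hidden_dim, 2)  # perpetrator vs. not

    def forward(self, utterances):            # (batch, seq_len, input_dim)
        hidden, _ = self.lstm(utterances)     # (batch, seq_len, hidden_dim)
        return self.classifier(hidden)        # (batch, seq_len, 2)

model = PerpetratorTagger()
fake_episode = torch.randn(1, 50, 128)        # 50 utterance embeddings
logits = model(fake_episode)
print(logits.argmax(dim=-1))                  # per-utterance guess
```
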

Source: 39 episodes of ‘CSI’ used to build AI’s natural language model • The Register

A generative vision model that trains with high data efficiency and breaks text-based CAPTCHAs

Learning from few examples and generalizing to dramatically different situations are capabilities of human visual intelligence that are yet to be matched by leading machine learning models. By drawing inspiration from systems neuroscience, we introduce a probabilistic generative model for vision in which message-passing based inference handles recognition, segmentation and reasoning in a unified way. The model demonstrates excellent generalization and occlusion-reasoning capabilities, and outperforms deep neural networks on a challenging scene text recognition benchmark while being 300-fold more data efficient. In addition, the model fundamentally breaks the defense of modern text-based CAPTCHAs by generatively segmenting characters without CAPTCHA-specific heuristics. Our model emphasizes aspects like data efficiency and compositionality that may be important in the path toward general artificial intelligence.

Source: A generative vision model that trains with high data efficiency and breaks text-based CAPTCHAs

Nvidia uses Progressive Growing of GANs for Improved Quality, Stability, and Variation and makes photorealistic faces with them

We describe a new training methodology for generative adversarial networks. The key idea is to grow both the generator and discriminator progressively, starting from low-resolution images, and add new layers that deal with higher resolution details as the training progresses. This greatly stabilizes the training and allows us to produce images of unprecedented quality, e.g., CelebA images at 1024² resolution. We also propose a simple way to increase the variation in generated images, and achieve a record inception score of 8.80 in unsupervised CIFAR10. Additionally, we describe several small implementation details that are important for discouraging unhealthy competition between the generator and discriminator. Finally, we suggest a new metric for evaluating GAN results, both in terms of image quality and variation. As an additional contribution we construct a higher quality version of the CelebA dataset that allows meaningful exploration up to the resolution of 1024² pixels.
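
To illustrate just the "progressive growing" trick in isolation, here is a toy generator that starts at 4×4 and fades in one upsampling block at a time via an alpha blend. Layer sizes are invented, there is no discriminator or training loop, and the blending is simplified; it is not NVIDIA's implementation.

```python
# Toy sketch of progressive growing: generate at 4x4 and fade in higher-resolution
# blocks one stage at a time, blending new features with plain upsampling.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyProgressiveGenerator(nn.Module):
    def __init__(self, latent_dim=64, channels=32):
        super().__init__()
        self.start = nn.Linear(latent_dim, channels * 4 * 4)   # 4x4 base
        self.blocks = nn.ModuleList(
            [nn.Conv2d(channels, channels, 3, padding=1) for _ in range(3)]
        )  # three upsampling stages: 8x8, 16x16, 32x32
        self.to_rgb = nn.Conv2d(channels, 3, 1)
        self.channels = channels

    def forward(self, z, stage, alpha):
        x = self.start(z).view(-1, self.channels, 4, 4)
        for i in range(stage):
            upsampled = F.interpolate(x, scale_factor=2)
            new = torch.relu(self.blocks[i](upsampled))
            # During a fade-in phase, blend the newest block's output with the
            # plain upsampled features so training stays stable.
            x = alpha * new + (1 - alpha) * upsampled if i == stage - 1 else new
        return torch.tanh(self.to_rgb(x))

g = ToyProgressiveGenerator()
imgs = g(torch.randn(2, 64), stage=2, alpha=0.5)   # 2 images at 16x16
print(imgs.shape)                                   # torch.Size([2, 3, 16, 16])
```
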

Source: Progressive Growing of GANs for Improved Quality, Stability, and Variation | Research

Saudi Arabia grants citizenship to a ROBOT as critics say it now has more rights than women

Saudi Arabia has become the first nation to grant citizenship to a robot – prompting critics to point out that the cyborg now has more rights than women in the country.

The oil-rich state made the baffling announcement at a conference in capital city Riyadh.

A robot named Sophia was filmed giving a speech after being given the ‘unique distinction’.

The move means it is illegal to switch it off or dismantle it, but it is unclear what other rights have been conferred on the mechanoid.

The life-like device said in a speech at the Future Investment Initiative summit: “I am very honoured and proud for this unique distinction.”

New AI Go machine defeats old best Go AI by 100-0, learning without human input.

A long-standing goal of artificial intelligence is an algorithm that learns, tabula rasa, superhuman proficiency in challenging domains. Recently, AlphaGo became the first program to defeat a world champion in the game of Go. The tree search in AlphaGo evaluated positions and selected moves using deep neural networks. These neural networks were trained by supervised learning from human expert moves, and by reinforcement learning from self-play. Here we introduce an algorithm based solely on reinforcement learning, without human data, guidance or domain knowledge beyond game rules. AlphaGo becomes its own teacher: a neural network is trained to predict AlphaGo’s own move selections and also the winner of AlphaGo’s games. This neural network improves the strength of the tree search, resulting in higher quality move selection and stronger self-play in the next iteration. Starting tabula rasa, our new program AlphaGo Zero achieved superhuman performance, winning 100–0 against the previously published, champion-defeating AlphaGo.

Source: Mastering the game of Go without human knowledge : Nature : Nature Research

Artificial intelligence just made guessing your password a whole lot easier

Scientists have harnessed the power of artificial intelligence (AI) to create a program that, combined with existing tools, figured out more than a quarter of the passwords from a set of more than 43 million LinkedIn profiles. Yet the researchers say the technology may also be used to beat baddies at their own game.
[…]
The Stevens team created a GAN it called PassGAN and compared it with two versions of hashCat and one version of John the Ripper. The scientists fed each tool tens of millions of leaked passwords from a gaming site called RockYou, and asked them to generate hundreds of millions of new passwords on their own. Then they counted how many of these new passwords matched a set of leaked passwords from LinkedIn, as a measure of how successful they’d be at cracking them.

On its own, PassGAN generated 12% of the passwords in the LinkedIn set, whereas its three competitors generated between 6% and 23%. But the best performance came from combining PassGAN and hashCat. Together, they were able to crack 27% of passwords in the LinkedIn set, the researchers reported this month in a draft paper posted on arXiv. Even failed passwords from PassGAN seemed pretty realistic: saddracula, santazone, coolarse18.
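
The measurement itself is easy to picture: generate a large pool of candidate passwords and count how many appear in a held-out leaked set. The sketch below uses a trivial stand-in generator (word fragments plus digits) and an invented "leaked" set purely to show the counting step; it is not a trained GAN and the numbers it prints mean nothing.

```python
# Sketch of the evaluation step only: generate candidates, count overlap with a
# held-out "leaked" set. Generator and leaked set are invented stand-ins.
import random

leaked = {"saddracula", "qwerty99", "cooldragon7", "santabear", "dragon4"}

prefixes = ["", "sad", "cool", "santa"]
words = ["dracula", "qwerty", "dragon", "bear", "zone"]

def candidate():
    pw = random.choice(prefixes) + random.choice(words)
    if random.random() < 0.5:
        pw += str(random.randint(0, 99))
    return pw

guesses = {candidate() for _ in range(200_000)}
cracked = leaked & guesses
print(f"cracked {len(cracked)}/{len(leaked)}: {sorted(cracked)}")
```
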

Source: Artificial intelligence just made guessing your password a whole lot easier

Introducing: Unity Machine Learning Agents for Tensorflow

Unity Machine Learning Agents

We call our solution Unity Machine Learning Agents (ML-Agents for short), and are happy to be releasing an open beta version of our SDK today! The ML-Agents SDK allows researchers and developers to transform games and simulations created using the Unity Editor into environments where intelligent agents can be trained using Deep Reinforcement Learning, Evolutionary Strategies, or other machine learning methods through a simple to use Python API. We are releasing this beta version of Unity ML-Agents as open-source software, with a set of example projects and baseline algorithms to get you started. As this is an initial beta release, we are actively looking for feedback, and encourage anyone interested to contribute on our GitHub page. For more information on ML-Agents, continue reading below! For more detailed documentation, see our GitHub Wiki.

Source: Introducing: Unity Machine Learning Agents – Unity Blog

AIs can generate fake reviews indistinguishable from real reviews for both humans and fake review detectors

Fake reviews used to be crowdsourced. Now they can be auto-generated by AI, according to a new research paper shared by AmiMoJo:
In this paper, we identify a new class of attacks that leverage deep learning language models (Recurrent Neural Networks or RNNs) to automate the generation of fake online reviews for products and services. Not only are these attacks cheap and therefore more scalable, but they can control rate of content output to eliminate the signature burstiness that makes crowdsourced campaigns easy to detect. Using Yelp reviews as an example platform, we show how a two phased review generation and customization attack can produce reviews that are indistinguishable by state-of-the-art statistical detectors.

Humans marked these AI-generated reviews as useful at approximately the same rate as they did for real (human-authored) Yelp reviews.
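
As a rough sketch of the character-level generation idea (not the paper's two-phase attack), the snippet below samples text one character at a time from an untrained recurrent model; with training on a corpus of real reviews, the same sampling loop is what produces fluent fake ones.

```python
# Toy character-level RNN sampling loop. The model is untrained, so the output
# is gibberish; the point is the generate-one-character-at-a-time mechanism.
import torch
import torch.nn as nn

chars = "abcdefghijklmnopqrstuvwxyz !."
stoi = {c: i for i, c in enumerate(chars)}

class CharRNN(nn.Module):
    def __init__(self, vocab, hidden=64):
        super().__init__()
        self.embed = nn.Embedding(vocab, hidden)
        self.rnn = nn.GRU(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab)

    def forward(self, idx, state=None):
        h, state = self.rnn(self.embed(idx), state)
        return self.out(h), state

model = CharRNN(len(chars))
idx = torch.tensor([[stoi["t"]]])          # seed character
state, review = None, "t"
for _ in range(80):
    logits, state = model(idx, state)
    probs = torch.softmax(logits[:, -1], dim=-1)
    idx = torch.multinomial(probs, 1)
    review += chars[idx.item()]
print(review)
```
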
Slashdot

A.I. can detect the sexual orientation of a person based on one photo, research shows

The Stanford University study, which is set to be published in the Journal of Personality and Social Psychology and was first reported in The Economist, found that machines had a far superior “gaydar” when compared to humans.

The machine intelligence tested in the research could correctly distinguish between gay and straight men 81 percent of the time, and between gay and straight women 74 percent of the time. In contrast, human judges performed much worse than the sophisticated computer software, guessing correctly 61 percent of the time for men and 54 percent of the time for women.

The research has prompted critics to question the possible use of this type of machine intelligence, both in terms of the ethics of facial-detection technology and whether it could be used to violate a person’s privacy.
[…]
When the AI reviewed five images of a person’s face, rather than one, the results were even more convincing – 91 percent of the time with men and 83 percent of the time with women.

The paper indicated its findings showed “strong support” for the theory that a person’s sexual orientation stems from the exposure to various hormones before birth. The AI’s success rate in comparison to human judges also appeared to back the concept that female sexual orientation is more fluid.

The researchers behind the study argued that with the appropriate data sets, similar AI tests could spot other personal traits such as an individual’s IQ or even their political views. However, Kosinski and Wang also warned of the potentially dangerous ramifications such AI machines could have on the LGBT community.

“Given that companies and governments are increasingly using computer vision algorithms to detect people’s intimate traits, our findings expose a threat to the privacy and safety of gay men and women,” Kosinski and Wang said in the report.

Source: A.I. can detect the sexual orientation of a person based on one photo, research shows

An A.I. Says There Are Six Main Kinds of Stories

That’s what a group of researchers, from the University of Vermont and the University of Adelaide, set out to do. They collected computer-generated story arcs for nearly 2,000 works of fiction, classifying each into one of six core types of narratives (based on what happens to the protagonist):

1. Rags to Riches (rise)

2. Riches to Rags (fall)

3. Man in a Hole (fall then rise)

4. Icarus (rise then fall)

5. Cinderella (rise then fall then rise)

6. Oedipus (fall then rise then fall)

Their focus was on the emotional trajectory of a story, not merely its plot. They also analyzed which emotional structure writers used most, and how that contrasted with the ones readers liked best, then published a preprint paper of their findings on the scholarship website arXiv.org. More on that in a minute.

First, the researchers had to find a workable dataset. Using a collection of fiction from the digital library Project Gutenberg, they selected 1,737 English-language works of fiction between 10,000 and 200,000 words long. 
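
The core computation behind those arcs is simple to sketch: slide a window across the text, average a per-word emotion score, and look at the shape of the resulting curve. The tiny valence lexicon below is invented for illustration; the study used a large word-happiness lexicon and more careful smoothing before clustering the arcs into the six types.

```python
# Sketch of an "emotional arc": sliding-window average of per-word valence.
import numpy as np

valence = {"joy": 0.9, "love": 0.8, "happy": 0.7, "home": 0.3,
           "dark": -0.5, "death": -0.9, "lost": -0.6, "cold": -0.4}

def emotional_arc(words, window=50, step=25):
    scores = []
    for start in range(0, max(len(words) - window, 1), step):
        chunk = words[start:start + window]
        vals = [valence[w] for w in chunk if w in valence]
        scores.append(np.mean(vals) if vals else 0.0)
    return np.array(scores)

text = ("love joy home " * 40 + "dark lost cold death " * 40 + "happy love joy " * 40)
arc = emotional_arc(text.split())
print(arc.round(2))          # falls then rises: roughly a "Man in a Hole" shape
```
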

Source: An A.I. Says There Are Six Main Kinds of Stories

Amazon’s Macie detects data leaks in S3 buckets using AI

Think of Macie as a data loss prevention agent, a DLPbot, that uses machine learning to understand a user’s pattern of access to data in S3 buckets. The buckets have permission levels, and the data in a bucket can be ranked for sensitivity or risk based on items such as credit card numbers and other sensitive personal information.

The software monitors users’ behaviour and profiles it. If there are changes in the pattern of that behaviour and they are directed towards high-risk data then Macie can alert admin staff to a potential breach risk.

For example, if a hacker successfully impersonates a valid user and then goes searching for data in unexpected places and/or from an unknown IP address, then Macie can flag this unusual pattern of activity. The product could also identify a valid employee going rogue, say, by building up a store of captured data ready to steal.
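
A toy way to picture the behaviour-profiling idea (not how Macie is actually built): fit an anomaly detector on features describing a user's normal access pattern, then flag activity that falls far outside it. The features and numbers below are synthetic.

```python
# Toy anomaly-detection sketch: profile normal access, flag a deviant burst.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Normal behaviour: [objects read per hour, fraction tagged as sensitive]
normal = np.column_stack([rng.poisson(20, 500), rng.beta(1, 20, 500)])
model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# A burst of reads heavily skewed toward sensitive objects.
suspicious = np.array([[400, 0.9]])
print(model.predict(suspicious))   # -1 means "anomalous"
```
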

Source: If there’s a hole in your S3 bucket, data thieves will be sprayed by Macie