The Linkielist

Linking ideas with the world

Humanitarian Data Exchange

The Humanitarian Data Exchange (HDX) is an open platform for sharing data across crises and organisations. Launched in July 2014, the goal of HDX is to make humanitarian data easy to find and use for analysis. Our growing collection of datasets has been accessed by users in over 200 countries and territories.

HDX is managed by OCHA’s Centre for Humanitarian Data, which is located in The Hague. OCHA is part of the United Nations Secretariat and is responsible for bringing together humanitarian actors to ensure a coherent response to emergencies. The HDX team includes OCHA staff and a number of consultants who are based in North America, Europe and Africa.

[…]

We define humanitarian data as:

  1. data about the context in which a humanitarian crisis is occurring (e.g., baseline/development data, damage assessments, geospatial data)
  2. data about the people affected by the crisis and their needs
  3. data about the response by organisations and people seeking to help those who need assistance.

HDX uses open-source software called CKAN for our technical back-end. You can find all of our code on GitHub.
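Because CKAN exposes a standard Action API, HDX datasets can be queried programmatically. A minimal sketch, assuming the public CKAN endpoint at data.humdata.org and the standard `package_search` action:

```python
import json
import urllib.request

# Query the HDX CKAN instance for datasets matching a search term.
# Assumes the standard CKAN Action API exposed at data.humdata.org.
url = "https://data.humdata.org/api/3/action/package_search?q=refugees&rows=5"

with urllib.request.urlopen(url) as response:
    result = json.load(response)

# CKAN wraps responses as {"success": ..., "result": {...}}
for dataset in result["result"]["results"]:
    print(dataset["title"])
```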

Source: Welcome – Humanitarian Data Exchange

How Facebook is Using Machine Learning to Map the World Population

When it comes to knowing where humans around the world actually live, resources come in varying degrees of accuracy and sophistication.

Heavily urbanized and mature economies generally produce a wealth of up-to-date information on population density and granular demographic data. In rural Africa or fast-growing regions in the developing world, tracking methods cannot always keep up, or in some cases may be non-existent.

This is where new maps, produced by researchers at Facebook, come in. Building upon CIESIN’s Gridded Population of the World project, Facebook is using machine learning models on high-resolution satellite imagery to paint a definitive picture of human settlement around the world. Let’s zoom in.

Connecting the Dots

With all other details stripped away, human settlement can form some interesting patterns. One of the most compelling examples is Egypt, where 95% of the population lives along the Nile River. Below, we can clearly see where people live, and where they don’t.

[Image: Facebook population density map of Egypt (a full-resolution version is available)]

While it is possible to use a tool like Google Earth to view nearly any location on the globe, the problem is analyzing the imagery at scale. This is where machine learning comes into play.

Finding the People in the Petabytes

High-resolution imagery of the entire globe takes up about 1.5 petabytes of storage, making the task of classifying the data extremely daunting. It’s only very recently that technology was up to the task of correctly identifying buildings within all those images.

To get the results we see today, researchers used a process of elimination to discard locations that couldn’t contain a building, then ranked the remaining locations by the likelihood that they contained one.

[Image: process-of-elimination map]

Facebook identified structures at scale using a process called weakly supervised learning. After training the model using large batches of photos, then checking over the results, Facebook was able to reach a 99.6% labeling accuracy for positive examples.
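The article doesn’t spell out the architecture, but the core step (scoring image tiles as building or no building from weak, noisy labels) can be sketched. A toy illustration under those assumptions, not Facebook’s actual model:

```python
import numpy as np
import tensorflow as tf

# Minimal binary "building / no building" tile classifier (illustration only,
# not Facebook's actual architecture). Tiles: 64x64 RGB; labels are weak/noisy.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(64, 64, 3)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Stand-in data: in practice these would be satellite tiles with weak labels
# derived from coarse sources such as census data or map footprints.
tiles = np.random.rand(256, 64, 64, 3).astype("float32")
weak_labels = np.random.randint(0, 2, size=(256, 1))

model.fit(tiles, weak_labels, epochs=2, batch_size=32)
```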

Why it Matters

An accurate picture of where people live can be a matter of life and death.

For humanitarian agencies working in Africa, effectively distributing aid or vaccinating populations is still a challenge due to the lack of reliable maps and population density information. Researchers hope that these detailed maps will be used to save lives and improve living conditions in developing regions.

For example, Malawi is one of the world’s least urbanized countries, so finding its 19 million citizens is no easy task for people doing humanitarian work there. These maps clearly show where people live and allow organizations to create accurate population density estimates for specific areas.

[Image: rural Malawi population pattern map]

Visit the project page for a full explanation and to access the full database of country maps.

Source: How Facebook is Using Machine Learning to Map the World Population

Meet the AI robots being used to help solve America’s recycling crisis

The way the robots work is simple. Guided by cameras and computer systems trained to recognize specific objects, the robots’ arms glide over moving conveyor belts until they reach their target. Oversized tongs or fingers with sensors that are attached to the arms snag cans, glass, plastic containers, and other recyclable items out of the rubbish and place them into nearby bins.

The robots — most of which have come online only within the past year — are assisting human workers and can work up to twice as fast. With continued improvements in the bots’ ability to spot and extract specific objects, they could become a formidable new force in the $6.6 billion U.S. industry.

Researchers like Lily Chin, a PhD student at the Distributed Robotics Lab at MIT, are working to develop sensors that improve the robots’ tactile capabilities, giving them a sense of touch fine enough to distinguish plastic, paper and metal through their fingers. “Right now, robots are mostly reliant on computer vision, but they can get confused and make mistakes,” says Chin. “So now we want to integrate these new tactile capabilities.”

Denver-based AMP Robotics is one of the companies on the leading edge of innovation in the field. It has developed software — the AMP Neuron platform, which uses computer vision and machine learning — so robots can recognize different colors, textures, shapes, sizes and patterns, identifying material characteristics so they can sort waste.

The robots are being installed at the Single Stream Recyclers plant in Sarasota, Florida, where they will be able to pick 70 to 80 items a minute, twice as fast as humanly possible and with greater accuracy.

[Image: Bulk Handling Systems’ Max-AI AQC-C sorting robot (via CNBC)]

“Using this technology you can increase the quality of the material and in some cases double or triple its resale value,” says AMP Robotics CEO Matanya Horowitz. “Quality standards are getting stricter; that’s why companies and researchers are working on high-tech solutions.”

Source: Meet the robots being used to help solve America’s recycling crisis

Intellectual Debt (in AI): With Great Power Comes Great Ignorance

For example, aspirin was discovered in 1897, and an explanation of how it works followed in 1995. That, in turn, has spurred some research leads on making better pain relievers through something other than trial and error.

This kind of discovery — answers first, explanations later — I call “intellectual debt.” We gain insight into what works without knowing why it works. We can put that insight to use immediately, and then tell ourselves we’ll figure out the details later. Sometimes we pay off the debt quickly; sometimes, as with aspirin, it takes a century; and sometimes we never pay it off at all.

Be they of money or ideas, loans can offer great leverage. We can get the benefits of money — including use as investment to produce more wealth — before we’ve actually earned it, and we can deploy new ideas before having to plumb them to bedrock truth.

Indebtedness also carries risks. For intellectual debt, these risks can be quite profound, both because we are borrowing as a society, rather than individually, and because new technologies of artificial intelligence — specifically, machine learning — are bringing the old model of drug discovery to a seemingly unlimited number of new areas of inquiry. Humanity’s intellectual credit line is undergoing an extraordinary, unasked-for bump up in its limit.

[…]

Technical debt arises when systems are tweaked hastily, catering to an immediate need to save money or implement a new feature, while increasing long-term complexity. Anyone who has added a device every so often to a home entertainment system can attest to the way in which a series of seemingly sensible short-term improvements can produce an impenetrable rat’s nest of cables. When something stops working, this technical debt often needs to be paid down as an aggravating lump sum — likely by tearing the components out and rewiring them in a more coherent manner.

[…]

Machine learning has made remarkable strides thanks to theoretical breakthroughs, zippy new hardware, and unprecedented data availability. The distinct promise of machine learning lies in suggesting answers to fuzzy, open-ended questions by identifying patterns and making predictions.

[…]

Researchers have pointed out thorny problems of technical debt afflicting AI systems, problems that make it seem comparatively easy to find a retiree to decipher a bank system’s COBOL. They describe how machine learning models become embedded in larger ones and are then forgotten, even as their original training data goes stale and their accuracy declines.

But machine learning doesn’t merely implicate technical debt. There are some promising approaches to building machine learning systems that in fact can offer some explanations — sometimes at the cost of accuracy — but they are the rare exceptions. Otherwise, machine learning is fundamentally patterned like drug discovery, and it thus incurs intellectual debt. It stands to produce answers that work, without offering any underlying theory. While machine learning systems can surpass humans at pattern recognition and predictions, they generally cannot explain their answers in human-comprehensible terms. They are statistical correlation engines — they traffic in byzantine patterns with predictive utility, not neat articulations of relationships between cause and effect. Marrying power and inscrutability, they embody Arthur C. Clarke’s observation that any sufficiently advanced technology is indistinguishable from magic.

But here there is no David Copperfield or Ricky Jay who knows the secret behind the trick. No one does. Machine learning at its best gives us answers as succinct and impenetrable as those of a Magic 8-Ball — except they appear to be consistently right. When we accept those answers without independently trying to ascertain the theories that might animate them, we accrue intellectual debt.

Source: Intellectual Debt: With Great Power Comes Great Ignorance

Waymo and DeepMind mimic evolution to develop a new, better way to train self-driving AI

The two worked together to bring a training method called Population Based Training (PBT for short) to bear on Waymo’s challenge of building better virtual drivers, and the results were impressive — DeepMind says in a blog post that using PBT cut false positives by 24% in a network that identifies and places boxes around pedestrians, bicyclists and motorcyclists spotted by a Waymo vehicle’s many sensors. Not only that, but it also resulted in savings in terms of both training time and resources, using about 50% of both compared to the standard methods Waymo was using previously.

[…]

To step back a little, let’s look at what PBT even is. Basically, it’s a method of training that takes its cues from how Darwinian evolution works. Neural nets essentially work by trying something and then measuring those results against some kind of standard to see if their attempt is more “right” or more “wrong” based on the desired outcome.

[…]

But all that comparative training requires a huge amount of resources, and sorting the good from the bad relies either on the gut feeling of individual engineers or on massive-scale search with a manual component, where engineers “weed out” the worst-performing neural nets to free up processing capacity for better ones.

What DeepMind and Waymo did with this experiment was essentially automate that weeding, automatically killing off the “bad” training runs and replacing them with better-performing spin-offs of the best-in-class networks running the task. That’s where evolution comes in, since it amounts to a process of artificial natural selection.
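In outline, PBT alternates ordinary training with periodic “exploit and explore” steps: weak members of the population copy the weights and hyperparameters of strong ones, then perturb those hyperparameters. A toy sketch of that loop (an illustration of the idea, not DeepMind’s implementation):

```python
import copy
import random

# Toy Population Based Training loop. Each member carries hyperparameters,
# model state, and a fitness score.
population = [{"lr": 10 ** random.uniform(-4, -2), "weights": None, "score": 0.0}
              for _ in range(10)]

def train_and_eval(member):
    # Placeholder: train the member's network for a few steps, return fitness.
    return random.random()

for generation in range(20):
    for member in population:
        member["score"] = train_and_eval(member)

    population.sort(key=lambda m: m["score"], reverse=True)
    top, bottom = population[:2], population[-2:]

    for loser in bottom:
        winner = random.choice(top)
        # Exploit: copy a winner's weights and hyperparameters.
        loser["weights"] = copy.deepcopy(winner["weights"])
        loser["lr"] = winner["lr"]
        # Explore: perturb the copied hyperparameters.
        loser["lr"] *= random.choice([0.8, 1.2])

print("best learning rate:", population[0]["lr"])
```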

Source: Waymo and DeepMind mimic evolution to develop a new, better way to train self-driving AI | TechCrunch

Wow, I hate when people actually write at you to read a sentence again (cut out for your mental wellness).

IBM gives cancer-killing drug AI projects to the open source community

Researchers from IBM’s Computational Systems Biology group in Zurich are working on AI and machine learning (ML) approaches to “help to accelerate our understanding of the leading drivers and molecular mechanisms of these complex diseases,” as well as methods to improve our knowledge of tumor composition.

“Our goal is to deepen our understanding of cancer to equip industries and academia with the knowledge that could potentially one day help fuel new treatments and therapies,” IBM says.

The first project, dubbed PaccMann — not to be confused with the popular Pac-Man computer game — is described as the “Prediction of anticancer compound sensitivity with Multi-modal attention-based neural networks.”

[…]

The ML algorithm exploits data on gene expression as well as the molecular structures of chemical compounds. IBM says that by identifying potential anti-cancer compounds earlier, the approach can cut the costs associated with drug development.

[…]

The second project is called “Interaction Network infErence from vectoR representATions of words,” otherwise known as INtERAcT. This tool is a particularly interesting one given its automatic extraction of data from valuable scientific papers related to our understanding of cancer.

With roughly 17,000 papers published every year in the field of cancer research, it can be difficult — if not impossible — for researchers to keep up with every small step we make in our understanding.

[…]

INtERAcT aims to make the academic side of research less of a burden by automatically extracting information from these papers. At the moment, the tool is being tested on extracting data related to protein-protein interactions — an area of study which has been marked as a potential cause of the disruption of biological processes in diseases including cancer.

[…]

The third and final project is “pathway-induced multiple kernel learning,” or PIMKL. This algorithm utilizes datasets describing what we currently know when it comes to molecular interactions in order to predict the progression of cancer and potential relapses in patients.

PIMKL uses what is known as multiple kernel learning to identify molecular pathways crucial for categorizing patients, giving healthcare professionals an opportunity to individualize and tailor treatment plans.
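Multiple kernel learning combines several similarity measures (kernels), here one per molecular pathway, into a single kernel a classifier can use; the learned combination weights indicate which pathways matter. A minimal sketch with fixed weights, which is not IBM’s PIMKL code and omits the weight-learning step:

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel, linear_kernel
from sklearn.svm import SVC

# Toy data: each row is a patient's molecular profile, y is an outcome label.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 20))
y = rng.integers(0, 2, size=100)

# One kernel per (hypothetical) pathway view of the data.
K1 = rbf_kernel(X[:, :10])     # e.g. genes in pathway A
K2 = linear_kernel(X[:, 10:])  # e.g. genes in pathway B

# Fixed-weight combination; true MKL (as in PIMKL) learns these weights,
# and the learned weights reveal which pathways drive the prediction.
weights = [0.6, 0.4]
K = weights[0] * K1 + weights[1] * K2

clf = SVC(kernel="precomputed").fit(K, y)
print("training accuracy:", clf.score(K, y))
```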

PaccMann’s and INtERAcT’s code has been released and is available on the projects’ websites. PIMKL has been deployed on the IBM Cloud and its source code has also been released.

Source: IBM gives cancer-killing drug AI project to the open source community | ZDNet

But now the big question: will they maintain it?

Machine learning has been used to automatically translate long-lost languages

Jiaming Luo and Regina Barzilay from MIT, together with Yuan Cao from Google’s AI lab in Mountain View, California, have developed a machine-learning system capable of deciphering lost languages, and they’ve demonstrated it by having it decipher Linear B—the first time this has been done automatically. The approach they used was very different from standard machine translation techniques.

First some background. The big idea behind machine translation is the understanding that words are related to each other in similar ways, regardless of the language involved.

So the process begins by mapping out these relations for a specific language. This requires huge databases of text. A machine then searches this text to see how often each word appears next to every other word. This pattern of appearances is a unique signature that defines the word in a multidimensional parameter space. Indeed, the word can be thought of as a vector within this space. And this vector acts as a powerful constraint on how the word can appear in any translation the machine comes up with.

These vectors obey some simple mathematical rules. For example: king – man + woman = queen. And a sentence can be thought of as a set of vectors that follow one after the other to form a kind of trajectory through this space.
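This vector arithmetic is easy to reproduce with any pretrained embedding set. A quick sketch using gensim’s downloader (assuming the publicly distributed glove-wiki-gigaword-100 vectors):

```python
import gensim.downloader as api

# Load pretrained word vectors (gensim fetches them on first use).
vectors = api.load("glove-wiki-gigaword-100")

# king - man + woman ≈ queen: the classic vector-arithmetic analogy.
result = vectors.most_similar(positive=["king", "woman"], negative=["man"], topn=1)
print(result)  # typically [('queen', ...)]
```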

The key insight enabling machine translation is that words in different languages occupy the same points in their respective parameter spaces. That makes it possible to map an entire language onto another language with a one-to-one correspondence.

In this way, the process of translating sentences becomes the process of finding similar trajectories through these spaces. The machine never even needs to “know” what the sentences mean.

This process relies crucially on large data sets. But a couple of years ago, a German team of researchers showed how a similar approach with much smaller databases could help translate much rarer languages that lack big bodies of text. The trick is to find a different way to constrain the machine approach, one that doesn’t rely on a huge database.

Now Luo and co have gone further to show how machine translation can decipher languages that have been lost entirely. The constraint they use has to do with the way languages are known to evolve over time.

The idea is that any language can change in only certain ways—for example, the symbols in related languages appear with similar distributions, related words have the same order of characters, and so on. With these rules constraining the machine, it becomes much easier to decipher a language, provided the progenitor language is known.  

Luo and co put the technique to the test with two lost languages, Linear B and Ugaritic. Linguists know that Linear B encodes an early version of ancient Greek and that Ugaritic, which was discovered in 1929, is an early form of Hebrew.

Given that information and the constraints imposed by linguistic evolution, Luo and co’s machine is able to translate both languages with remarkable accuracy. “We were able to correctly translate 67.3% of Linear B cognates into their Greek equivalents in the decipherment scenario,” they say. “To the best of our knowledge, our experiment is the first attempt of deciphering Linear B automatically.”

That’s impressive work that takes machine translation to a new level. But it also raises the interesting question of other lost languages—particularly those that have never been deciphered, such as Linear A.

In this paper, Linear A is conspicuous by its absence. Luo and co do not even mention it, but it must loom large in their thinking, as it does for all linguists. Yet significant breakthroughs are still needed before this script becomes amenable to machine translation.

For example, nobody knows what language Linear A encodes. Attempts to decipher it into ancient Greek have all failed. And without the progenitor language, the new technique does not work.

But the big advantage of machine-based approaches is that they can test one language after another quickly without becoming fatigued. So it’s quite possible that Luo and co might tackle Linear A with a brute-force approach—simply attempt to decipher it into every language for which machine translation already operates.

Source: Machine learning has been used to automatically translate long-lost languages – MIT Technology Review

Good luck deleting someone’s private info from a trained neural network – it’s likely to bork the whole thing

AI systems have weird memories. The machines desperately cling onto the data they’ve been trained on, making it difficult to delete bits of it. In fact, they often have to be completely retrained from scratch with the newer, smaller dataset.

That’s no good in an age where individuals can request their personal data be removed from company databases under the EU GDPR rules. How do you remove a person’s data from a machine-learning model that has already been trained? A 2017 research paper by law and policy academics hinted that it may even be impossible.

“Deletion is difficult because most machine learning models are complex black boxes so it is not clear how a data point or a set of data points is really being used,” James Zou, an assistant professor of biomedical data science at Stanford University, told The Register.

In order to leave out specific data, models will often have to be retrained with the newer, smaller dataset. That’s a pain as it costs money and time.

The research, led by Antonio Ginart, a PhD student at Stanford University, studied the problem of deleting data from machine learning models and crafted two “provably deletion efficient algorithms” that remove data across six different datasets for k-means clustering models (k-means being an unsupervised machine-learning method for grouping data). The results were released in a paper on arXiv this week.

The trick is to assess the impacts of deleting data from a trained model. In some cases, it can lead to a decrease in the system’s performance.

“First, quickly check to see if deleting a data point would have any effect on the machine learning model at all – there are settings where there’s no effect and so we can perform this check very efficiently. Second, see if the data to be deleted only affects some local component of the learning system and just update locally,” Zou explained.
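For k-means in particular, the “update locally” idea is tractable because a centroid is just the mean of its assigned points, so deleting one point only requires downdating the mean of the single affected cluster. A minimal sketch of that idea (not Ginart et al.’s actual algorithms):

```python
import numpy as np

def delete_point(X, assignments, centroids, counts, idx):
    """Remove point idx from a fitted k-means model by updating only
    the centroid of the cluster it belonged to (illustration only)."""
    c = assignments[idx]
    x = X[idx]
    # Downdate the running mean: mean' = (n * mean - x) / (n - 1)
    centroids[c] = (counts[c] * centroids[c] - x) / (counts[c] - 1)
    counts[c] -= 1
    keep = np.arange(len(X)) != idx
    return X[keep], assignments[keep], centroids, counts

# Toy fitted model: two clusters in 2-D.
X = np.array([[0.0, 0.0], [0.2, 0.1], [5.0, 5.0], [5.2, 4.9]])
assignments = np.array([0, 0, 1, 1])
counts = np.array([2, 2])
centroids = np.array([X[:2].mean(axis=0), X[2:].mean(axis=0)])

X, assignments, centroids, counts = delete_point(X, assignments, centroids, counts, 1)
print(centroids[0])  # centroid of cluster 0 now equals its one remaining point
```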

It seems to work okay for k-means clustering models under certain circumstances, when the data can be more easily separated. But when it comes to systems that aren’t deterministic like modern deep learning models, it’s incredibly difficult to delete data.

Zou said it isn’t entirely impossible, however. “We don’t have tools just yet but we are hoping to develop these deletion tools in the next few months.” ®

Source: Good luck deleting someone’s private info from a trained neural network – it’s likely to bork the whole thing • The Register

‘Superhuman’ AI Crushes Poker Pros at Six-Player Texas Hold’em

Computer scientists have developed a card-playing bot, called Pluribus, capable of defeating some of the world’s best players at six-person no-limit Texas hold’em poker, in what’s considered an important breakthrough in artificial intelligence.

Two years ago, a research team from Carnegie Mellon University developed a similar poker-playing system, called Libratus, which consistently defeated the world’s best players at one-on-one Heads-Up, No-Limit Texas Hold’em poker. The creators of Libratus, Tuomas Sandholm and Noam Brown, have now upped the stakes, unveiling a new system capable of playing six-player no-limit Texas hold’em poker, a wildly popular version of the game.

In a series of contests, Pluribus handily defeated its professional human opponents, at a level the researchers described as “superhuman.” When pitted against professional human opponents with real money involved, Pluribus managed to collect winnings at an astounding rate of $1,000 per hour. Details of this achievement were published today in Science.

[…]

For the new study, Brown and Sandholm subjected Pluribus to two challenging tests. The first pitted Pluribus against 13 different professional players—all of whom have earned more than $1 million in poker winnings—in the six-player version of the game. The second test involved matches featuring two poker legends, Darren Elias and Chris “Jesus” Ferguson, each of whom was pitted against five identical copies of Pluribus.

The matches with five humans and Pluribus involved 10,000 hands played over 12 days. To incentivize the human players, a total of $50,000 was distributed among the participants, Pluribus included. The games were blind in that none of the human players were told who they were playing, though each player had a consistent alias used throughout the competition. For the tests involving a lone human and five Pluribuses, each player was given $2,000 for participating and a bonus $2,000 for playing better than their human cohort. Elias and Ferguson both played 5,000 separate hands against their machine opponents.

In all scenarios, Pluribus registered wins with “statistical significance,” and to a degree the researchers referred to as “superhuman.”

“We mean superhuman in the sense that it performs better than the best humans,” said Brown, who is completing his Ph.D. while working as a research scientist at Facebook AI. “The bot won by about five big blinds per hundred hands of poker (bb/100) when playing against five elite human professionals, which professionals consider to be a very high win rate. To beat elite professionals by that margin is considered a decisive win.”

[…]

Before the competition started, Pluribus developed its own “blueprint” strategy, which it did by playing poker with itself for eight straight days.

“Pluribus does not use any human gameplay data to form its strategy,” explained Brown. “Instead, Pluribus first uses self-play, in which it plays against itself over trillions of hands to formulate a basic strategy. It starts by playing completely randomly. As it plays more and more hands against itself, its strategy gradually improves as it learns which actions lead to winning more money. This is all done offline before ever playing against humans.”
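Pluribus’s actual self-play method is a form of counterfactual regret minimization; the flavor of “start random, improve by tracking which actions would have done better” can be shown with regret matching on a toy game like rock-paper-scissors. A sketch, vastly simpler than Pluribus itself:

```python
import random

# Regret matching via self-play on rock-paper-scissors (toy illustration;
# Pluribus uses a far more sophisticated counterfactual-regret method).
ACTIONS = 3  # rock, paper, scissors
PAYOFF = [[0, -1, 1], [1, 0, -1], [-1, 1, 0]]  # row player's payoff

def strategy(regrets):
    positive = [max(r, 0.0) for r in regrets]
    total = sum(positive)
    return [p / total for p in positive] if total > 0 else [1.0 / ACTIONS] * ACTIONS

def sample(probs):
    return random.choices(range(ACTIONS), weights=probs)[0]

regrets = [0.0] * ACTIONS
strategy_sum = [0.0] * ACTIONS

for _ in range(100_000):
    probs = strategy(regrets)
    my_action = sample(probs)
    opp_action = sample(probs)  # self-play: opponent uses the same strategy
    # Accumulate regret for not having played each alternative action.
    for a in range(ACTIONS):
        regrets[a] += PAYOFF[a][opp_action] - PAYOFF[my_action][opp_action]
        strategy_sum[a] += probs[a]

average = [s / sum(strategy_sum) for s in strategy_sum]
print(average)  # converges toward the equilibrium [1/3, 1/3, 1/3]
```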

Armed with its blueprint strategy, the competitions could begin. After the first bets were placed, Pluribus calculated several possible next moves for each opponent, in a manner similar to how machines play chess and Go. The difference here, however, is that Pluribus was not tasked to calculate the entire game, as that would be “computationally prohibitive,” as noted by the researchers.

“In Pluribus, we used a new way of doing search that doesn’t have to search all the way to the end of the game,” said Brown. “Instead, it can stop after a few moves. This makes the search algorithm much more scalable. In particular, it allows us to reach superhuman performance while only training for the equivalent of less than $150 on a cloud computing service, and playing in real time on just two CPUs.”

[…]

Importantly, Pluribus was also programmed to be unpredictable—a fundamental aspect of good poker gamesmanship. If Pluribus consistently bet tons of money when it figured it had the best hand, for example, its opponents would eventually catch on. To remedy this, the system was programmed to play in a “balanced” manner, employing a set of strategies, like bluffing, that prevented Pluribus’ opponents from picking up on its tendencies and habits.

Source: ‘Superhuman’ AI Crushes Poker Pros at Six-Player Texas Hold’em

AI Trained on Old Scientific Papers Makes Discoveries Humans Missed

In a study published in Nature on July 3, researchers from the Lawrence Berkeley National Laboratory used an algorithm called Word2Vec to sift through scientific papers for connections humans had missed. Their algorithm then spit out predictions for possible thermoelectric materials, which convert heat to energy and are used in many heating and cooling applications.

The algorithm didn’t know the definition of thermoelectric, though. It received no training in materials science. Using only word associations, the algorithm was able to provide candidates for future thermoelectric materials, some of which may be better than those we currently use.

[…]

To train the algorithm, the researchers assessed the language in 3.3 million abstracts related to materials science, ending up with a vocabulary of about 500,000 words. They fed the abstracts to Word2vec, which used machine learning to analyze relationships between words.

“The way that this Word2vec algorithm works is that you train a neural network model to remove each word and predict what the words next to it will be,” Jain said. “By training a neural network on a word, you get representations of words that can actually confer knowledge.”

Using just the words found in scientific abstracts, the algorithm was able to understand concepts such as the periodic table and the chemical structure of molecules. The algorithm linked words that were found close together, creating vectors of related words that helped define concepts. In some cases, words were linked to thermoelectric concepts but had never been written about as thermoelectric in any abstract they surveyed. This gap in knowledge is hard to catch with a human eye, but easy for an algorithm to spot.
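The pipeline the researchers describe maps onto standard tooling. A sketch of the same idea at toy scale with gensim (the study’s real corpus was 3.3 million abstracts; the sentences here are placeholders):

```python
from gensim.models import Word2Vec

# Toy stand-in for millions of tokenized materials-science abstracts.
abstracts = [
    ["bi2te3", "is", "a", "promising", "thermoelectric", "material"],
    ["the", "seebeck", "coefficient", "of", "bi2te3", "was", "measured"],
    ["pbte", "shows", "a", "high", "seebeck", "coefficient"],
]

# Train skip-gram embeddings; each word becomes a dense vector.
model = Word2Vec(abstracts, vector_size=50, window=5, min_count=1, sg=1)

# Materials whose vectors sit near "thermoelectric" become candidates,
# even if no abstract ever described them as thermoelectric directly.
print(model.wv.most_similar("thermoelectric", topn=3))
```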

After showing its capacity to predict future materials, researchers took their work back in time, virtually. They scrapped recent data and tested the algorithm on old papers, seeing if it could predict scientific discoveries before they happened. Once again, the algorithm worked.

In one experiment, researchers analyzed only papers published before 2009 and were able to predict one of the best modern-day thermoelectric materials four years before it was discovered in 2012.

This new application of machine learning goes beyond materials science. Because it’s not trained on a specific scientific dataset, you could easily apply it to other disciplines, retraining it on literature of whatever subject you wanted. Vahe Tshitoyan, the lead author on the study, says other researchers have already reached out, wanting to learn more.

“This algorithm is unsupervised and it builds its own connections,” Tshitoyan said. “You could use this for things like medical research or drug discovery. The information is out there. We just haven’t made these connections yet because you can’t read every article.”

Source: AI Trained on Old Scientific Papers Makes Discoveries Humans Missed – VICE

That this AI can simulate universes in 30ms is not the scary part. It’s that its creators don’t know why it works so well

The accuracy of the neural network is judged by how similar its outputs are to those produced by two more traditional N-body simulation systems, FastPM and 2LPT, when all three are given the same inputs. When D3M was tasked with producing 1,000 simulations from 1,000 sets of input data, it had a relative error of 2.8 per cent compared with FastPM, and 9.3 per cent compared with 2LPT, for the same inputs. That’s not too bad, considering it takes the model just 30 milliseconds to crank out a simulation. Not only does that save time, it’s also cheaper, since less compute power is needed.

To their surprise, the researchers also noticed that D3M seemed to be able to produce simulations of the universe from conditions that weren’t specifically included in the training data. During inference tests, the team tweaked input variables such as the amount of dark matter in the virtual universes, and the model still managed to spit out accurate simulations despite not being specifically trained for these changes.

“It’s like teaching image recognition software with lots of pictures of cats and dogs, but then it’s able to recognize elephants,” said Shirley Ho, first author of the paper and a group leader at the Flatiron Institute. “Nobody knows how it does this, and it’s a great mystery to be solved.

“We can be an interesting playground for a machine learner to use to see why this model extrapolates so well, why it extrapolates to elephants instead of just recognizing cats and dogs. It’s a two-way street between science and deep learning.”

The source code for the neural networks has been released publicly.

Source: That this AI can simulate universes in 30ms is not the scary part. It’s that its creators don’t know why it works so well • The Register

EU should ban AI-powered citizen scoring and mass surveillance, say experts

A group of policy experts assembled by the EU has recommended that it ban the use of AI for mass surveillance and mass “scoring of individuals”, a practice that potentially involves collecting varied data about citizens — everything from criminal records to their behavior on social media — and then using it to assess their moral or ethical integrity.

The recommendations are part of the EU’s ongoing efforts to establish itself as a leader in so-called “ethical AI.” Earlier this year, it released its first guidelines on the topic, stating that AI in the EU should be deployed in a trustworthy and “human-centric” manner.

The new report offers more specific recommendations. These include identifying areas of AI research that require funding; encouraging the EU to incorporate AI training into schools and universities; and suggesting new methods to monitor the impact of AI. However, the paper is only a set of recommendations at this point, and not a blueprint for legislation.

Notably, the suggestions that the EU should ban AI-enabled mass scoring and limit mass surveillance are some of the report’s relatively few concrete recommendations. (Often, the report’s authors simply suggest that further investigation is needed in this or that area.)

Source: EU should ban AI-powered citizen scoring and mass surveillance, say experts – The Verge

Samsung’s AI animates paintings and photos without 3D modeling

Engineers and researchers from Samsung’s AI Center in Moscow and Skolkovo Institute of Science and Technology have created a model that can generate realistic animated talking heads from images without relying on traditional methods, like 3D modeling.

[…]

“Effectively, the learned model serves as a realistic avatar of a person,” said engineer Egor Zakharov in a video explaining the results.

Well-known faces seen in the paper include Marilyn Monroe, Albert Einstein, Leonardo da Vinci’s Mona Lisa, and RZA from the Wu Tang Clan. The technology, which focuses on synthesizing photorealistic head images and facial landmarks, could be applied to video games, video conferences, or digital avatars like the kind now available on Samsung’s Galaxy S10. Facebook is also working on realistic avatars for its virtual reality initiatives.

Such tech could clearly also be used to create deepfakes.

Few-shot learning means the model can begin to animate a face using just a few images of an individual, or even a single image. Meta training with the VoxCeleb2 data set of videos is carried out before the model can animate previously unseen faces.

During the training process, the system creates three neural networks: an embedder network that maps frames to vectors, a generator network that maps facial landmarks into the synthesized video, and a discriminator network that assesses the realism and pose of the generated images.

Source: Samsung’s AI animates paintings and photos without 3D modeling | VentureBeat

DARPA wants to develop AI fighter program to augment human pilots

DARPA, the US military research arm, has launched a program to train fighter jets to engage in aerial battle autonomously with the help of AI algorithms.

The Air Combat Evolution (ACE) program seeks to create military planes that are capable of performing combat maneuvers for dogfighting without the help of human pilots. Vehicles won’t be completely unmanned, however. DARPA is more interested in forging stronger teamwork between humans and machines.

The end goal is to have autonomous jet controls that can handle tasks like dodging out of the way of enemy fire at lightning speed, while the pilot takes on more difficult problems like executing strategic battle commands and firing off weapons.

“We envision a future in which AI handles the split-second maneuvering during within-visual-range dogfights, keeping pilots safer and more effective as they orchestrate large numbers of unmanned systems into a web of overwhelming combat effects,” said Lieutenant Colonel Dan Javorsek, ACE program manager.

It’s part of DARPA’s larger vision of “mosaic warfare.” The idea here is that combat is fought by a mixture of manned and unmanned systems working together. The hope is these unmanned systems can be rapidly developed, and are easily adaptable through technological upgrades so that they can help the military cope with changing conditions.

“Linking together manned aircraft with significantly cheaper unmanned systems creates a ‘mosaic’ where the individual ‘pieces’ can easily be recomposed to create different effects or quickly replaced if destroyed, resulting in a more resilient warfighting capability,” DARPA said in a statement.

The ACE program will initially focus on teaching AI in a similar way that new pilots are trained. Computer vision algorithms will learn basic battle maneuvers for close one-on-one combat. “Only after human pilots are confident that the AI algorithms are trustworthy in handling bounded, transparent and predictable behaviors will the aerial engagement scenarios increase in difficulty and realism,” Javorsek said.

“Following virtual testing, we plan to demonstrate the dogfighting algorithms on sub-scale aircraft leading ultimately to live, full-scale manned-unmanned team dogfighting with operationally representative aircraft.”

DARPA is welcoming R&D proposals from academics and companies for its program and will fund the effort. Successful candidates will engage in the “AlphaDogfight Trials,” where these AI-crafted fighter planes will test one another in a competition to find the best algorithm.

“Being able to trust autonomy is critical as we move toward a future of warfare involving manned platforms fighting alongside unmanned systems,” said Javorsek.

Source: Take my bits awaaaay: DARPA wants to develop AI fighter program to augment human pilots • The Register

Amazing AI Generates Entire Bodies of People Who Don’t Exist

A new deep learning algorithm can generate high-resolution, photorealistic images of people — faces, hair, outfits, and all — from scratch.

The AI-generated models are the most realistic we’ve encountered, and the tech will soon be licensed out to clothing companies and advertising agencies interested in whipping up photogenic models without paying for lights or a catering budget. At the same time, similar algorithms could be misused to undermine public trust in digital media.

[…]

In a video showing off the tech, the AI morphs and poses model after model as their outfits transform, bomber jackets turning into winter coats and dresses melting into graphic tees.

Specifically, the new algorithm is a Generative Adversarial Network (GAN). That’s the kind of AI typically used to churn out new imitations of something that exists in the real world, whether they be video game levels or images that look like hand-drawn caricatures.
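The adversarial setup itself is compact: a generator maps random noise to samples, a discriminator scores real versus fake, and the two train against each other. A toy sketch that learns a 1-D Gaussian (nothing like the scale of the body-generating model, but the same adversarial pattern):

```python
import torch
import torch.nn as nn

# Toy GAN: learn to imitate samples drawn from N(4, 1). Illustration only.
G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) + 4.0   # samples from the target distribution
    fake = G(torch.randn(64, 8))      # generator output from noise

    # Discriminator: push real toward 1, fake toward 0.
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()

    # Generator: try to fool the discriminator into outputting 1.
    loss_g = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()

print(G(torch.randn(256, 8)).mean().item())  # should approach 4.0
```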

Source: Amazing AI Generates Entire Bodies of People Who Don’t Exist

Research Findings May Lead to More Explainable AI | College of Computing

Why did the frog cross the road? Well, a new artificial intelligence (AI) agent that can play the classic arcade game Frogger not only can tell you why it crossed the road, but can justify its every move in everyday language.

Developed by Georgia Tech, in collaboration with Cornell and the University of Kentucky, the work enables an AI agent to provide a rationale for a mistake or errant behavior, and to explain it in a way that is easy for non-experts to understand.

This, the researchers say, may help robots and other types of AI agents seem more relatable and trustworthy to humans. They also say their findings are an important step toward a more transparent, human-centered AI design that understands people’s preferences and prioritizes people’s needs.

“If the power of AI is to be democratized, it needs to be accessible to anyone regardless of their technical abilities,” said Upol Ehsan, Ph.D. student in the School of Interactive Computing at Georgia Tech and lead researcher.

“As AI pervades all aspects of our lives, there is a distinct need for human-centered AI design that makes black-boxed AI systems explainable to everyday users. Our work takes a formative step toward understanding the role of language-based explanations and how humans perceive them.”

The study was supported by the Office of Naval Research (ONR).

Researchers developed a participant study to determine if their AI agent could offer rationales that mimicked human responses. Spectators watched the AI agent play the videogame Frogger and then ranked three on-screen rationales in order of how well each described the AI’s game move.

Of the three anonymized justifications for each move – a human-generated response, the AI-agent response, and a randomly generated response – the participants preferred the human-generated rationales first, but the AI-generated responses were a close second.

Frogger offered the researchers the chance to train an AI in a “sequential decision-making environment,” which is a significant research challenge because decisions that the agent has already made influence future decisions. Therefore, explaining the chain of reasoning to experts is difficult, and even more so when communicating with non-experts, according to researchers.

[…]

By a 3-to-1 margin, participants favored answers that were classified in the “complete picture” category. Responses showed that people appreciated the AI thinking about future steps rather than just the present moment, since an agent focused only on the moment might be more prone to making another mistake. People also wanted to know more so that they might directly help the AI fix the errant behavior.

[…]

The research was presented in March at the Association for Computing Machinery’s Intelligent User Interfaces 2019 Conference. The paper is titled Automated Rationale Generation: A Technique for Explainable AI and its Effects on Human Perceptions. Ehsan will present a position paper highlighting the design and evaluation challenges of human-centered Explainable AI systems at the upcoming Emerging Perspectives in Human-Centered Machine Learning workshop at the ACM CHI 2019 conference, May 4-9, in Glasgow, Scotland.

Source: Research Findings May Lead to More Explainable AI | College of Computing

AI predicts hospital readmission rates from clinical notes

Electronic health records store valuable information about hospital patients, but they’re often sparse and unstructured, making them difficult for potentially labor- and time-saving AI systems to parse. Fortunately, researchers at New York University and Princeton have developed a framework that evaluates clinical notes (i.e., descriptions of symptoms, reasons for diagnoses, and radiology results) and autonomously assigns a risk score indicating whether patients will be readmitted within 30 days. They claim that the code and model parameters, which are publicly available on Github, handily outperform baselines.
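The excerpt doesn’t name the model, but the task shape (free-text notes in, a 30-day readmission risk score out) has a familiar baseline form. A hedged sketch using bag-of-words features and logistic regression; this is a generic stand-in, not the NYU/Princeton model:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy clinical notes and 30-day readmission labels (stand-in data).
notes = [
    "patient admitted with chest pain, discharged stable",
    "chf exacerbation, third admission this year, poor med adherence",
    "routine post-op recovery, no complications",
    "copd flare, oxygen dependent at home, frequent admissions",
]
readmitted = [0, 1, 0, 1]

# TF-IDF bag-of-words plus logistic regression: a simple baseline for
# turning unstructured notes into a risk score.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(notes, readmitted)

# predict_proba yields a readmission risk score between 0 and 1 per note.
print(model.predict_proba(["recurrent chf admission, med adherence issues"])[:, 1])
```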

“Accurately predicting readmission has clinical significance both in terms of efficiency and reducing the burden on intensive care unit doctors,” the paper’s authors wrote. “One estimate puts the financial burden of readmission at $17.9 billion dollars and the fraction of avoidable admissions at 76 percent.”

Source: AI predicts hospital readmission rates from clinical notes | VentureBeat

Nonprofit OpenAI looks at the bill to craft a Holy Grail AGI, gulps, spawns commercial arm to bag investors’ mega-bucks – the end of Open in OpenAI?

OpenAI, a leading machine-learning lab, has launched for-profit spin-off OpenAI LP – so it can put investors’ cash toward the expensive task of building artificial general intelligence.

The San-Francisco-headquartered organisation was founded in late 2015 as a nonprofit, with a mission to build, and encourage the development of, advanced neural network systems that are safe and beneficial to humanity.

It was backed by notable figures including killer-AI-fearing Elon Musk, who has since left the board, and Sam Altman, the former president of Silicon Valley VC firm Y Combinator. Altman stepped down as YC president last week to focus more on OpenAI.

Altman is now CEO of OpenAI LP. Greg Brockman, co-founder and CTO, and Ilya Sutskever, co-founder and chief scientist, are also heading over to the commercial side and keeping their roles in the new organization. OpenAI LP stated clearly that it wants to “raise investment capital and attract employees with startup-like equity.”

There is still a nonprofit wing, imaginatively named OpenAI Nonprofit, though it is a much smaller entity considering most of the lab’s hundred or so employees have switched over to the commercial side, OpenAI LP, to reap the benefits of its stock options.

“We’ve experienced firsthand that the most dramatic AI systems use the most computational power in addition to algorithmic innovations, and decided to scale much faster than we’d planned when starting OpenAI,” the lab’s management said in a statement this week. “We’ll need to invest billions of dollars in upcoming years into large-scale cloud compute, attracting and retaining talented people, and building AI supercomputers.”

OpenAI refers to this odd split between OpenAI LP and OpenAI Nonprofit as a “capped-profit” company. The initial round of investors, including LinkedIn cofounder Reid Hoffman and Khosla Ventures, are in line to receive 100 times the amount they’ve invested from OpenAI LP’s profits, if everything goes to plan. Any excess funds afterwards will be handed over to the non-profit side. To pay back these early investors, and then some, OpenAI LP will therefore have to find ways to generate fat profits from its technologies.

The “capped-profit” model has raised eyebrows. Several machine-learning experts told The Register they were somewhat disappointed by OpenAI’s decision. It once stood out among other AI orgs for its nonprofit status, its focus on developing machine-learning know-how independent of profit and product incentives, and its dedication to open-source research.

Now, for some, it appears to be just another profit-driven Silicon Valley startup stocked with well-paid engineers and boffins.

Source: Nonprofit OpenAI looks at the bill to craft a Holy Grail AGI, gulps, spawns commercial arm to bag investors’ mega-bucks • The Register

Researchers are training image-generating AI with fewer labels by letting the model infer the labels

Generative AI models have a propensity for learning complex data distributions, which is why they’re great at producing human-like speech and convincing images of burgers and faces. But training these models requires lots of labeled data, and depending on the task at hand, the necessary corpora are sometimes in short supply.

The solution might lie in an approach proposed by researchers at Google and ETH Zurich. In a paper published on the preprint server Arxiv.org (“High-Fidelity Image Generation With Fewer Labels“), they describe a “semantic extractor” that can pull out features from training data, along with methods of inferring labels for an entire training set from a small subset of labeled images. These self- and semi-supervised techniques together, they say, can outperform state-of-the-art methods on popular benchmarks like ImageNet.

“In a nutshell, instead of providing hand-annotated ground truth labels for real images to the discriminator, we … provide inferred ones,” the paper’s authors explained.

In one of several unsupervised methods the researchers posit, they first extract a feature representation — a set of techniques for automatically discovering the representations needed for raw data classification — on a target training dataset using the aforementioned feature extractor. They then perform cluster analysis — i.e., grouping the representations in such a way that those in the same group share more in common than those in other groups. And lastly, they train a GAN — a two-part neural network consisting of generators that produce samples and discriminators that attempt to distinguish between the generated samples and real-world samples — by inferring labels.
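The label-inference step can be sketched separately from the GAN itself: extract features with a pretrained network, cluster them, and hand the cluster IDs to the discriminator as stand-in labels. A rough illustration (using a pretrained ResNet where the paper trains its own self-supervised extractor):

```python
import numpy as np
import torch
import torchvision.models as models
from sklearn.cluster import KMeans

# Feature extractor: a pretrained ResNet with its classifier head removed.
# (The paper trains a self-supervised extractor; a pretrained one stands in here.)
resnet = models.resnet18(pretrained=True)
extractor = torch.nn.Sequential(*list(resnet.children())[:-1])
extractor.eval()

# Stand-in batch of images (normally: the unlabeled training set).
images = torch.randn(64, 3, 224, 224)
with torch.no_grad():
    features = extractor(images).squeeze(-1).squeeze(-1).numpy()

# Cluster the representations; cluster IDs become inferred labels that a
# conditional GAN's discriminator can consume instead of hand annotations.
inferred_labels = KMeans(n_clusters=10, n_init=10).fit_predict(features)
print(np.bincount(inferred_labels))
```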

Source: Researchers are training image-generating AI with fewer labels | VentureBeat

Google launches TensorFlow Lite 1.0 for mobile and embedded devices

Google today introduced TensorFlow Lite 1.0, its framework for developers deploying AI models on mobile and IoT devices. Improvements include selective registration and quantization during and after training for faster, smaller models. Quantization has led to 4 times compression of some models.

“We are going to fully support it. We’re not going to break things and make sure we guarantee its compatibility. I think a lot of people who deploy this on phones want those guarantees,” TensorFlow engineering director Rajat Monga told VentureBeat in a phone interview.

The workflow begins with training AI models in TensorFlow; the models are then converted into Lite models for running on mobile devices. Lite was first introduced at the I/O developer conference in May 2017 and entered developer preview later that year.
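In practice the train-then-convert flow is a few lines. A sketch of converting a Keras model with post-training quantization, using the current TensorFlow API (details may have differed at the 1.0 release):

```python
import tensorflow as tf

# Train (or load) an ordinary Keras model first.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(4,)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")

# Convert to TensorFlow Lite with post-training quantization,
# shrinking the model for mobile and embedded deployment.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```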

The TensorFlow Lite team at Google also shared its roadmap today, designed to shrink and speed up AI models for edge deployment, including model acceleration (especially for Android developers using neural nets), a Keras-based connection pruning kit, and additional quantization enhancements.

Other changes on the way:

  • Support for control flow, which is essential to the operation of models like recurrent neural networks
  • CPU performance optimization with Lite models, potentially involving partnerships with other companies
  • Expand coverage of GPU delegate operations and finalize the API to make it generally available

A TensorFlow 2.0 model converter for making Lite models will be made available so developers can better understand how things go wrong in the conversion process and how to fix them.

TensorFlow Lite is deployed on more than two billion devices today, TensorFlow Lite engineer Raziel Alvarez said onstage at the TensorFlow Dev Summit, held at Google offices in Sunnyvale, California.

TensorFlow Lite increasingly makes TensorFlow Mobile obsolete, except for users who want to use it for training, and a solution for that is in the works, Alvarez said.

Source: Google launches TensorFlow Lite 1.0 for mobile and embedded devices | VentureBeat

Google’s DeepMind can predict wind energy income a day in advance

Wind power has become increasingly popular, but its success is limited by the fact that wind comes and goes as it pleases, making it hard for power grids to count on the renewable energy and less likely to fully embrace it. While we can’t control the wind, Google has an idea for the next best thing: using machine learning to predict it.

Google and DeepMind have started testing machine learning on Google’s own wind turbines, which are part of the company’s renewable energy projects. Beginning last year, they fed weather forecasts and existing turbine data into DeepMind’s machine learning platform, which churned out wind power predictions 36 hours ahead of actual power generation. Google could then make supply commitments to power grids a full day before delivery. That predictability makes it easier and more appealing for energy grids to depend on wind power, and as a result, it boosted the value of Google’s wind energy by roughly 20 percent.

Not only does this hint at how machine learning could boost the adoption of wind energy, it’s also an example of machine learning being put to good use — solving critical problems and not just jumping into your text thread to recommend a restaurant when you start talking about tapas. For DeepMind, it’s a high-profile use of its technology and proof that it’s not only useful for beating up professional StarCraft II players.

Source: Google’s DeepMind can predict wind patterns a day in advance

IBM Brings Watson AI To The Private Cloud And Rival Public Cloud Platforms

IBM Watson Anywhere is built on top of Kubernetes, the open source orchestration engine that can be deployed in diverse environments. Since the Watson Anywhere platform is built as a set of microservices designed to run on Kubernetes, it is flexible and portable.

[…]

According to IBM, the microservices-based Watson Anywhere delivers two solutions –

Watson OpenScale: IBM’s open AI platform for managing multiple instances of AI, no matter where they were developed – including the ability to explain how AI decisions are being made in real time, for greater transparency and compliance.

Watson Assistant: IBM’s AI tool for building conversational interfaces into applications and devices. More advanced than a traditional chatbot, Watson Assistant intelligently determines when to search for a result, when to ask the user for clarification, and when to offload the user to a human for personal assistance. Also, the Watson Assistant Discovery Extension enables organizations to unlock hidden insights in unstructured data and documents.

IBM Cloud Private for Data is an extension of the hybrid cloud focused on data and analytics. According to IBM, it simplifies and unifies how customers collect, organize and analyze data to accelerate the value of data science and AI. The multi-cloud platform delivers a broad range of core data microservices, with the option to add more from a growing services catalog.

IBM Watson Anywhere is seamlessly integrated with Cloud Private for Data. The combination enables customers to manage end-to-end data workflows to help ensure that data is easily accessible for AI.

Source: IBM Brings Watson AI To The Private Cloud And Rival Public Cloud Platforms

This Person Does Not Exist Is the Best One-Off Website of 2019

At a glance, the images featured on the website This Person Does Not Exist might seem like random high school portraits or vaguely inadvisable LinkedIn headshots. But every single photo on the site has been created by using a special kind of artificial intelligence algorithm called generative adversarial networks (GANs).

Every time the site is refreshed, a shockingly realistic — but totally fake — picture of a person’s face appears. Uber software engineer Phillip Wang created the page to demonstrate what GANs are capable of, and then posted it to the public Facebook group “Artificial Intelligence & Deep Learning” on Tuesday.

The underlying code that made this possible, titled StyleGAN, was written by Nvidia and featured in a paper that has yet to be peer-reviewed. This exact type of neural network has the potential to revolutionize video game and 3D-modeling technology, but, as with almost any kind of technology, it could also be used for more sinister purposes. Deepfakes, or computer-generated images superimposed on existing pictures or videos, can be used to push fake news narratives or other hoaxes. That’s precisely why Wang chose to create the mesmerizing but also chilling website.

Source: This Person Does Not Exist Is the Best One-Off Website of 2019 | Inverse

Tencent-backed AI firm aims to free up parents and teachers from checking children’s maths homework – and analyses most common mistakes countrywide

A Beijing-based online education start-up has developed an artificial intelligence-powered maths app that can check children’s arithmetic problems through the simple snap of a photo. Based on the image and its internal database, the app automatically checks whether the answers are right or wrong.
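Once the handwriting has been OCR’d into text, the checking step itself is simple. A toy sketch of verifying recognized arithmetic lines (the real pipeline, vision model included, is of course far more involved):

```python
import ast
import operator

# Safely evaluate simple arithmetic expressions from OCR output.
OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv}

def evaluate(node):
    if isinstance(node, ast.BinOp) and type(node.op) in OPS:
        return OPS[type(node.op)](evaluate(node.left), evaluate(node.right))
    if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
        return node.value
    raise ValueError("unsupported expression")

def check(line):
    """Check an OCR'd line like '3+4=7' and return True/False."""
    expression, answer = line.replace(" ", "").split("=")
    return evaluate(ast.parse(expression, mode="eval").body) == float(answer)

for line in ["3+4=7", "12*3=35", "10/4=2.5"]:
    print(line, "correct" if check(line) else "wrong")
```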

Known as Xiaoyuan Kousuan, the free app launched by the Tencent Holdings-backed online education firm Yuanfudao has gained increasing popularity in China since its launch a year ago, and it claims to have checked an average of 70 million arithmetic problems per day, saving users around 40,000 hours in total.

Yuanfudao is also trying to build the country’s biggest education-related database generated from the everyday experiences of real students. Using this, the six-year-old company – which has a long line of big-name investors including Warburg Pincus, IDG Capital and Matrix Partners China – aims to reinvent how children are taught in China.

“By checking nearly 100 million problems every day, we have developed a deep understanding of the kind of mistakes students make when facing certain problems,” said Li Xin, co-founder of Yuanfudao – which means “ape tutor” in Chinese – in a recent interview. “The data gathered through the app can serve as a pillar for us to provide better online education courses.”

Source: Tencent-backed AI firm aims to free up parents and teachers from checking children’s maths homework | South China Morning Post