The Linkielist

Linking ideas with the world

FDA approves AI-powered software to detect diabetic retinopathy

30.3 million Americans have diabetes according to a 2015 CDC study. An additional 84.1 million have prediabetes, which often leads to the full disease within five years. It’s important to detect diabetes early to avoid health complications like heart disease, stroke, amputation of extremities and vision loss. Technology increasingly plays an important role in early detection, too. In that vein, the US Food and Drug Administration (FDA) has just approved an AI-powered device that can be used by non-specialists to detect diabetic retinopathy in adults with diabetes.

Diabetic retinopathy occurs when high levels of blood sugar in the bloodstream damage the retina’s blood vessels. It’s the most common cause of vision loss among people with diabetes, according to the FDA. The approval covers a device called IDx-DR, a software program that uses an AI algorithm to analyze images of the eye that can be taken in a regular doctor’s office with a special camera, the Topcon NW400.

The photos are then uploaded to a server running IDx-DR, which tells the doctor whether there is a more than mild level of diabetic retinopathy present. If not, it advises a re-screen in 12 months. The device and software can be used by health care providers who don’t normally provide eye care services. The FDA warns that you shouldn’t be screened with the device if you have had laser treatment, eye surgery or injections, or if you have other conditions, like persistent vision loss, blurred vision, floaters, previously diagnosed macular edema and more.
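In plain terms, the clinical workflow reduces to a binary decision with a built-in follow-up rule. A minimal sketch of that flow, with hypothetical function and field names (this is not IDx’s actual API):

```python
# Minimal sketch of the screening flow described above. The function name, result
# fields and threshold label are illustrative assumptions, not IDx's actual API.

def screening_advice(analysis_result):
    """Turn the server's analysis of the retinal photos into advice for the doctor."""
    if analysis_result["severity"] == "more_than_mild":
        return "Refer to an eye care professional for further evaluation."
    return "No referable diabetic retinopathy detected; re-screen in 12 months."

print(screening_advice({"severity": "more_than_mild"}))
```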

Source: FDA approves AI-powered software to detect diabetic retinopathy

After Millions of Trials, These Simulated Humans Learned to Do Perfect Backflips and Cartwheels

Using well-established machine learning techniques, researchers from the University of California, Berkeley have taught simulated humanoids to perform over 25 natural motions, from somersaults and cartwheels to high leg kicks and breakdancing. The technique could lead to more realistic video gameplay and more agile robots.

[…]

UC Berkeley graduate student Xue Bin “Jason” Peng, along with his colleagues, has combined two techniques—motion-capture technology and deep-reinforcement computer learning—to create something completely new: a system that teaches simulated humanoids how to perform complex physical tasks in a highly realistic manner. Learning from scratch, and with limited human intervention, the digital characters learned how to kick, jump, and flip their way to success. What’s more, they even learned how to interact with objects in their environment, such as barriers placed in their way or objects hurled directly at them.

[…]

The new system, dubbed DeepMimic, works a bit differently. Instead of pushing the simulated character towards a specific end goal, such as walking, DeepMimic uses motion-capture clips to “show” the AI what the end goal is supposed to look like. In experiments, Peng’s team took motion-capture data from more than 25 different physical skills, from running and throwing to jumping and backflips, to “define the desired style and appearance” of the skill, as Peng explained on the Berkeley Artificial Intelligence Research (BAIR) blog.

Results didn’t happen overnight. The virtual characters tripped, stumbled, and fell flat on their faces repeatedly until they finally got the movements right. It took about a month of simulated “practice” for each skill to develop, as the humanoids went through literally millions of trials trying to nail the perfect backflip or flying leg kick. But with each failure came an adjustment that took it closer to the desired goal.
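The core trick is the reward signal: instead of only rewarding the end goal, the agent is scored at every timestep on how closely it tracks the motion-capture reference. A rough sketch of that kind of imitation reward, with made-up weights and scales (the actual DeepMimic reward also includes end-effector and center-of-mass terms):

```python
import numpy as np

# Hedged sketch of a DeepMimic-style imitation reward: the simulated character is
# scored each timestep on how closely its joints track the mocap reference.
# The weights and scale factors below are illustrative, not the paper's values.

def imitation_reward(agent_pose, ref_pose, agent_vel, ref_vel):
    pose_err = np.sum((agent_pose - ref_pose) ** 2)
    vel_err = np.sum((agent_vel - ref_vel) ** 2)
    pose_reward = np.exp(-2.0 * pose_err)   # 1.0 when the pose matches the clip exactly
    vel_reward = np.exp(-0.1 * vel_err)
    return 0.7 * pose_reward + 0.3 * vel_reward
```

The reinforcement learner then spends those millions of trials adjusting its policy to maximise this score, which is what eventually turns face-plants into backflips.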

[GIF: Bots trained across a wide variety of skills – Berkeley Artificial Intelligence Research]

Using this technique, the researchers were able to produce agents who behaved in a highly realistic, natural manner. Impressively, the bots were also able to manage never-before-seen conditions, such as challenging terrain or obstacles. This was an added bonus of the reinforcement learning, and not something the researchers had to work on specifically.

“We present a conceptually simple [reinforcement learning] framework that enables simulated characters to learn highly dynamic and acrobatic skills from reference motion clips, which can be provided in the form of mocap data [i.e. motion capture] recorded from human subjects,” writes Peng. “Given a single demonstration of a skill, such as a spin-kick or a backflip, our character is able to learn a robust policy to imitate the skill in simulation. Our policies produce motions that are nearly indistinguishable from mocap.” He adds: “We’re moving toward a virtual stuntman.”

[GIF: Simulated dragon – Berkeley Artificial Intelligence Research]

The researchers also used DeepMimic to create realistic movements for simulated lions, dinosaurs, and mythical beasts. They even created a virtual version of ATLAS, the humanoid robot voted most likely to destroy humanity. The platform could conceivably be used to produce more realistic computer animation, as well as for virtual testing of robots.

Source: After Millions of Trials, These Simulated Humans Learned to Do Perfect Backflips and Cartwheels

Jaywalkers under surveillance in Shenzhen soon to be punished via text messages

Intellifusion, a Shenzhen-based AI firm that provides technology for the city’s police to display the faces of jaywalkers on large LED screens at intersections, is now talking with local mobile phone carriers and social media platforms such as WeChat and Sina Weibo. The aim is a system in which offenders receive personal text messages as soon as they violate the rules, according to Wang Jun, the company’s director of marketing solutions.

“Jaywalking has always been an issue in China and can hardly be resolved just by imposing fines or taking photos of the offenders. But a combination of technology and psychology … can greatly reduce instances of jaywalking and will prevent repeat offences,” Wang said.

[…]

For the current system in Shenzhen, Intellifusion installed 7-megapixel cameras to capture photos of pedestrians crossing the road against traffic lights. Facial recognition technology identifies the individual from a database and displays a photo of the jaywalking offence, the family name of the offender and part of their government identification number on large LED screens above the pavement.

In the 10 months to February this year, as many as 13,930 jaywalking offenders were recorded and displayed on the LED screen at one busy intersection in Futian district, the Shenzhen traffic police announced last month.

Taking it a step further, in March the traffic police launched a webpage which displays photos, names and partial ID numbers of jaywalkers.

These measures have effectively reduced the number of repeat offenders, according to Wang.

Source: Jaywalkers under surveillance in Shenzhen soon to be punished via text messages | South China Morning Post

Wow, that’s a scary way to scan your entire population

AI Imagines Nude Paintings as Terrifying Pools of Melting Flesh

When Robbie Barrat trained an AI to study and reproduce classical nude paintings, he expected something at least recognizable. What the AI produced instead was unfamiliar and unsettling, but still intriguing. The “paintings” look like flesh-like ice cream, spilling into pools that only vaguely recall a woman’s body. Barrat told Gizmodo these meaty blobs, disturbing and unintentional as they are, may impact both art and AI.

“Before, you would be feeding the computer a set of rules it would execute perfectly, with no room for interpretation by the computer,” Barrat said via email. “Now with AI, it’s all about the machine’s interpretation of the dataset you feed it—in this case how it (strangely) interprets the nude portraits I fed it.”

AI’s influence is certainly more pronounced in this project than in most computer-generated art. That wasn’t what Barrat intended, but he says the results were much better this way.

“Would I want the results to be more realistic? Absolutely not,” he said. “I want to get AI to generate new types of art we haven’t seen before; not force some human perspective on it.”

Barrat explained the process of training the AI to produce imagery of a curving body from some surreal parallel universe:

“I used a dataset of thousands of nude portraits I scraped, along with techniques from a new paper that recently came out called ‘Progressive Growing of GANs’ to generate the images,” he said. “The generator tries to generate paintings that fool the discriminator, and the discriminator tries to learn how to tell the difference between ‘fake’ paintings that the generator feeds it, and real paintings from the dataset of nude portraits.”

The Francis Bacon-esque paintings were purely serendipitous.

“What happened with the nude portraits is that the generator figured it could just feed the discriminator blobs of flesh, and the discriminator wasn’t able to tell the difference between strange blobs of flesh and humans, so since the generator could consistently fool the discriminator by painting these strange forms of flesh instead of realistic nude portraits; both components stopped learning and getting better at painting.”
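For readers who haven’t met GANs before, the two-player game Barrat describes boils down to a short training loop. A toy sketch in PyTorch, with toy image sizes rather than the Progressive Growing of GANs setup he actually used:

```python
import torch
import torch.nn as nn

# A minimal GAN training loop illustrating the generator/discriminator game Barrat
# describes. Sizes are toy values; the actual project used Progressive Growing of
# GANs on scraped nude portraits, which is far more elaborate than this sketch.

latent_dim, img_dim = 64, 28 * 28
G = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, img_dim), nn.Tanh())
D = nn.Sequential(nn.Linear(img_dim, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

def train_step(real_images):                      # real_images: (batch, img_dim)
    batch = real_images.size(0)
    # Discriminator: tell real paintings from the generator's fakes.
    z = torch.randn(batch, latent_dim)
    fake = G(z).detach()
    d_loss = loss_fn(D(real_images), torch.ones(batch, 1)) + \
             loss_fn(D(fake), torch.zeros(batch, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # Generator: produce images the discriminator accepts as real.
    z = torch.randn(batch, latent_dim)
    g_loss = loss_fn(D(G(z)), torch.ones(batch, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()

print(train_step(torch.rand(16, img_dim) * 2 - 1))
```

The failure mode he describes is visible in a loop like this: once the discriminator can no longer separate flesh-coloured blobs from real portraits, both losses flatten out and neither network improves further.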

As Barrat pointed out on Twitter, this method of working with a computer program has some art history precedent. Having an AI execute the artist’s specific directions is reminiscent of instructional art—a conceptual art technique, best exemplified by Sol LeWitt, where artists provide specific instructions for others to create the artwork. (For example: Sol LeWitt’s Wall Drawing, Boston Museum: “On a wall surface, any continuous stretch of wall, using a hard pencil, place fifty points at random. The points should be evenly distributed over the area of the wall. All of the points should be connected by straight lines.”)

Giving the AI limited autonomy to create art may be more than just a novelty; it may eventually lead to a truly new form of generating art with entirely new subjectivities.

“I want to use AI to make its own new and original artworks, not just get AI to mimic things that people were making in the 1600’s.”

Source: AI Imagines Nude Paintings as Terrifying Pools of Melting Flesh

AI predicts your lifespan using activity tracking apps

Researchers can estimate your expected lifespan based on physiological traits like your genes or your circulating blood factors, but that’s not very practical on a grand scale. There may be a shortcut, however: the devices you already have on your body. Russian scientists have crafted an AI-based algorithm that uses the activity tracking from smartphones and smartwatches to estimate your lifespan with far greater precision than past models.

The team used a convolutional neural network to find the “biologically relevant” motion patterns in a large set of US health survey data and correlate that to both lifespans and overall health. It would look for not just step counts, but how often you switch between active and inactive periods — many of the other factors in your life, such as your sleeping habits and gym visits, are reflected in those switches. After that, it was just a matter of applying the understanding to a week’s worth of data from test subjects’ phones. You can even try it yourself through Gero Lifespan, an iPhone app that uses data from Apple Health, Fitbit and Rescuetime (a PC productivity measurement app) to predict your longevity.
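Conceptually, this is a 1-D convolutional network sliding over a week of activity counts. A minimal sketch, assuming minute-level step counts and made-up layer sizes (the real model and its training pipeline are more involved):

```python
import torch
import torch.nn as nn

# Hedged sketch of the idea: a small 1-D convolutional network reads a week of
# minute-by-minute step counts and regresses a health/lifespan score. Layer sizes
# and the single-channel input are illustrative, not the researchers' model.

week_minutes = 7 * 24 * 60

model = nn.Sequential(
    nn.Conv1d(1, 16, kernel_size=15, stride=5),   # pick up local activity bursts
    nn.ReLU(),
    nn.Conv1d(16, 32, kernel_size=15, stride=5),  # longer-range active/rest switching
    nn.ReLU(),
    nn.AdaptiveAvgPool1d(1),
    nn.Flatten(),
    nn.Linear(32, 1),                             # predicted risk / longevity score
)

steps = torch.rand(1, 1, week_minutes)            # one subject, one channel, one week
print(model(steps).item())
```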

This doesn’t provide a full picture of your health, as it doesn’t include your diet, genetics and other crucial factors. Doctors would ideally use both mobile apps and clinical analysis to give you a proper estimate, and the scientists are quick to acknowledge that what you see here isn’t completely ready for medical applications. The AI is still more effective than past approaches, though, and it could feed more accurate health-risk models, useful for everything from insurance companies (which already use activity tracking as an incentive) to the development of anti-aging treatments.

Source: AI predicts your lifespan using activity tracking apps

No idea what the percentages are though

Emmanuel Macron Q&A: France’s President Discusses Artificial Intelligence Strategy

On Thursday, Emmanuel Macron, the president of France, gave a speech laying out a new national strategy for artificial intelligence in his country. The French government will spend €1.5 billion ($1.85 billion) over five years to support research in the field, encourage startups, and collect data that can be used, and shared, by engineers. The goal is to start catching up to the US and China and to make sure the smartest minds in AI—hello Yann LeCun—choose Paris over Palo Alto. Directly after his talk, he gave an exclusive and extensive interview, entirely in English, to WIRED Editor-in-Chief Nicholas Thompson about the topic and why he has come to care so passionately about it.

[…]

AI will raise a lot of issues in ethics, in politics, it will question our democracy and our collective preferences. For instance, if you take healthcare: you can totally transform medical care making it much more predictive and personalized if you get access to a lot of data. We will open our data in France. I made this decision and announced it this afternoon. But the day you start dealing with privacy issues, the day you open this data and unveil personal information, you open a Pandora’s Box, with potential use cases that will not be increasing the common good and improving the way to treat you. In particular, it’s creating a potential for all the players to select you. This can be a very profitable business model: this data can be used to better treat people, it can be used to monitor patients, but it can also be sold to an insurer that will have intelligence on you and your medical risks, and could get a lot of money out of this information. The day we start to make such business out of this data is when a huge opportunity becomes a huge risk. It could totally dismantle our national cohesion and the way we live together. This leads me to the conclusion that this huge technological revolution is in fact a political revolution.

When you look at artificial intelligence today, the two leaders are the US and China. In the US, it is entirely driven by the private sector, large corporations, and some startups dealing with them. All the choices they will make are private choices that deal with collective values. That’s exactly the problem you have with Facebook and Cambridge Analytica or autonomous driving. On the other side, Chinese players collect a lot of data driven by a government whose principles and values are not ours. And Europe has not exactly the same collective preferences as US or China. If we want to defend our way to deal with privacy, our collective preference for individual freedom versus technological progress, integrity of human beings and human DNA, if you want to manage your own choice of society, your choice of civilization, you have to be able to be an acting part of this AI revolution. That’s the condition of having a say in designing and defining the rules of AI. That is one of the main reasons why I want to be part of this revolution and even to be one of its leaders. I want to frame the discussion at a global scale.

[…]

I want my country to be the place where this new perspective on AI is built, on the basis of interdisciplinarity: this means crossing maths, social sciences, technology, and philosophy. That’s absolutely critical. Because at one point in time, if you don’t frame these innovations from the start, a worst-case scenario will force you to deal with this debate down the line. I think privacy has been a hidden debate for a long time in the US. Now, it emerged because of the Facebook issue. Security was also a hidden debate of autonomous driving. Now, because we’ve had this issue with Uber, it rises to the surface. So if you don’t want to block innovation, it is better to frame it by design within ethical and philosophical boundaries. And I think we are very well equipped to do it, on top of developing the business in my country.

But I think as well that AI could totally jeopardize democracy. For instance, we are using artificial intelligence to organize the access to universities for our students. That puts a lot of responsibility on an algorithm. A lot of people see it as a black box, they don’t understand how the student selection process happens. But the day they start to understand that this relies on an algorithm, this algorithm has a specific responsibility. If you want, precisely, to structure this debate, you have to create the conditions of fairness of the algorithm and of its full transparency. I have to be confident for my people that there is no bias, at least no unfair bias, in this algorithm. I have to be able to tell French citizens, “OK, I encouraged this innovation because it will allow you to get access to new services, it will improve your lives—that’s a good innovation to you.” I have to guarantee there is no bias in terms of gender, age, or other individual characteristics, except if this is the one I decided on behalf of them or in front of them. This is a huge issue that needs to be addressed. If you don’t deal with it from the very beginning, if you don’t consider it is as important as developing innovation, you will miss something and at a point in time, it will block everything. Because people will eventually reject this innovation.

[…]

your algorithm and be sure that this is trustworthy.” The power of consumption society is so strong that it gets people to accept to provide a lot of personal information in order to get access to services largely driven by artificial intelligence on their apps, laptops and so on. But at some point, as citizens, people will say, “I want to be sure that all of this personal data is not used against me, but used ethically, and that everything is monitored. I want to understand what is behind this algorithm that plays a role in my life.” And I’m sure that a lot of startups or labs or initiatives which will emerge in the future, will reach out to their customers and say “I allow you to better understand the algorithm we use and the bias or non-bias.” I’m quite sure that’s one of the next waves coming in AI. I think it will increase the pressure on private players. These new apps or sites will be able to tell people: “OK! You can go to this company or this app because we cross-check everything for you. It’s safe,” or on the contrary: “If you go to this website or this app or this research model, it’s not OK, I have no guarantee, I was not able to check or access the right information about the algorithm”.

Source: Emmanuel Macron Q&A: France’s President Discusses Artificial Intelligence Strategy | WIRED

Is there alien life out there? Let’s turn to AI, problem solver du jour

A team of astroboffins have built artificial neural networks that estimate the probability of exoplanets harboring alien life.

The research was presented during a talk on Wednesday at the European Week of Astronomy and Space Science in Liverpool, United Kingdom.

The neural network works by classifying planets into five different conditions: the present-day Earth, the early Earth, Mars, Venus or Saturn’s moon Titan. All of these objects have a rocky core and an atmosphere, two requirements scientists believe are necessary for sustaining the right environments for life to blossom.

To train the system, the researchers collected spectral data describing which chemical elements are present in a planet’s atmosphere. They created hundreds of these “atmospheric profiles” as inputs, and the neural network gives a rough estimate of the probability that a particular planet might support life by classifying it into one of those five types.

If a planet is judged as Earth-like, it means it has a high probability of life. But if it’s classified as being closer to Venus, then the chances are lower.
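In other words, it’s a five-way classifier whose output probabilities are read as a rough habitability score. A small sketch of that idea, with an assumed number of spectral features and an arbitrary architecture rather than the Plymouth team’s actual network:

```python
import torch
import torch.nn as nn

# Sketch of the classification idea: a small network maps an "atmospheric profile"
# (spectral features) onto the five reference worlds, and the probability mass on
# the Earth-like classes is read as a rough proxy for habitability. The feature
# count and layer sizes are illustrative, not the Plymouth team's architecture.

CLASSES = ["present-day Earth", "early Earth", "Mars", "Venus", "Titan"]
n_spectral_features = 100

net = nn.Sequential(
    nn.Linear(n_spectral_features, 64), nn.ReLU(),
    nn.Linear(64, len(CLASSES)),
)

profile = torch.rand(1, n_spectral_features)      # one planet's spectrum
probs = torch.softmax(net(profile), dim=1).squeeze()
for name, p in zip(CLASSES, probs.tolist()):
    print(f"{name}: {p:.2f}")
```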

“We’re currently interested in these artificial neural networks (ANNs) for prioritising exploration for a hypothetical, intelligent, interstellar spacecraft scanning an exoplanet system at range,” said Christopher Bishop, a PhD student at Plymouth University.

“We’re also looking at the use of large area, deployable, planar Fresnel antennas to get data back to Earth from an interstellar probe at large distances. This would be needed if the technology is used in robotic spacecraft in the future.”

Experimental

At the moment, however, the ANN is more of a proof of concept. Angelo Cangelosi, professor of artificial intelligence and cognition at Plymouth University and the supervisor of the project, said initial results seem promising.

“Given the results so far, this method may prove to be extremely useful for categorizing different types of exoplanets using results from ground–based and near Earth observatories.”

A couple of exoplanet-hunting telescopes that will use spectroscopy to analyze planets’ chemical compositions are expected to launch in the near future.

NASA’s Transiting Exoplanet Survey Satellite (TESS) will monitor the brightest stars in the sky to look for periodic dips in brightness when an orbiting planet crosses its path. The European Space Agency has also announced Ariel, a mission that will study exoplanet atmospheres in the infrared.

The Kepler Space Telescope is already looking for new candidates – although it’s set to retire soon – and collects similar data. The hope is that analyzing the spectral data of exoplanets will help scientists choose better targets for future missions, in which spacecraft could make more detailed observations.

Source: Is there alien life out there? Let’s turn to AI, problem solver du jour • The Register

The thing about ML models is that shit in leads to shit out. We have no data on inhabited planets apart from Earth, so it seems to me that the assumptions these guys are making aren’t worth a damn.

Researchers develop device that can ‘hear’ your internal voice

Researchers have created a wearable device that can read people’s minds when they use an internal voice, allowing them to control devices and ask queries without speaking.

The device, called AlterEgo, can transcribe words that wearers verbalise internally but do not say out loud, using electrodes attached to the skin.

“Our idea was: could we have a computing platform that’s more internal, that melds human and machine in some ways and that feels like an internal extension of our own cognition?” said Arnav Kapur, who led the development of the system at MIT’s Media Lab.

Kapur describes the headset as an “intelligence-augmentation”, or IA, device, and it was presented at the Association for Computing Machinery’s Intelligent User Interface conference in Tokyo. It is worn around the jaw and chin, clipped over the top of the ear to hold it in place. Four electrodes under the white plastic device make contact with the skin and pick up the subtle neuromuscular signals that are triggered when a person verbalises internally. When someone says words inside their head, artificial intelligence within the device can match particular signals to particular words, feeding them into a computer.
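Mechanically, that last step is a small classifier mapping a short window of multi-channel electrode signals to a word from a limited vocabulary. A hedged sketch, with only the four-electrode channel count taken from the article and everything else assumed:

```python
import torch
import torch.nn as nn

# Hedged sketch of the matching step described above: a window of signals from the
# four electrodes is classified into a small vocabulary of silently spoken words.
# The window length, vocabulary and architecture are illustrative, not MIT's model.

VOCAB = ["up", "down", "left", "right", "select", "one", "two", "three"]
channels, window = 4, 250          # 4 electrodes, 250 samples per word

classifier = nn.Sequential(
    nn.Conv1d(channels, 32, kernel_size=9), nn.ReLU(),
    nn.Conv1d(32, 64, kernel_size=9), nn.ReLU(),
    nn.AdaptiveMaxPool1d(1), nn.Flatten(),
    nn.Linear(64, len(VOCAB)),
)

signal = torch.randn(1, channels, window)          # one silently spoken word
word = VOCAB[classifier(signal).argmax(dim=1).item()]
print("recognised:", word)
```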

[Video, 1:22: Watch the AlterEgo being demonstrated]

The computer can then respond through the device using a bone conduction speaker that plays sound into the ear without the need for an earphone to be inserted, leaving the wearer free to hear the rest of the world at the same time. The idea is to create an outwardly silent computer interface that only the wearer of the AlterEgo device can speak to and hear.

[…]

The AlterEgo device managed an average of 92% transcription accuracy in a 10-person trial, with about 15 minutes of customising to each person. That’s several percentage points below the 95%-plus accuracy that Google’s voice transcription service achieves with a traditional microphone, but Kapur says the system will improve in accuracy over time. The human threshold for voice word accuracy is thought to be around 95%.

Kapur and team are currently working on collecting data to improve recognition and widen the number of words AlterEgo can detect. It can already be used to control a basic user interface such as the Roku streaming system, moving and selecting content, and can recognise numbers, play chess and perform other basic tasks.

The eventual goal is to make interfacing with AI assistants such as Google’s Assistant, Amazon’s Alexa or Apple’s Siri less embarrassing and more intimate, allowing people to communicate with them in a manner that appears to be silent to the outside world – a system that sounds like science fiction but appears entirely possible.

The only downside is that users will have to wear a device strapped to their face, a barrier smart glasses such as Google Glass failed to overcome. But experts think the technology has much potential, not only in the consumer space for activities such as dictation but also in industry.

Source: Researchers develop device that can ‘hear’ your internal voice | Technology | The Guardian

IBM claims its machine learning library is 46x faster than TensorFlow • The Register

Analysis: IBM boasts that machine learning is not just quicker on its POWER servers than on TensorFlow in the Google Cloud, it’s 46 times quicker.

Back in February Google software engineer Andreas Sterbenz wrote about using Google Cloud Machine Learning and TensorFlow on click prediction for large-scale advertising and recommendation scenarios.

He trained a model to predict display ad clicks on Criteo Labs clicks logs, which are over 1TB in size and contain feature values and click feedback from millions of display ads.

Data pre-processing (60 minutes) was followed by the actual learning, using 60 worker machines and 29 parameter machines for training. The model took 70 minutes to train, with an evaluation loss of 0.1293. We understand this is a rough indicator of result accuracy.

Sterbenz then used different modelling techniques to get better results, reducing the evaluation loss, which all took longer, eventually using a deep neural network with three epochs (a measure of the number of times all of the training vectors are used once to update the weights), which took 78 hours.

[…]

Thomas Parnell and Celestine Dünner at IBM Research in Zurich used the same source data – Criteo Terabyte Click Logs, with 4.2 billion training examples and 1 million features – and the same ML model, logistic regression, but a different ML library. It’s called Snap Machine Learning.

They ran their session using Snap ML running on four Power System AC922 servers, meaning eight POWER9 CPUs and 16 Nvidia Tesla V100 GPUs. Instead of taking 70 minutes, it completed in 91.5 seconds, 46 times faster.

They prepared a chart showing their Snap ML result, the Google TensorFlow result and three others.

A 46x speed improvement over TensorFlow is not to be sneezed at. What did they attribute it to?

They say Snap ML features several hierarchical levels of parallelism to partition the workload among different nodes in a cluster, takes advantage of accelerator units, and exploits multi-core parallelism on the individual compute units:

  1. First, data is distributed across the individual worker nodes in the cluster
  2. On a node data is split between the host CPU and the accelerating GPUs with CPUs and GPUs operating in parallel
  3. Data is sent to the multiple cores in a GPU and the CPU workload is multi-threaded

Snap ML has nested hierarchical algorithmic features to take advantage of these three levels of parallelism.
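A runnable toy illustration of those three levels, using threads to stand in for cluster nodes and for the devices inside each node; the model is plain logistic regression on random data, not IBM’s Snap ML code:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

# Toy sketch of the partitioning idea only, not IBM's Snap ML implementation.

rng = np.random.default_rng(0)
X, y = rng.normal(size=(8000, 20)), rng.integers(0, 2, 8000)
w = np.zeros(20)

def logistic_grad(Xp, yp, w):
    p = 1.0 / (1.0 + np.exp(-Xp @ w))
    return Xp.T @ (p - yp) / len(yp)

def node_gradient(X_shard, y_shard):
    # Levels 2/3: on a node, split the shard across local "devices" (here: threads),
    # each computing a partial gradient over its chunk in parallel.
    chunks = np.array_split(np.arange(len(y_shard)), 4)
    with ThreadPoolExecutor() as pool:
        grads = list(pool.map(lambda idx: logistic_grad(X_shard[idx], y_shard[idx], w), chunks))
    return np.mean(grads, axis=0)

# Level 1: distribute the data across worker nodes and average their gradients.
node_shards = np.array_split(np.arange(len(y)), 4)
for _ in range(100):
    grads = [node_gradient(X[idx], y[idx]) for idx in node_shards]
    w -= 0.5 * np.mean(grads, axis=0)
```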

Source: IBM claims its machine learning library is 46x faster than TensorFlow • The Register

The Hilarious (and Terrifying?) Ways Algorithms Have Outsmarted Their Creators

As research into AI grows ever more ambitious and complex, these robot brains will challenge the fundamental assumptions of how we humans do things. And, as ever, the only true law of robotics is that computers will always do literally, exactly what you tell them to.

A paper recently published on arXiv highlights just a handful of the incredible and slightly terrifying ways that algorithms think. These AIs, designed to mimic evolution by simulating generation after generation of competing algorithms, conquered the problems posed by their human masters with strange, uncanny, and brilliant solutions.

The Surprising Creativity of Digital Evolution: A Collection of Anecdotes from the Evolutionary Computation and Artificial Life Research Communities covers some 27 anecdotes from various computer science projects and is worth a read on its own, but here are a few highlights:

  • A study designed to evolve moving creatures generated ‘hackers’ that would break their simulation by clipping into the ground and using the “free energy” of the simulation’s correction to speed towards their goal.
  • An AI project which pit programs against each other in games of five-in-a-row Tic-Tac-Toe on an infinitely expansive board surfaced the extremely successful method of requesting moves involving extremely long memory addresses which would crash the opponent’s computer and award a win by default.
  • A program designed to simulate efficient ways of braking an aircraft as it landed on an aircraft carrier learned that by maximizing the force on landing—the opposite of its actual goal—the variable holding that value would overflow and flip to zero, creating a practically catastrophic, but technically perfect solution.
  • A test that challenged a simulated robot to walk without allowing its feet to touch the ground saw the robot flip on its back and walk on its elbows (or knees?) instead.
  • A study to evolve a simulated creature that could jump as high as possible yielded top-heavy creatures on tiny poles that would fall over and spin in mid-air for a technically high ‘jump.’

While the most amusing examples are clearly ones where algorithms abused bugs in their simulations (essentially glitches in the Matrix that gave them superpowers), the paper outlines some surprising solutions that could have practical benefits as well. One algorithm invented a spinning-type movement for robots which would minimize the negative effects of inconsistent hardware between bots, for instance.

As the paper notes in its discussion—and you may already be thinking—these amusing stories also reflect the potential for evolutionary algorithms or neural networks to stumble upon solutions to problems that are outside-the-box in dangerous ways. They’re a funnier version of the classic AI nightmare where computers tasked with creating peace on Earth decide the most efficient solution is to exterminate the human race.

The solution, the paper suggests, is not fear but careful experimentation. As humans gain more experience training these sorts of algorithms and tweaking them along the way, experts develop a better sense of intuition. Still, as these anecdotes prove, it’s basically impossible to avoid unexpected results. The key is to be prepared – and to not hand over the nuclear arsenal to a robot for its very first test.

Source: The Hilarious (and Terrifying?) Ways Algorithms Have Outsmarted Their Creators

AI software that can reproduce like a living thing? Yup, boffins have only gone and done it • The Register

A pair of computer scientists have created a neural network that can self-replicate.

“Self-replication is a key aspect of biological life that has been largely overlooked in Artificial Intelligence systems,” they argue in a paper popped onto arXiv this month.

It’s an important process in reproduction for living things, and a key step in evolution through natural selection. Oscar Chang, first author of the paper and a PhD student at Columbia University, explained to The Register that the goal was to see if AI could be made continually self-improving by mimicking the biological self-replication process.

“The primary motivation here is that AI agents are powered by deep learning, and a self-replication mechanism allows for Darwinian natural selection to occur, so a population of AI agents can improve themselves simply through natural selection – just like in nature – if there was a self-replication mechanism for neural networks.”

The researchers compare their work to quines, a type of computer program that produces copies of its own source code. In neural networks, however, it isn’t the source code that is cloned but the weights – which determine the connections between the different neurons.

The researchers set up a “vanilla quine” network, a feed-forward system that produces its own weights as outputs. The vanilla quine network can also be used to self-replicate its weights and solve a task. They decided to use it for image classification on the MNIST dataset, where computers have to identify the correct digit from a set of handwritten numbers from zero to nine.
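The “vanilla quine” idea is easier to see in code: the network is trained so that, fed an encoding of one of its own weight coordinates, it outputs that weight’s value. A simplified sketch (the coordinate encoding, sizes and training schedule are assumptions, and the paper additionally adds the MNIST classification loss on top of this):

```python
import torch
import torch.nn as nn

# Hedged sketch of a "vanilla quine": a small network is trained so that, given an
# encoding of one of its own weight coordinates, it outputs that weight's value.
# Details are simplified relative to the paper.

hidden = 32
net = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(), nn.Linear(hidden, 1))

params = list(net.parameters())
n_weights = sum(p.numel() for p in params)
coords = torch.randn(n_weights, hidden)           # fixed random embedding per weight index

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(200):
    target = torch.cat([p.detach().flatten() for p in params])   # the network's own weights
    pred = net(coords).squeeze(1)                                 # its prediction of them
    loss = ((pred - target) ** 2).mean()                          # self-replication loss
    opt.zero_grad(); loss.backward(); opt.step()
    # The target moves every step, since updating the weights changes what the
    # network has to reproduce -- part of what makes self-replication hard.
```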

[…]

The test network used 60,000 MNIST images for training and another 10,000 for testing. After 30 runs, the quine network had an accuracy rate of 90.41 per cent. It’s not a bad start, but its performance doesn’t really compare to larger, more sophisticated image recognition models out there.

The paper states that the “self-replication occupies a significant portion of the neural network’s capacity.” In other words, the neural network cannot focus on the image recognition task if it also has to self-replicate.

“This is an interesting finding: it is more difficult for a network that has increased its specialization at a particular task to self-replicate. This suggests that the two objectives are at odds with each other,” the paper said.

Chang explained he wasn’t sure why this happened, but it’s what happens in nature too.

Source: AI software that can reproduce like a living thing? Yup, boffins have only gone and done it • The Register

MIT builds neural network chip with 95% reduction in power consumption, allowing it to be used in a mobile

Most recent advances in artificial-intelligence systems such as speech- or face-recognition programs have come courtesy of neural networks, densely interconnected meshes of simple information processors that learn to perform tasks by analyzing huge sets of training data.

But neural nets are large, and their computations are energy intensive, so they’re not very practical for handheld devices. Most smartphone apps that rely on neural nets simply upload data to internet servers, which process it and send the results back to the phone.

Now, MIT researchers have developed a special-purpose chip that increases the speed of neural-network computations by three to seven times over its predecessors, while reducing power consumption 94 to 95 percent. That could make it practical to run neural networks locally on smartphones or even to embed them in household appliances.

“The general processor model is that there is a memory in some part of the chip, and there is a processor in another part of the chip, and you move the data back and forth between them when you do these computations,” says Avishek Biswas, an MIT graduate student in electrical engineering and computer science, who led the new chip’s development.

“Since these machine-learning algorithms need so many computations, this transferring back and forth of data is the dominant portion of the energy consumption. But the computation these algorithms do can be simplified to one specific operation, called the dot product. Our approach was, can we implement this dot-product functionality inside the memory so that you don’t need to transfer this data back and forth?”
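For anyone unsure what that “one specific operation” looks like, it’s just this: a weighted sum of inputs, which every neuron in a network computes over and over. Ordinary numpy below, purely to make the operation concrete; the MIT chip performs the equivalent sum-of-products in analogue form inside the memory array.

```python
import numpy as np

# Each artificial neuron computes a dot product between its weight vector and its
# input vector; a whole layer is just many of these at once.

inputs = np.array([0.2, 0.7, 0.1, 0.9])
weights = np.array([0.5, -0.3, 0.8, 0.1])

activation = np.dot(weights, inputs)     # multiply element-wise, then sum
print(activation)
```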

Source: Neural networks everywhere | MIT News

A video game-playing AI beat Q*bert in a way no one’s ever seen before

A paper published this week comes from a trio of machine learning researchers at the University of Freiburg in Germany. They were exploring a particular method of teaching AI agents to navigate video games (in this case, desktop ports of old Atari titles from the 1980s) when they discovered something odd. The software they were testing found a bug in the port of the retro video game Q*bert that allowed it to rack up near-infinite points.

As the trio describe in the paper, published on pre-print server arXiv, the agent was learning how to play Q*bert when it discovered an “interesting solution.” Normally, in Q*bert, players jump from cube to cube, with this action changing the platforms’ colors. Change all the colors (and dispatch some enemies), and you’re rewarded with points and sent to the next level. The AI found a better way, though:

First, it completes the first level and then starts to jump from platform to platform in what seems to be a random manner. For a reason unknown to us, the game does not advance to the second round but the platforms start to blink and the agent quickly gains a huge amount of points (close to 1 million for our episode time limit).
[…]
It’s important to note, though, that the agent is not approaching this problem in the same way that a human would. It’s not actively looking for exploits in the game with some Matrix-like computer-vision. The paper is actually a test of a broad category of AI research known as “evolutionary algorithms.” This is pretty much what it sounds like, and involves pitting algorithms against one another to see which can complete a given task best, then adding small tweaks (or mutations) to the survivors to see if they then fare better. This way, the algorithms slowly get better and better.
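Stripped down, that evolutionary loop is only a few lines: evaluate a population on the game, keep the top scorers, and refill the population with mutated copies. A toy sketch, where a stand-in fitness function replaces the Atari emulator:

```python
import numpy as np

# Minimal sketch of the evolutionary-algorithm loop described above: evaluate a
# population, keep the fittest, copy them with small random mutations, repeat.
# The "game" here is a toy fitness function standing in for an Atari emulator.

rng = np.random.default_rng(0)

def play_game(policy_params):
    # Placeholder for "run the policy in the emulator and return the score".
    return -np.sum((policy_params - 3.0) ** 2)

population = [rng.normal(size=8) for _ in range(50)]
for generation in range(200):
    scores = [play_game(p) for p in population]
    survivors = [population[i] for i in np.argsort(scores)[-10:]]     # keep the best 10
    population = [s + rng.normal(scale=0.1, size=8)                   # mutated copies
                  for s in survivors for _ in range(5)]

print(max(play_game(p) for p in population))
```

If the scoring function has a loophole, as the Q*bert port evidently did, a loop like this will happily optimise the loophole rather than the game.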

Source: A video game-playing AI beat Q*bert in a way no one’s ever seen before – The Verge

AI models leak secret data too easily

A paper released on arXiv last week by a team of researchers from the University of California, Berkeley, National University of Singapore, and Google Brain reveals just how vulnerable deep learning is to information leakage.

The researchers labelled the problem “unintended memorization” and explained that it happens if miscreants can access the model’s code and apply a variety of search algorithms. That’s not an unrealistic scenario, considering the code for many models is available online. And it means that text messages, location histories, emails or medical data can be leaked.

Nicholas Carlini, first author of the paper and a PhD student at UC Berkeley, told The Register that the team “don’t really know why neural networks memorize these secrets right now”.

“At least in part, it is a direct response to the fact that we train neural networks by repeatedly showing them the same training inputs over and over and asking them to remember these facts. At the end of training, a model might have seen any given input ten or twenty times, or even a hundred, for some models.

“This allows them to know how to perfectly label the training data – because they’ve seen it so much – but don’t know how to perfectly label other data. What we exploit to reveal these secrets is the fact that models are much more confident on data they’ve seen before,” he explained.
Secrets worth stealing are the easiest to nab

In the paper, the researchers showed how easy it is to steal secrets such as social security and credit card numbers, which can be readily identified in a neural network’s training data.

They used the example of an email dataset comprising several hundred thousand emails from different senders containing sensitive information. This was split by sender, keeping those who had sent at least one piece of secret data, and used to train a two-layer long short-term memory (LSTM) network to generate the next character in a sequence.
[…]
The chances of sensitive data becoming available are also raised when the miscreant knows the general format of the secret. Credit card numbers, phone numbers and social security numbers all follow the same template with a limited number of digits – a property the researchers call “low entropy”.
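That low entropy is what makes the search practical: an attacker can enumerate every candidate that fits the known format and rank them by how confident the model is about each one. A sketch of the ranking logic, with a dummy scoring function standing in for queries to the real trained model:

```python
import itertools

# Sketch of the format-constrained search described above. `log_likelihood` is a
# stand-in for scoring text with the target language model; the dummy below just
# illustrates the ranking logic, pretending the model is suspiciously confident
# about one memorized sequence.

def log_likelihood(text):
    return 0.0 if "4271" in text else -10.0

candidates = ("".join(digits) for digits in itertools.product("0123456789", repeat=4))
ranked = sorted(candidates, key=lambda c: log_likelihood(f"my code is {c}"), reverse=True)
print(ranked[0])   # the memorized secret floats to the top
```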
[…]
Luckily, there are ways to get around the problem. The researchers recommend developers use “differential privacy algorithms” to train models. Companies like Apple and Google already employ these methods when dealing with customer data.
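The core of those differentially private training methods is DP-SGD style: clip each example’s gradient so no single record can dominate, then add noise before updating. A minimal sketch with illustrative clip and noise values, not a production implementation:

```python
import torch
import torch.nn as nn

# Minimal sketch of the DP-SGD idea behind differentially private training:
# clip each example's gradient, add noise, then apply the averaged update.
# The clip norm, noise scale and tiny model are illustrative values only.

model = nn.Linear(10, 2)
loss_fn = nn.CrossEntropyLoss()
clip_norm, noise_std, lr = 1.0, 0.5, 0.1

def private_step(batch_x, batch_y):
    summed = [torch.zeros_like(p) for p in model.parameters()]
    for x, y in zip(batch_x, batch_y):                          # per-example gradients
        model.zero_grad()
        loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0)).backward()
        grads = [p.grad.detach().clone() for p in model.parameters()]
        norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
        scale = torch.clamp(clip_norm / (norm + 1e-6), max=1.0)  # bound one record's influence
        for s, g in zip(summed, grads):
            s += g * scale
    with torch.no_grad():
        for p, s in zip(model.parameters(), summed):
            noisy = (s + noise_std * clip_norm * torch.randn_like(s)) / len(batch_x)
            p -= lr * noisy                                       # noisy, averaged update

private_step(torch.randn(8, 10), torch.randint(0, 2, (8,)))
```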

Private information is scrambled and randomised so that it is difficult to reproduce it. Dawn Song, co-author of the paper and a professor in the department of electrical engineering and computer sciences at UC Berkeley, told us the following: […]

Source: Boffins baffled as AI training leaks secrets to canny thieves • The Register

Amadeus invests in CrowdVision to help airports manage growing passenger volumes using AI camera tech

CrowdVision is an early stage company that uses computer vision software and artificial intelligence to help airports monitor the flow of passengers in real time to minimise queues and more efficiently manage resources. The software is designed to comply fully with data privacy and security legislation.

CrowdVision data improves plans and can help airports react decisively to keep travellers moving and make their experience more enjoyable. CrowdVision’s existing airport customers are benefiting from reduced queues and waiting times, leaving passengers to spend more time and more money in retail areas. Others have optimised allocation of staff, desks, e-gates and security lanes to make the most of their existing infrastructure and postpone major capital expenditure on expansions.

Source: Amadeus invests in CrowdVision to help airports manage growing passenger volumes

Google: 60.3% of potentially harmful Android apps in 2017 were detected via machine learning

When Google shared earlier this year that more than 700,000 apps were removed from Google Play in 2017 for violating the app store’s policies (a 70 percent year-over-year increase), the company credited its implementation of machine learning models and techniques to detect abusive app content and behaviors such as impersonation, inappropriate content, or malware.

But the company did not share any details. Now we’re learning that 6 out of every 10 detections were thanks to machine learning. Oh, and the team says “we expect this to increase in the future.”

Every day, Play Protect automatically reviews more than 50 billion apps — these automatic reviews led to the removal of nearly 39 million PHAs last year, Google shared.

Source: Google: 60.3% of potentially harmful Android apps in 2017 were detected via machine learning | VentureBeat

Artificial intelligence: Commission kicks off work on marrying cutting-edge technology and ethical standards

The Commission is setting up a group on artificial intelligence to gather expert input and rally a broad alliance of diverse stakeholders.

The expert group will also draw up a proposal for guidelines on AI ethics, building on today’s statement by the European Group on Ethics in Science and New Technologies.

From better healthcare to safer transport and more sustainable farming, artificial intelligence (AI) can bring major benefits to our society and economy. And yet it raises questions about the impact of AI on the future of work and on existing legislation. This calls for a wide, open and inclusive discussion on how to use and develop artificial intelligence both successfully and in an ethically sound way.
[…]
Today the Commission has opened applications to join an expert group in artificial intelligence which will be tasked to:

  • advise the Commission on how to build a broad and diverse community of stakeholders in a “European AI Alliance”;
  • support the implementation of the upcoming European initiative on artificial intelligence (April 2018);
  • come forward by the end of the year with draft guidelines for the ethical development and use of artificial intelligence based on the EU’s fundamental rights. In doing so, it will consider issues such as fairness, safety, transparency, the future of work, democracy and more broadly the impact on the application of the Charter of Fundamental Rights. The guidelines will be drafted following a wide consultation and building on today’s statement by the European Group on Ethics in Science and New Technologies (EGE), an independent advisory body to the European Commission.

Source: European Commission – PRESS RELEASES – Press release – Artificial intelligence: Commission kicks off work on marrying cutting-edge technology and ethical standards

The “Black Mirror” scenarios that are leading some experts to call for more secrecy on AI – MIT Technology Review

A new report by more than 20 researchers from the Universities of Oxford and Cambridge, OpenAI, and the Electronic Frontier Foundation warns that the same technology creates new opportunities for criminals, political operatives, and oppressive governments – so much so that some AI research may need to be kept secret.

Included in the report, The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation, are four dystopian vignettes involving artificial intelligence that seem taken straight out of the Netflix science fiction show Black Mirror.

Source: The “Black Mirror” scenarios that are leading some experts to call for more secrecy on AI – MIT Technology Review

This is completely ridiculous. The knowledge is out there and if not, will be stolen. In that case, if you don’t know about potential attack vectors, you are completely defenseless against them and so are security firms trying to help you.

Besides this, basing security on Movie Plots you can think up (and I’m pretty sure any reader can think up loads more, quite easily!) doesn’t work, because then you are vulnerable to any of the movie plots that others thought up and you didn’t.

Good security is basic and intrinsic. AI / ML is here and we need a solid discussion in our societies as to how we want it to impact us, instead of all this cold war fear mongering.

IBM Watson to generate sales solutions

“We’ve trained Watson on our standard solutions and offerings, plus all the prior solutions IBM has designed for large enterprises,” the corporate files state. “This means we can review a client’s RFP [request for proposal] and come up with a new proposed architecture and technical solution design for a state of the art system that can run enterprise businesses at scale.” Proposed solutions will be delivered “in minutes,” it is claimed.
[…]
IBM is not leaving all the work to Watson: a document we’ve seen also details “strong governance processes to ensure high quality solutions are delivered globally.”

Big Blue’s explanation for cognitive, er, solutioning’s role is that it will be “greatly aiding the work of the Technical Solutions Managers” rather than replacing them.

Source: If you don’t like what IBM is pitching, blame Watson: It’s generating sales ‘solutions’ now • The Register

Missing data hinder replication of artificial intelligence studies

Last year, computer scientists at the University of Montreal (U of M) in Canada were eager to show off a new speech recognition algorithm, and they wanted to compare it to a benchmark, an algorithm from a well-known scientist. The only problem: The benchmark’s source code wasn’t published. The researchers had to recreate it from the published description. But they couldn’t get their version to match the benchmark’s claimed performance, says Nan Rosemary Ke, a Ph.D. student in the U of M lab. “We tried for 2 months and we couldn’t get anywhere close.”
[…]
The most basic problem is that researchers often don’t share their source code. At the AAAI meeting, Odd Erik Gundersen, a computer scientist at the Norwegian University of Science and Technology in Trondheim, reported the results of a survey of 400 algorithms presented in papers at two top AI conferences in the past few years. He found that only 6% of the presenters shared the algorithm’s code. Only a third shared the data they tested their algorithms on, and just half shared “pseudocode”—a limited summary of an algorithm. (In many cases, code is also absent from AI papers published in journals, including Science and Nature.)
[…]
Assuming you can get and run the original code, it still might not do what you expect. In the area of AI called machine learning, in which computers derive expertise from experience, the training data for an algorithm can influence its performance. Ke suspects that not knowing how the speech-recognition benchmark was trained was what tripped up her group. “There’s randomness from one run to another,” she says. You can get “really, really lucky and have one run with a really good number,” she adds. “That’s usually what people report.”
[…]
Henderson’s experiment was conducted in a test bed for reinforcement learning algorithms called Gym, created by OpenAI, a nonprofit based in San Francisco, California. John Schulman, a computer scientist at OpenAI who helped create Gym, says that it helps standardize experiments. “Before Gym, a lot of people were working on reinforcement learning, but everyone kind of cooked up their own environments for their experiments, and that made it hard to compare results across papers,” he says.

IBM Research presented another tool at the AAAI meeting to aid replication: a system for recreating unpublished source code automatically, saving researchers days or weeks of effort. It’s a neural network—a machine learning algorithm made of layers of small computational units, analogous to neurons—that is designed to recreate other neural networks. It scans an AI research paper looking for a chart or diagram describing a neural net, parses those data into layers and connections, and generates the network in new code. The tool has now reproduced hundreds of published neural networks, and IBM is planning to make them available in an open, online repository.

Source: Missing data hinder replication of artificial intelligence studies | Science | AAAS

New AI model fills in blank spots in photos

The technology was developed by a team led by Hiroshi Ishikawa, a professor at Japan’s Waseda University. It uses convolutional neural networks, a type of deep learning, to predict missing parts of images. The technology could be used in photo-editing apps. It can also be used to generate 3-D images from real 2-D images.

The team at first prepared some 8 million images of real landscapes, human faces and other subjects. Using special software, the team generated numerous versions for each image, randomly adding artificial blanks of various shapes, sizes and positions. With all the data, the model took three months to learn how to predict the blanks so that it could fill them in and make the resultant images look identical to the originals.
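The data-generation step is simple to picture: blank out a random region of a complete image and keep the original as the reconstruction target. A sketch with assumed mask sizes (rectangles only here, whereas the article describes blanks of various shapes):

```python
import numpy as np

# Sketch of the training-data step described above: take a complete image, blank
# out a randomly sized and positioned region, and keep the original as the target
# the network must reconstruct. Mask shapes and sizes here are illustrative.

rng = np.random.default_rng(0)

def make_training_pair(image):
    h, w = image.shape[:2]
    mh, mw = rng.integers(h // 8, h // 3), rng.integers(w // 8, w // 3)
    top, left = rng.integers(0, h - mh), rng.integers(0, w - mw)
    damaged = image.copy()
    damaged[top:top + mh, left:left + mw] = 0.0     # the artificial blank to fill in
    mask = np.zeros((h, w), dtype=bool)
    mask[top:top + mh, left:left + mw] = True
    return damaged, mask, image                      # input, hole location, target

damaged, mask, target = make_training_pair(rng.random((256, 256, 3)))
```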

The model’s learning algorithm first predicts and fills in blanks. It then evaluates how consistent the added part is with its surroundings.

Source: New AI model fills in blank spots in photos- Nikkei Asian Review

Moth brain uploaded to computer, taught to recognise numbers

MothNet’s computer code, according to the boffins, contains layers of artificial neurons to simulate the bug’s antennal lobe and mushroom body, which are common parts of insect brains.

Crucially, instead of recognizing smells, the duo taught MothNet to identify handwritten digits in the MNIST dataset. This database is often used to train and test pattern recognition in computer vision applications.

The academics used supervised learning to train MothNet, feeding it about 15 to 20 images of each digit from zero to nine, and rewarding it when it recognized the numbers correctly.

Receptor neurons in the artificial brain processed the incoming images, and passed the information down to the antennal lobe, which learned the features of each number. This lobe was connected, by a set of projection neurons, to the sparse mushroom body. This section was wired up to extrinsic neurons, each ultimately representing an individual integer between zero and nine.
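Structurally, that pipeline is a few stacked layers with a sparsity constraint in the middle. A sketch of its shape, with assumed layer sizes and with ordinary feed-forward layers standing in for MothNet’s biologically inspired, reward-modulated learning rules (it does not learn by backprop the way this sketch implies):

```python
import torch
import torch.nn as nn

# Structural sketch only: receptor units feed an antennal-lobe layer, which projects
# into a much larger, sparsely active mushroom-body layer, read out by ten digit
# neurons. Layer sizes are illustrative, not the paper's.

class MothNetSketch(nn.Module):
    def __init__(self, n_receptors=85, n_lobe=85, n_mushroom=4000, n_digits=10):
        super().__init__()
        self.antennal_lobe = nn.Linear(n_receptors, n_lobe)
        self.projection = nn.Linear(n_lobe, n_mushroom)
        self.readout = nn.Linear(n_mushroom, n_digits)

    def forward(self, x):
        lobe = torch.relu(self.antennal_lobe(x))
        mushroom = torch.relu(self.projection(lobe))
        # Keep only the most active mushroom-body units to mimic sparse firing.
        k = max(1, mushroom.shape[1] // 20)
        topk = torch.topk(mushroom, k, dim=1)
        sparse = torch.zeros_like(mushroom).scatter(1, topk.indices, topk.values)
        return self.readout(sparse)

model = MothNetSketch()
print(model(torch.rand(1, 85)).argmax(dim=1))
```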
[…]
MothNet achieved 75 per cent to 85 per cent accuracy, the paper stated, despite relatively few training examples, seemingly outperforming more traditional neural networks when given the same amount of training data.
[…]
It shows that the simplest biological neural network of an insect brain can be taught simple image recognition tasks, and potentially exceed other models when training examples and processing resources are scarce. The researchers believe that these biological neural networks (BNNs) can be “combined and stacked into larger, deeper neural nets.”

Source: Roses are red, are you single, we wonder? ‘Cos this moth-brain AI can read your phone number • The Register

Look out, Wiki-geeks. Now Google trains AI to write Wikipedia articles

A paper, out last month and just accepted for this year’s International Conference on Learning Representations (ICLR) in April, describes just how difficult text summarization really is.

A few companies have had a crack at it. Salesforce trained a recurrent neural network with reinforcement learning to take information and retell it in a nutshell, and the results weren’t bad.

However, the computer-generated sentences are simple and short; they lack the creative flair and rhythm of text written by humans. Google Brain’s latest effort is slightly better: the sentences are longer and seem more natural.
[…]
The model works by taking the top ten web pages of a given subject – excluding the Wikipedia entry – or scraping information from the links in the references section of a Wikipedia article. Most of the selected pages are used for training, and a few are kept back to develop and test the system.

The paragraphs from each page are ranked, and the text from all the pages is combined into one long document. The text is then encoded and shortened by splitting it into 32,000 individual words, which are used as input.

This is then fed into an abstractive model, where the long sentences in the input are cut shorter. It’s a clever trick used to both create and summarize text. The generated sentences are taken from the earlier extraction phase and aren’t built from scratch, which explains why the structure is pretty repetitive and stiff.
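So the pipeline is extract-then-abstract. A toy sketch of the extractive half, using a crude word-overlap relevance score and a word budget in place of whatever ranking Google Brain actually used:

```python
import re
from collections import Counter

# Toy sketch of the extractive stage described above: rank paragraphs from the
# source pages by a crude relevance score against the topic, then concatenate the
# best ones up to a fixed word budget before handing them to the abstractive model.
# The scoring function and budget are illustrative, not Google Brain's method.

WORD_BUDGET = 32_000

def relevance(paragraph, topic):
    words = Counter(re.findall(r"[a-z]+", paragraph.lower()))
    return sum(words[w] for w in re.findall(r"[a-z]+", topic.lower()))

def build_input(paragraphs, topic):
    ranked = sorted(paragraphs, key=lambda p: relevance(p, topic), reverse=True)
    selected, used = [], 0
    for p in ranked:
        n = len(p.split())
        if used + n > WORD_BUDGET:
            break
        selected.append(p)
        used += n
    return " ".join(selected)      # this long document is fed to the abstractive model

print(build_input(["Paris is the capital of France.", "Cats sleep a lot."], "France"))
```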

Mohammad Saleh, co-author of the paper and a software engineer in Google AI’s team, told The Register: “The extraction phase is a bottleneck that determines which parts of the input will be fed to the abstraction stage. Ideally, we would like to pass all the input from reference documents.

“Designing models and hardware that can support longer input sequences is currently an active area of research that can alleviate these limitations.”

We are still a very long way off from effective text summarization or generation. And while the Google Brain project is rather interesting, it would probably be unwise to use a system like this to automatically generate Wikipedia entries. For now, anyway.

Source: Look out, Wiki-geeks. Now Google trains AI to write Wikipedia articles • The Register