The Linkielist

Linking ideas with the world


This AI-Controlled Roach Breeding Site Is a Nightmare Factory

In the city of Xichang, located in the southwestern Sichuan province, there is a massive, artificial intelligence-powered roach breeding farm that is producing more than six billion cockroaches per year.

The facility, which is described by the South China Morning Post as a multi-story building about the size of two sports fields, is being operated by Chengdu-based medicine maker Gooddoctor Pharmaceutical Group. Its existence raises a number of questions like, “Oh god, why?” and “Who asked for this monstrosity?”

Inside the breeding site, the environment is described as “warm, humid, and dark” all year round. The layout is wide open, allowing the roaches to roam around freely, find food and water, and reproduce whenever and wherever the right mood strikes.

The insect sex pit is managed by what the South China Morning Post describes as a “smart manufacturing system” that is controlled primarily by algorithms. The system is in charge of analyzing more than 80 categories of data collected from throughout the facility. Everything from the temperature to the level of food consumption is monitored by AI, which is programmed to learn from historical data to determine the best conditions for peak roach fornication.

The billions of roaches that pass through the facility each year never get to see the light of day. From their birth inside the building until their death months or years later, they are locked within the walls of the moist coitus cabin.

Each and every one of the insects is eventually fed into machines and crushed up to be used in a “healing potion” manufactured by the pharmaceutical company responsible for the facility.

The potion—which is described as having a tea-like color, a slightly sweet taste, and a fishy smell—sells for about $8 for two 100ml bottles. While it is used primarily as a fix for stomach issues, the medicine can be prescribed by doctors for just about anything.

Source: This AI-Controlled Roach Breeding Site Is a Nightmare Factory

‘Forget the Facebook leak’: China is mining data directly from workers’ brains on an industrial scale

The workers wear caps to monitor their brainwaves, data that management then uses to adjust the pace of production and redesign workflows, according to the company.

The company said it could increase the overall efficiency of the workers by manipulating the frequency and length of break times to reduce mental stress.

Hangzhou Zhongheng Electric is just one example of the large-scale application of brain surveillance devices to monitor people’s emotions and other mental activities in the workplace, according to scientists and companies involved in the government-backed projects.

Concealed in regular safety helmets or uniform hats, these lightweight, wireless sensors constantly monitor the wearer’s brainwaves and stream the data to computers that use artificial intelligence algorithms to detect emotional spikes such as depression, anxiety or rage.

The technology is in widespread use around the world but China has applied it on an unprecedented scale in factories, public transport, state-owned companies and the military to increase the competitiveness of its manufacturing industry and to maintain social stability.

It has also raised concerns about the need for regulation to prevent abuses in the workplace.

The technology is also in use in Hangzhou at State Grid Zhejiang Electric Power, where it has boosted company profits by about 2 billion yuan (US$315 million) since it was rolled out in 2014, according to Cheng Jingzhou, an official overseeing the company’s emotional surveillance programme.

“There is no doubt about its effect,” Cheng said.

Source: ‘Forget the Facebook leak’: China is mining data directly from workers’ brains on an industrial scale | South China Morning Post

Revealed: how bookies use AI to keep gamblers hooked | Technology | The Guardian

The gambling industry is increasingly using artificial intelligence to predict consumer habits and personalise promotions to keep gamblers hooked, industry insiders have revealed. Current and former gambling industry employees have described how people’s betting habits are scrutinised and modelled to manipulate their future behaviour.

“The industry is using AI to profile customers and predict their behaviour in frightening new ways,” said Asif, a digital marketer who previously worked for a gambling company. “Every click is scrutinised in order to optimise profit, not to enhance a user’s experience.”

“I’ve often heard people wonder about how they are targeted so accurately and it’s no wonder because it’s all hidden in the small print.”

Publicly, gambling executives boast of increasingly sophisticated advertising keeping people betting, while privately conceding that some are more susceptible to gambling addiction when bombarded with these types of bespoke ads and incentives. Gamblers’ every click, page view and transaction is scientifically examined so that ads statistically more likely to work can be pushed through Google, Facebook and other platforms.

[…]

Last August, the Guardian revealed the gambling industry uses third-party companies to harvest people’s data, helping bookmakers and online casinos target people on low incomes and those who have stopped gambling.

Despite condemnation from MPs, experts and campaigners, such practices remain an industry norm.

“You can buy email lists with more than 100,000 people’s emails and phone numbers from data warehouses who regularly sell data to help market gambling promotions,” said Brian. “They say it’s all opted in but people haven’t opted in at all.”

In this way, among others, gambling companies and advertisers create detailed customer profiles including masses of information about their interests, earnings, personal details and credit history.

[…]

Elsewhere, there are plans to geolocate customers in order to identify when they arrive at stadiums so they can be prompted via texts to bet on the game they are about to watch.

The gambling industry earned £14bn in 2016, £4.5bn of which came from online betting, and it is pumping some of that money into making its products more sophisticated and, in effect, addictive.

Source: Revealed: how bookies use AI to keep gamblers hooked | Technology | The Guardian

Europe divided over robot ‘personhood’

While autonomous robots with humanlike, all-encompassing capabilities are still decades away, European lawmakers, legal experts and manufacturers are already locked in a high-stakes debate about their legal status: whether it’s these machines or human beings who should bear ultimate responsibility for their actions.

The battle goes back to a paragraph of text, buried deep in a European Parliament report from early 2017, which suggests that self-learning robots could be granted “electronic personalities.” Such a status could allow robots to be insured individually and be held liable for damages if they go rogue and start hurting people or damaging property.

Those pushing for such a legal change, including some manufacturers and their affiliates, say the proposal is common sense. Legal personhood would not make robots virtual people who can get married and benefit from human rights, they say; it would merely put them on par with corporations, which already have status as “legal persons,” and are treated as such by courts around the world.

Source: Europe divided over robot ‘personhood’ – POLITICO

Google uses AI to separate out audio from a single person in a noisy video

People are remarkably good at focusing their attention on a particular person in a noisy environment, mentally “muting” all other voices and sounds. Known as the cocktail party effect, this capability comes naturally to us humans. However, automatic speech separation — separating an audio signal into its individual speech sources — while a well-studied problem, remains a significant challenge for computers. In “Looking to Listen at the Cocktail Party”, we present a deep learning audio-visual model for isolating a single speech signal from a mixture of sounds such as other voices and background noise. In this work, we are able to computationally produce videos in which speech of specific people is enhanced while all other sounds are suppressed. Our method works on ordinary videos with a single audio track, and all that is required from the user is to select the face of the person in the video they want to hear, or to have such a person be selected algorithmically based on context. We believe this capability can have a wide range of applications, from speech enhancement and recognition in videos, through video conferencing, to improved hearing aids, especially in situations where there are multiple people speaking.

A unique aspect of our technique is in combining both the auditory and visual signals of an input video to separate the speech. Intuitively, movements of a person’s mouth, for example, should correlate with the sounds produced as that person is speaking, which in turn can help identify which parts of the audio correspond to that person. The visual signal not only improves the speech separation quality significantly in cases of mixed speech (compared to speech separation using audio alone, as we demonstrate in our paper), but, importantly, it also associates the separated, clean speech tracks with the visible speakers in the video.

The input to our method is a video with one or more people speaking, where the speech of interest is interfered by other speakers and/or background noise. The output is a decomposition of the input audio track into clean speech tracks, one for each person detected in the video.

An Audio-Visual Speech Separation Model

To generate training examples, we started by gathering a large collection of 100,000 high-quality videos of lectures and talks from YouTube. From these videos, we extracted segments with clean speech (e.g. no mixed music, audience sounds or other speakers) and with a single speaker visible in the video frames. This resulted in roughly 2000 hours of video clips, each of a single person visible to the camera and talking with no background interference. We then used this clean data to generate “synthetic cocktail parties” — mixtures of face videos and their corresponding speech from separate video sources, along with non-speech background noise we obtained from AudioSet. Using this data, we were able to train a multi-stream convolutional neural network-based model to split the synthetic cocktail mixture into separate audio streams for each speaker in the video. The inputs to the network are visual features extracted from the face thumbnails of detected speakers in each frame, and a spectrogram representation of the video’s soundtrack. During training, the network learns (separate) encodings for the visual and auditory signals, then it fuses them together to form a joint audio-visual representation. With that joint representation, the network learns to output a time-frequency mask for each speaker. The output masks are multiplied by the noisy input spectrogram and converted back to a time-domain waveform to obtain an isolated, clean speech signal for each speaker. For full details, see our paper.

Our multi-stream, neural network-based model architecture.
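For intuition, here is a heavily simplified sketch of a multi-stream, mask-based separation model of the kind described above. This is a toy in PyTorch, not Google's implementation; the GRU encoders, layer sizes and fixed two-speaker setup are all assumptions.

```python
import torch
import torch.nn as nn

# Hedged sketch (not Google's model): per-speaker face embeddings and the
# mixture spectrogram are encoded separately, fused into a joint
# representation, and decoded into one time-frequency mask per speaker.
class AudioVisualSeparator(nn.Module):
    def __init__(self, n_speakers=2, face_dim=512, freq_bins=257, hidden=256):
        super().__init__()
        self.n_speakers, self.freq_bins = n_speakers, freq_bins
        self.visual_enc = nn.GRU(face_dim, hidden, batch_first=True)
        self.audio_enc = nn.GRU(freq_bins, hidden, batch_first=True)
        self.fusion = nn.GRU(hidden * (n_speakers + 1), hidden, batch_first=True)
        self.mask_head = nn.Linear(hidden, freq_bins * n_speakers)

    def forward(self, faces, spectrogram):
        # faces: (batch, n_speakers, time, face_dim); spectrogram: (batch, time, freq_bins)
        visual = [self.visual_enc(faces[:, i])[0] for i in range(self.n_speakers)]
        audio, _ = self.audio_enc(spectrogram)
        fused, _ = self.fusion(torch.cat(visual + [audio], dim=-1))
        masks = torch.sigmoid(self.mask_head(fused))
        masks = masks.view(*spectrogram.shape[:2], self.n_speakers, self.freq_bins)
        # each speaker's mask multiplies the noisy mixture spectrogram
        return masks * spectrogram.unsqueeze(2)

model = AudioVisualSeparator()
separated = model(torch.randn(1, 2, 100, 512), torch.rand(1, 100, 257))
print(separated.shape)  # (1, 100, 2, 257): one masked spectrogram per speaker
```

The design point mirrored here is that the network predicts masks rather than raw audio, so its output always stays anchored to the original mixture; the real system then converts the masked spectrograms back to waveforms.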

Here are some more speech separation and enhancement results by our method, playing first the input video with mixed or noisy speech, then our results. Sound by others than the selected speakers can be entirely suppressed or suppressed to the desired level.

Application to Speech Recognition

Our method can also potentially be used as a pre-process for speech recognition and automatic video captioning. Handling overlapping speakers is a known challenge for automatic captioning systems, and separating the audio into its different sources could help in presenting more accurate and easy-to-read captions.

You can similarly see and compare the captions before and after speech separation in all the other videos in this post and on our website, by turning on closed captions in the YouTube player when playing the videos (“cc” button at the lower right corner of the player). On our project web page you can find more results, as well as comparisons with state-of-the-art audio-only speech separation and with other recent audio-visual speech separation work. Indeed, with recent advances in deep learning, there is a clear growing interest in the academic community in audio-visual analysis. For example, independently and concurrently to our work, this work from UC Berkeley explored a self-supervised approach for separating speech of on/off-screen speakers, and this work from MIT addressed the problem of separating the sound of multiple on-screen objects (e.g., musical instruments), while locating the image regions from which the sound originates. We envision a wide range of applications for this technology. We are currently exploring opportunities for incorporating it into various Google products. Stay tuned!

Source: Research Blog: Looking to Listen: Audio-Visual Speech Separation

Watch artificial intelligence create a 3D model of a person—from just a few seconds of video

Transporting yourself into a video game, body and all, just got easier. Artificial intelligence has been used to create 3D models of people’s bodies for virtual reality avatars, surveillance, visualizing fashion, or movies. But it typically requires special camera equipment to detect depth or to view someone from multiple angles. A new algorithm creates 3D models using standard video footage from one angle.

The system has three stages. First, it analyzes a video a few seconds long of someone moving—preferably turning 360° to show all sides—and for each frame creates a silhouette separating the person from the background. Based on machine learning techniques—in which computers learn a task from many examples—it roughly estimates the 3D body shape and location of joints. In the second stage, it “unposes” the virtual human created from each frame, making them all stand with arms out in a T shape, and combines information about the T-posed people into one, more accurate model. Finally, in the third stage, it applies color and texture to the model based on recorded hair, clothing, and skin.

The researchers tested the method with a variety of body shapes, clothing, and backgrounds and found that it had an average accuracy within 5 millimeters, they will report in June at the Computer Vision and Pattern Recognition conference in Salt Lake City. The system can also reproduce the folding and wrinkles of fabric, but it struggles with skirts and long hair. With a model of you, the researchers can change your weight, clothing, and pose—and even make you perform a perfect pirouette. No practice necessary.

Source: Watch artificial intelligence create a 3D model of a person—from just a few seconds of video | Science | AAAS

This AI Can Automatically Animate New Flintstones Cartoons

Researchers have successfully trained artificial intelligence to generate new clips of the prehistoric animated series based on nothing but random text descriptions of what’s happening in a scene.

A team of researchers from the Allen Institute for Artificial Intelligence, and the University of Illinois Urbana-Champaign, trained an AI by feeding it over 25,000 three-second clips of the cartoon, which hasn’t seen any new episodes in over 50 years. Most AI experiments as of late have involved generating freaky images based on what was learned, but this time the researchers included detailed descriptions and annotations of what appeared, and what was happening, in every clip the AI ingested.

As a result, the new Flintstones animations generated by the Allen Institute’s AI aren’t just random collages of chopped up cartoons. Instead, the researchers are able to feed the AI a very specific description of a scene, and it outputs a short clip featuring the characters, props, and locations specified—most of the time.

The quality of the animations that are generated is awful at best; no one’s going to be fooled into thinking these are the Hanna-Barbera originals. But seeing an AI generate a cartoon, featuring iconic characters, all by itself, is a fascinating sneak peek at how some films and TV shows might be made one day.

Source: This AI Can Automatically Animate New Flintstones Cartoons

FDA approves AI-powered software to detect diabetic retinopathy

30.3 million Americans have diabetes according to a 2015 CDC study. An additional 84.1 million have prediabetes, which often leads to the full disease within five years. It’s important to detect diabetes early to avoid health complications like heart disease, stroke, amputation of extremities and vision loss. Technology increasingly plays an important role in early detection, too. In that vein, the US Food and Drug Administration (FDA) has just approved an AI-powered device that can be used by non-specialists to detect diabetic retinopathy in adults with diabetes.

Diabetic retinopathy occurs when the high levels of blood sugar in the bloodstream cause damage to your retina’s blood vessels. It’s the most common cause of vision loss, according to the FDA. The approval comes for a device called IDx-DR, a software program that uses an AI algorithm to analyze images of the eye that can be taken in a regular doctor’s office with a special camera, the Topcon NW400.

The photos are then uploaded to a server that runs IDx-DR, which can then tell the doctor if there is a more than mild level of diabetic retinopathy present. If not, it will advise a re-screen in 12 months. The device and software can be used by health care providers who don’t normally provide eye care services. The FDA warns that you shouldn’t be screened with the device if you have had laser treatment, eye surgery or injections, or if you have other conditions, like persistent vision loss, blurred vision, floaters, previously diagnosed macular edema and more.
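The clinical decision the article describes is simple enough to sketch. A hedged illustration (the function name and message wording are assumptions, not IDx-DR's actual interface):

```python
# Hedged sketch of the screening logic described above (not IDx-DR's API):
# the software reports whether "more than mild" diabetic retinopathy was
# detected, and the clinic either refers the patient or schedules a rescreen.
def screening_recommendation(more_than_mild_dr_detected: bool) -> str:
    if more_than_mild_dr_detected:
        return "More than mild DR detected: refer to an eye care professional."
    return "Negative for more than mild DR: rescreen in 12 months."
```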

Source: FDA approves AI-powered software to detect diabetic retinopathy

After Millions of Trials, These Simulated Humans Learned to Do Perfect Backflips and Cartwheels

Using well-established machine learning techniques, researchers from University of California, Berkeley have taught simulated humanoids to perform over 25 natural motions, from somersaults and cartwheels through to high leg kicks and breakdancing. The technique could lead to more realistic video gameplay and more agile robots.

[…]

UC Berkeley graduate student Xue Bin “Jason” Peng, along with his colleagues, has combined two techniques—motion-capture technology and deep reinforcement learning—to create something completely new: a system that teaches simulated humanoids how to perform complex physical tasks in a highly realistic manner. Learning from scratch, and with limited human intervention, the digital characters learned how to kick, jump, and flip their way to success. What’s more, they even learned how to interact with objects in their environment, such as barriers placed in their way or objects hurled directly at them.

[…]

The new system, dubbed DeepMimic, works a bit differently. Instead of pushing the simulated character towards a specific end goal, such as walking, DeepMimic uses motion-capture clips to “show” the AI what the end goal is supposed to look like. In experiments, Bin’s team took motion-capture data from more than 25 different physical skills, from running and throwing to jumping and backflips, to “define the desired style and appearance” of the skill, as Peng explained at the Berkeley Artificial Intelligence Research (BAIR) blog.
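The core trick can be sketched as a reward term that scores how closely the simulated character's pose tracks the reference mocap frame at each step. This is a hedged simplification, not the DeepMimic code; the exponential form and scale factor are illustrative.

```python
import numpy as np

# Hedged sketch: reward is near 1.0 when the simulated joint angles match the
# mocap reference frame and decays toward 0 as the pose error grows, so
# reinforcement learning pushes the policy toward reproducing the clip.
def imitation_reward(sim_joint_angles: np.ndarray,
                     mocap_joint_angles: np.ndarray,
                     scale: float = 2.0) -> float:
    pose_error = float(np.sum((sim_joint_angles - mocap_joint_angles) ** 2))
    return float(np.exp(-scale * pose_error))

print(imitation_reward(np.zeros(10), np.zeros(10)))      # 1.0, perfect match
print(imitation_reward(np.zeros(10), np.full(10, 0.5)))  # much smaller
```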

Results didn’t happen overnight. The virtual characters tripped, stumbled, and fell flat on their faces repeatedly until they finally got the movements right. It took about a month of simulated “practice” for each skill to develop, as the humanoids went through literally millions of trials trying to nail the perfect backflip or flying leg kick. But with each failure came an adjustment that took it closer to the desired goal.

Bots trained across a wide variety of skills.
GIF: Berkeley Artificial Intelligence Research

Using this technique, the researchers were able to produce agents who behaved in a highly realistic, natural manner. Impressively, the bots were also able to manage never-before-seen conditions, such as challenging terrain or obstacles. This was an added bonus of the reinforcement learning, and not something the researchers had to work on specifically.

“We present a conceptually simple [reinforcement learning] framework that enables simulated characters to learn highly dynamic and acrobatic skills from reference motion clips, which can be provided in the form of mocap data [i.e. motion capture] recorded from human subjects,” writes Peng. “Given a single demonstration of a skill, such as a spin-kick or a backflip, our character is able to learn a robust policy to imitate the skill in simulation. Our policies produce motions that are nearly indistinguishable from mocap,” adding that “We’re moving toward a virtual stuntman.”

Simulated dragon.
GIF: Berkeley Artificial Intelligence Research

Not to be outdone, the researchers used DeepMimic to create realistic movements from simulated lions, dinosaurs, and mythical beasts. They even created a virtual version of ATLAS, the humanoid robot voted most likely to destroy humanity. This platform could conceivably be used to produce more realistic computer animation, but also for virtual testing of robots.

Source: After Millions of Trials, These Simulated Humans Learned to Do Perfect Backflips and Cartwheels

Jaywalkers under surveillance in Shenzhen soon to be punished via text messages

Intellifusion, a Shenzhen-based AI firm that provides technology to the city’s police to display the faces of jaywalkers on large LED screens at intersections, is now talking with local mobile phone carriers and social media platforms such as WeChat and Sina Weibo to develop a system where offenders will receive personal text messages as soon as they violate the rules, according to Wang Jun, the company’s director of marketing solutions.

“Jaywalking has always been an issue in China and can hardly be resolved just by imposing fines or taking photos of the offenders. But a combination of technology and psychology … can greatly reduce instances of jaywalking and will prevent repeat offences,” Wang said.

[…]

For the current system installed in Shenzhen, Intellifusion installed cameras with 7 million pixels of resolution to capture photos of pedestrians crossing the road against traffic lights. Facial recognition technology identifies the individual from a database and displays a photo of the jaywalking offence, the family name of the offender and part of their government identification number on large LED screens above the pavement.
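Conceptually, that lookup-and-display step is a nearest-neighbour search over face embeddings followed by partial masking of the record. A toy, hedged sketch (the record fields, similarity threshold and masking format are assumptions, not Intellifusion's system):

```python
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hedged toy: match a camera embedding against enrolled records and prepare
# only a family name and a partial ID number for the LED screen.
def identify_offender(query: np.ndarray, database: list, threshold: float = 0.6):
    best = max(database, key=lambda rec: cosine(query, rec["embedding"]))
    if cosine(query, best["embedding"]) < threshold:
        return None  # no confident match, nothing is displayed
    masked_id = best["id_number"][:4] + "*" * 10 + best["id_number"][-2:]  # assumed masking
    return f"{best['family_name']} ({masked_id})"

db = [{"family_name": "Wang", "id_number": "440301199001011234",
       "embedding": np.random.rand(128)}]
print(identify_offender(np.random.rand(128), db))
```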

In the 10 months to February this year, as many as 13,930 jaywalking offenders were recorded and displayed on the LED screen at one busy intersection in Futian district, the Shenzhen traffic police announced last month.

Taking it a step further, in March the traffic police launched a webpage which displays photos, names and partial ID numbers of jaywalkers.

These measures have effectively reduced the number of repeat offenders, according to Wang.

Source: Jaywalkers under surveillance in Shenzhen soon to be punished via text messages | South China Morning Post

Wow, that’s a scary way to scan your entire population

AI Imagines Nude Paintings as Terrifying Pools of Melting Flesh

When Robbie Barrat trained an AI to study and reproduce classical nude paintings, he expected something at least recognizable. What the AI produced instead was unfamiliar and unsettling, but still intriguing. The “paintings” look like flesh-like ice cream, spilling into pools that only vaguely recall a woman’s body. Barrat told Gizmodo these meaty blobs, disturbing and unintentional as they are, may impact both art and AI.

“Before, you would be feeding the computer a set of rules it would execute perfectly, with no room for interpretation by the computer,” Barrat said via email. “Now with AI, it’s all about the machine’s interpretation of the dataset you feed it—in this case how it (strangely) interprets the nude portraits I fed it.”

AI’s influence is certainly more pronounced in this project than in most computer generated art, but while that wasn’t what Barrat intended, he says the results were much better this way.

“Would I want the results to be more realistic? Absolutely not,” he said. “I want to get AI to generate new types of art we haven’t seen before; not force some human perspective on it.”

Barrat explained the process of training the AI to produce imagery of a curving body from some surreal parallel universe:

“I used a dataset of thousands of nude portraits I scraped, along with techniques from a new paper that recently came out called ‘Progressive Growing of GANs’ to generate the images,” he said. “The generator tries to generate paintings that fool the discriminator, and the discriminator tries to learn how to tell the difference between ‘fake’ paintings that the generator feeds it, and real paintings from the dataset of nude portraits.”

The Francis Bacon-esque paintings were purely serendipitous.

“What happened with the nude portraits is that the generator figured it could just feed the discriminator blobs of flesh, and the discriminator wasn’t able to tell the difference between strange blobs of flesh and humans, so since the generator could consistently fool the discriminator by painting these strange forms of flesh instead of realistic nude portraits; both components stopped learning and getting better at painting.”
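For readers new to the generator/discriminator dynamic Barrat is describing, here is a minimal, generic GAN training step in PyTorch. It is a sketch under assumed network sizes and hyperparameters, not his progressive-growing setup.

```python
import torch
import torch.nn as nn

# Generic GAN sketch: G maps noise to a flattened 64x64 image, D scores
# whether an image looks real. If D can no longer separate flesh-blobs from
# real paintings, both losses flatten out and learning stalls, as Barrat saw.
G = nn.Sequential(nn.Linear(100, 256), nn.ReLU(), nn.Linear(256, 64 * 64), nn.Tanh())
D = nn.Sequential(nn.Linear(64 * 64, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(real_images):  # real_images: (batch, 64*64) scaled to [-1, 1]
    batch = real_images.size(0)
    fake = G(torch.randn(batch, 100))

    # discriminator: label real paintings 1, generated images 0
    d_loss = bce(D(real_images), torch.ones(batch, 1)) + \
             bce(D(fake.detach()), torch.zeros(batch, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # generator: try to make the discriminator call its fakes real
    g_loss = bce(D(fake), torch.ones(batch, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()
```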

As Barrat pointed out on Twitter, this method of working with a computer program has some art history precedent. Having an AI execute the artist’s specific directions is reminiscent of instructional art—a conceptual art technique, best exemplified by Sol LeWitt, where artists provide specific instructions for others to create the artwork. (For example: Sol LeWitt’s Wall Drawing, Boston Museum: “On a wall surface, any continuous stretch of wall, using a hard pencil, place fifty points at random. The points should be evenly distributed over the area of the wall. All of the points should be connected by straight lines.”)

Giving the AI limited autonomy to create art may be more than just a novelty; it may eventually lead to a truly new form of generating art with entirely new subjectivities.

“I want to use AI to make its own new and original artworks, not just get AI to mimic things that people were making in the 1600’s.”

Source: AI Imagines Nude Paintings as Terrifying Pools of Melting Flesh

AI predicts your lifespan using activity tracking apps

Researchers can estimate your expected lifespan based on physiological traits like your genes or your circulating blood factor, but that’s not very practical on a grand scale. There may be a shortcut, however: the devices you already have on your body. Russian scientists have crafted an AI-based algorithm that uses the activity tracking from smartphones and smartwatches to estimate your lifespan with far greater precision than past models.

The team used a convolutional neural network to find the “biologically relevant” motion patterns in a large set of US health survey data and correlate that to both lifespans and overall health. It would look for not just step counts, but how often you switch between active and inactive periods — many of the other factors in your life, such as your sleeping habits and gym visits, are reflected in those switches. After that, it was just a matter of applying the understanding to a week’s worth of data from test subjects’ phones. You can even try it yourself through Gero Lifespan, an iPhone app that uses data from Apple Health, Fitbit and Rescuetime (a PC productivity measurement app) to predict your longevity.
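A hedged sketch of the general approach, assuming a week of minute-level step counts as input; the architecture and output are illustrative, not the Gero model.

```python
import torch
import torch.nn as nn

# Hedged sketch: a 1-D convolutional network over a week of minute-level step
# counts produces a single health/longevity score. The convolutions can pick
# up patterns such as switches between active and inactive periods, which the
# article says carry most of the signal.
class ActivityNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=9, stride=4), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=9, stride=4), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.head = nn.Linear(32, 1)

    def forward(self, steps):  # steps: (batch, 1, 7 * 24 * 60)
        return self.head(self.features(steps).squeeze(-1))

model = ActivityNet()
week_of_steps = torch.rand(4, 1, 7 * 24 * 60)  # toy data standing in for tracker logs
print(model(week_of_steps).shape)              # (4, 1) risk/longevity scores
```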

This doesn’t provide a full picture of your health, as it doesn’t include your diet, genetics and other crucial factors. Doctors would ideally use both mobile apps and clinical analysis to give you a proper estimate, and the scientists are quick to acknowledge that what you see here isn’t completely ready for medical applications. The AI is still more effective than past approaches, though, and it could be useful for more accurate health risk models that help everything from insurance companies (which already use activity tracking as an incentive) to the development of anti-aging treatments.

Source: AI predicts your lifespan using activity tracking apps

No idea what the percentages are though

Emmanuel Macron Q&A: France’s President Discusses Artificial Intelligence Strategy

On Thursday, Emmanuel Macron, the president of France, gave a speech laying out a new national strategy for artificial intelligence in his country. The French government will spend €1.5 billion ($1.85 billion) over five years to support research in the field, encourage startups, and collect data that can be used, and shared, by engineers. The goal is to start catching up to the US and China and to make sure the smartest minds in AI—hello Yann LeCun—choose Paris over Palo Alto.

Directly after his talk, he gave an exclusive and extensive interview, entirely in English, to WIRED Editor-in-Chief Nicholas Thompson about the topic and why he has come to care so passionately about it.

[…]

AI will raise a lot of issues in ethics, in politics, it will question our democracy and our collective preferences. For instance, if you take healthcare: you can totally transform medical care making it much more predictive and personalized if you get access to a lot of data. We will open our data in France. I made this decision and announced it this afternoon. But the day you start dealing with privacy issues, the day you open this data and unveil personal information, you open a Pandora’s Box, with potential use cases that will not be increasing the common good and improving the way to treat you. In particular, it’s creating a potential for all the players to select you. This can be a very profitable business model: this data can be used to better treat people, it can be used to monitor patients, but it can also be sold to an insurer that will have intelligence on you and your medical risks, and could get a lot of money out of this information. The day we start to make such business out of this data is when a huge opportunity becomes a huge risk. It could totally dismantle our national cohesion and the way we live together. This leads me to the conclusion that this huge technological revolution is in fact a political revolution.

When you look at artificial intelligence today, the two leaders are the US and China. In the US, it is entirely driven by the private sector, large corporations, and some startups dealing with them. All the choices they will make are private choices that deal with collective values. That’s exactly the problem you have with Facebook and Cambridge Analytica or autonomous driving. On the other side, Chinese players collect a lot of data driven by a government whose principles and values are not ours. And Europe has not exactly the same collective preferences as US or China. If we want to defend our way to deal with privacy, our collective preference for individual freedom versus technological progress, integrity of human beings and human DNA, if you want to manage your own choice of society, your choice of civilization, you have to be able to be an acting part of this AI revolution . That’s the condition of having a say in designing and defining the rules of AI. That is one of the main reasons why I want to be part of this revolution and even to be one of its leaders. I want to frame the discussion at a global scale.

[…]

I want my country to be the place where this new perspective on AI is built, on the basis of interdisciplinarity: this means crossing maths, social sciences, technology, and philosophy. That’s absolutely critical. Because at one point in time, if you don’t frame these innovations from the start, a worst-case scenario will force you to deal with this debate down the line. I think privacy has been a hidden debate for a long time in the US. Now, it emerged because of the Facebook issue. Security was also a hidden debate of autonomous driving. Now, because we’ve had this issue with Uber, it rises to the surface. So if you don’t want to block innovation, it is better to frame it by design within ethical and philosophical boundaries. And I think we are very well equipped to do it, on top of developing the business in my country.

But I think as well that AI could totally jeopardize democracy. For instance, we are using artificial intelligence to organize the access to universities for our students. That puts a lot of responsibility on an algorithm. A lot of people see it as a black box, they don’t understand how the student selection process happens. But the day they start to understand that this relies on an algorithm, this algorithm has a specific responsibility. If you want, precisely, to structure this debate, you have to create the conditions of fairness of the algorithm and of its full transparency. I have to be confident for my people that there is no bias, at least no unfair bias, in this algorithm. I have to be able to tell French citizens, “OK, I encouraged this innovation because it will allow you to get access to new services, it will improve your lives—that’s a good innovation to you.” I have to guarantee there is no bias in terms of gender, age, or other individual characteristics, except if this is the one I decided on behalf of them or in front of them. This is a huge issue that needs to be addressed. If you don’t deal with it from the very beginning, if you don’t consider it is as important as developing innovation, you will miss something and at a point in time, it will block everything. Because people will eventually reject this innovation.

[…]

your algorithm and be sure that this is trustworthy.” The power of consumption society is so strong that it gets people to accept to provide a lot of personal information in order to get access to services largely driven by artificial intelligence on their apps, laptops and so on. But at some point, as citizens, people will say, “I want to be sure that all of this personal data is not used against me, but used ethically, and that everything is monitored. I want to understand what is behind this algorithm that plays a role in my life.” And I’m sure that a lot of startups or labs or initiatives which will emerge in the future, will reach out to their customers and say “I allow you to better understand the algorithm we use and the bias or non-bias.” I’m quite sure that’s one of the next waves coming in AI. I think it will increase the pressure on private players. These new apps or sites will be able to tell people: “OK! You can go to this company or this app because we cross-check everything for you. It’s safe,” or on the contrary: “If you go to this website or this app or this research model, it’s not OK, I have no guarantee, I was not able to check or access the right information about the algorithm”.

Source: Emmanuel Macron Q&A: France’s President Discusses Artificial Intelligence Strategy | WIRED

Is there alien life out there? Let’s turn to AI, problem solver du jour

A team of astroboffins have built artificial neural networks that estimate the probability of exoplanets harboring alien life.

The research was presented during a talk on Wednesday at the European Week of Astronomy and Space Science in Liverpool, United Kingdom.

The neural network works by classifying planets into five different conditions: the present-day Earth, the early Earth, Mars, Venus or Saturn’s moon Titan. All of these objects have a rocky core and an atmosphere, two requirements scientists believe are necessary for sustaining the right environments for life to blossom.

To train the system, researchers collected spectral data describing which chemical elements are present in a planet’s atmosphere. They then created hundreds of these “atmospheric profiles” as inputs, and the neural network gives a rough estimate of the probability that a particular planet might support life by classifying it into one of those five types.

If a planet is judged as Earth-like, it means it has a high probability of life. But if it’s classified as being closer to Venus, then the chances are lower.
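A minimal sketch of such a five-way classifier, assuming a fixed-length spectral profile as input. The network size, the number of spectral bins and the way a rough "probability of life" is read off are assumptions, not the Plymouth team's model.

```python
import torch
import torch.nn as nn

# Hedged sketch: map an atmospheric spectral profile to five reference
# classes; the probability mass on the Earth-like classes serves as a crude
# habitability proxy, as the article describes.
CLASSES = ["present-day Earth", "early Earth", "Mars", "Venus", "Titan"]

model = nn.Sequential(
    nn.Linear(100, 64), nn.ReLU(),   # 100 assumed spectral bins per profile
    nn.Linear(64, len(CLASSES)),
)

profile = torch.rand(1, 100)                     # one synthetic atmospheric profile
probs = torch.softmax(model(profile), dim=-1)[0]
life_proxy = float(probs[0] + probs[1])          # present-day Earth + early Earth
print({c: round(float(p), 3) for c, p in zip(CLASSES, probs)}, life_proxy)
```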

“We’re currently interested in these artificial neural networks (ANNs) for prioritising exploration for a hypothetical, intelligent, interstellar spacecraft scanning an exoplanet system at range,” said Christopher Bishop, a PhD student at Plymouth University.

“We’re also looking at the use of large area, deployable, planar Fresnel antennas to get data back to Earth from an interstellar probe at large distances. This would be needed if the technology is used in robotic spacecraft in the future.”

Experimental

At the moment, however, the ANN is more of a proof of concept. Angelo Cangelosi, professor of artificial intelligence and cognition at Plymouth University and the supervisor of the project, said initial results seem promising.

“Given the results so far, this method may prove to be extremely useful for categorizing different types of exoplanets using results from ground–based and near Earth observatories.”

A couple of exoplanet-hunting telescopes that will use spectroscopy to analyze planets’ chemical compositions are expected to launch in the near future.

NASA’s Transiting Exoplanet Survey Satellite (TESS) will monitor the brightest stars in the sky to look for periodic dips in brightness when an orbiting planet crosses its path. The European Space Agency has also announced Ariel, a mission that will use infrared spectroscopy to study exoplanet atmospheres.

The Kepler Space Telescope is already looking for new candidates – although it’s set to retire soon – and is gathering similar data. It is hoped that analyzing the spectral data of exoplanets could help scientists choose better targets for future missions, where spacecraft can be sent for more detailed observations.

Source: Is there alien life out there? Let’s turn to AI, problem solver du jour • The Register

The thing about ML models is that shit in leads to shit out. We have no data on inhabited planets apart from Earth, so it seems to me that the assumptions these guys are making aren’t worth a damn.

Researchers develop device that can ‘hear’ your internal voice

Researchers have created a wearable device that can read people’s minds when they use an internal voice, allowing them to control devices and ask queries without speaking.

The device, called AlterEgo, can transcribe words that wearers verbalise internally but do not say out loud, using electrodes attached to the skin.

“Our idea was: could we have a computing platform that’s more internal, that melds human and machine in some ways and that feels like an internal extension of our own cognition?” said Arnav Kapur, who led the development of the system at MIT’s Media Lab.

Kapur describes the headset as an “intelligence-augmentation”, or IA, device. It was presented at the Association for Computing Machinery’s Intelligent User Interface conference in Tokyo. It is worn around the jaw and chin, clipped over the top of the ear to hold it in place. Four electrodes under the white plastic device make contact with the skin and pick up the subtle neuromuscular signals that are triggered when a person verbalises internally. When someone says words inside their head, artificial intelligence within the device can match particular signals to particular words, feeding them into a computer.
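As a rough, hedged illustration of the recognition step only (this is not MIT's pipeline; the electrode count, window length, hand-crafted features and classifier are all assumptions):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hedged toy: windows of multi-channel neuromuscular readings are featurized
# and mapped to a small vocabulary of silently spoken words. A real system
# uses learned features, per-user calibration and far more data.
rng = np.random.default_rng(0)
VOCAB = ["up", "down", "left", "right", "select"]

def featurize(window: np.ndarray) -> np.ndarray:
    # window: (channels, samples) of electrode readings
    return np.concatenate([window.mean(axis=1), window.std(axis=1)])

# placeholder recordings standing in for ~15 minutes of per-user calibration
X = np.stack([featurize(rng.normal(size=(4, 250))) for _ in range(500)])
y = rng.integers(0, len(VOCAB), size=500)

clf = RandomForestClassifier(n_estimators=100).fit(X, y)
new_window = rng.normal(size=(4, 250))
print(VOCAB[clf.predict(featurize(new_window).reshape(1, -1))[0]])
```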

Watch the AlterEgo being demonstrated – video

The computer can then respond through the device using a bone conduction speaker that plays sound into the ear without the need for an earphone to be inserted, leaving the wearer free to hear the rest of the world at the same time. The idea is to create an outwardly silent computer interface that only the wearer of the AlterEgo device can speak to and hear.

[…]

The AlterEgo device managed an average of 92% transcription accuracy in a 10-person trial with about 15 minutes of customising to each person. That’s several percentage points below the 95%-plus accuracy rate that Google’s voice transcription service is capable of with a traditional microphone, but Kapur says the system will improve in accuracy over time. The human threshold for voice word accuracy is thought to be around 95%.

Kapur and team are currently working on collecting data to improve recognition and widen the number of words AlterEgo can detect. It can already be used to control a basic user interface such as the Roku streaming system, moving and selecting content, and can recognise numbers, play chess and perform other basic tasks.

The eventual goal is to make interfacing with AI assistants such as Google’s Assistant, Amazon’s Alexa or Apple’s Siri less embarrassing and more intimate, allowing people to communicate with them in a manner that appears to be silent to the outside world – a system that sounds like science fiction but appears entirely possible.

The only downside is that users will have to wear a device strapped to their face, a barrier smart glasses such as Google Glass failed to overcome. But experts think the technology has much potential, not only in the consumer space for activities such as dictation but also in industry.

Source: Researchers develop device that can ‘hear’ your internal voice | Technology | The Guardian

IBM claims its machine learning library is 46x faster than TensorFlow • The Register

IBM boasts that machine learning is not just quicker on its POWER servers than on TensorFlow in the Google Cloud, it’s 46 times quicker.

Back in February Google software engineer Andreas Sterbenz wrote about using Google Cloud Machine Learning and TensorFlow on click prediction for large-scale advertising and recommendation scenarios.

He trained a model to predict display ad clicks on Criteo Labs click logs, which are over 1TB in size and contain feature values and click feedback from millions of display ads.

Data pre-processing (60 minutes) was followed by the actual learning, using 60 worker machines and 29 parameter machines for training. The model took 70 minutes to train, with an evaluation loss of 0.1293. We understand this is a rough indicator of result accuracy.

Sterbenz then used different modelling techniques to get better results, reducing the evaluation loss, which all took longer, eventually using a deep neural network with three epochs (a measure of the number of times all of the training vectors are used once to update the weights), which took 78 hours.

[…]

Thomas Parnell and Celestine Dünner at IBM Research in Zurich used the same source data – Criteo Terabyte Click Logs, with 4.2 billion training examples and 1 million features – and the same ML model, logistic regression, but a different ML library. It’s called Snap Machine Learning.

They ran their session using Snap ML running on four Power System AC922 servers, meaning eight POWER9 CPUs and 16 Nvidia Tesla V100 GPUs. Instead of taking 70 minutes, it completed in 91.5 seconds, 46 times faster.

They prepared a chart showing their Snap ML result, the Google TensorFlow result and three others.

A 46x speed improvement over TensorFlow is not to be sneezed at. What did they attribute it to?

They say Snap ML features several hierarchical levels of parallelism to partition the workload among different nodes in a cluster, takes advantage of accelerator units, and exploits multi-core parallelism on the individual compute units:

  1. First, data is distributed across the individual worker nodes in the cluster
  2. On a node data is split between the host CPU and the accelerating GPUs with CPUs and GPUs operating in parallel
  3. Data is sent to the multiple cores in a GPU and the CPU workload is multi-threaded

Snap ML has nested hierarchical algorithmic features to take advantage of these three levels of parallelism.
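A toy sketch of that three-level data partitioning (purely illustrative Python; Snap ML's actual scheduler, solver and GPU kernels are not shown):

```python
import numpy as np

# Toy illustration: examples are split across worker nodes, each node's shard
# is split between its host CPU and GPUs, and each device shard is divided
# among that device's cores or streams.
def partition(n_examples, n_nodes, gpus_per_node, cores_per_device):
    examples = np.arange(n_examples)
    plan = {}
    for node, node_shard in enumerate(np.array_split(examples, n_nodes)):
        devices = ["cpu"] + [f"gpu{i}" for i in range(gpus_per_node)]
        for dev, dev_shard in zip(devices, np.array_split(node_shard, len(devices))):
            plan[(node, dev)] = np.array_split(dev_shard, cores_per_device)
    return plan

plan = partition(n_examples=1_000_000, n_nodes=4, gpus_per_node=4, cores_per_device=8)
print(len(plan), [len(chunk) for chunk in plan[(0, "gpu0")]][:3])
```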

Source: IBM claims its machine learning library is 46x faster than TensorFlow • The Register

The Hilarious (and Terrifying?) Ways Algorithms Have Outsmarted Their Creators

As research into AI grows ever more ambitious and complex, these robot brains will challenge the fundamental assumptions of how we humans do things. And, as ever, the only true law of robotics is that computers will always do literally, exactly what you tell them to.

A paper recently published on arXiv highlights just a handful of incredible and slightly terrifying ways that algorithms think. Some of these AIs were designed to reflect evolution by simulating generations; others conquered problems posed by their human masters with strange, uncanny, and brilliant solutions.

The Surprising Creativity of Digital Evolution: A Collection of Anecdotes from the Evolutionary Computation and Artificial Life Research Communities covers some 27 anecdotes from various computer science projects and is worth a read on its own, but here are a few highlights:

  • A study designed to evolve moving creatures generated ‘hackers’ that would break their simulation by clipping into the ground and using the “free energy” of the simulation’s correction to speed towards their goal.
  • An AI project which pit programs against each other in games of five-in-a-row Tic-Tac-Toe on an infinitely expansive board surfaced the extremely successful method of requesting moves involving extremely long memory addresses which would crash the opponent’s computer and award a win by default.
  • A program designed to simulate efficient ways of braking an aircraft as it landed on an aircraft carrier learned that by maximizing the force on landing—the opposite of its actual goal—the variable holding that value would overflow and flip to zero, creating a practically catastrophic, but technically perfect solution.
  • A test that challenged a simulated robot to walk without allowing its feet to touch the ground saw the robot flip on its back and walk on its elbows (or knees?) as shown in the tweet above.
  • A study to evolve a simulated creature that could jump as high as possible yielded top-heavy creatures on tiny poles that would fall over and spin in mid-air for a technically high ‘jump.’

While the most amusing examples are clearly ones where algorithms abused bugs in their simulations (essentially glitches in the Matrix that gave them superpowers), the paper outlines some surprising solutions that could have practical benefits as well. One algorithm invented a spinning-type movement for robots which would minimize the negative effect of inconsistent hardware between bots, for instance.

As the paper notes in its discussion—and you may already be thinking—these amusing stories also reflect the potential for evolutionary algorithms or neural networks to stumble upon solutions to problems that are outside-the-box in dangerous ways. They’re a funnier version of the classic AI nightmare where computers tasked with creating peace on Earth decide the most efficient solution is to exterminate the human race.

The solution, the paper suggests, is not fear but careful experimentation. As humans gain more experience in training these sorts of algorithms, and tweaking along the way, experts gain a better sense of intuition. Still, as these anecdotes prove, it’s basically impossible to avoid unexpected results. The key is to be prepared—and to not hand over the nuclear arsenal to a robot for its very first test.

Source: The Hilarious (and Terrifying?) Ways Algorithms Have Outsmarted Their Creators

AI software that can reproduce like a living thing? Yup, boffins have only gone and done it • The Register

A pair of computer scientists have created a neural network that can self-replicate.

“Self-replication is a key aspect of biological life that has been largely overlooked in Artificial Intelligence systems,” they argue in a paper popped onto arXiv this month.

It’s an important process in reproduction for living things, and is an important step for evolution through natural selection. Oscar Chang, first author of the paper and a PhD student at Columbia University, explained to The Register that the goal was to see if AI could be made to be continually self improving by mimicking the biological self-replication process.

“The primary motivation here is that AI agents are powered by deep learning, and a self-replication mechanism allows for Darwinian natural selection to occur, so a population of AI agents can improve themselves simply through natural selection – just like in nature – if there was a self-replication mechanism for neural networks.”

The researchers compare their work to quines, a type of computer program that learns to produce copies of its source code. In neural networks, however, instead of the source code it’s the weights – which determine the connections between the different neurons – that are being cloned.

The researchers set up a “vanilla quine” network, a feed-forward system that produces its own weights as outputs. The vanilla quine network can also be used to self-replicate its weights and solve a task. They decided to use it for image classification on the MNIST dataset, where computers have to identify the correct digit from a set of handwritten numbers from zero to nine.
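A hedged sketch of the "vanilla quine" idea in PyTorch, simplified from the paper's setup: each weight index is passed through a fixed random embedding (so the input size does not depend on the number of weights), and the network is trained to output the current value of each of its own weights. The auxiliary MNIST task is omitted here.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
EMBED, HIDDEN = 32, 64

# The trainable parameters of this MLP are exactly the values it must learn
# to reproduce.
net = nn.Sequential(nn.Linear(EMBED, HIDDEN), nn.ReLU(), nn.Linear(HIDDEN, 1))
params = list(net.parameters())
n_weights = sum(p.numel() for p in params)

# Fixed, untrained embedding of each weight's index.
index_embedding = torch.randn(n_weights, EMBED)

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(2000):
    target = torch.cat([p.detach().reshape(-1) for p in params])  # current weights (moving target)
    pred = net(index_embedding).squeeze(-1)                       # one prediction per weight
    loss = ((pred - target) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

print(float(loss))  # self-replication error after training
```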

[…]

The test network required 60,000 MNIST images for training, another 10,000 for testing. And after 30 runs, the quine network had an accuracy rate of 90.41 per cent. It’s not a bad start, but its performance doesn’t really compare to larger, more sophisticated image recognition models out there.

The paper states that “self-replication occupies a significant portion of the neural network’s capacity.” In other words, the neural network cannot focus on the image recognition task if it also has to self-replicate.

“This is an interesting finding: it is more difficult for a network that has increased its specialization at a particular task to self-replicate. This suggests that the two objectives are at odds with each other,” the paper said.

Chang explained he wasn’t sure why this happened, but it’s what happens in nature too.

Source: AI software that can reproduce like a living thing? Yup, boffins have only gone and done it • The Register

MIT builds neural network chip with a 95% reduction in power consumption, allowing it to be used in mobile devices

Most recent advances in artificial-intelligence systems such as speech- or face-recognition programs have come courtesy of neural networks, densely interconnected meshes of simple information processors that learn to perform tasks by analyzing huge sets of training data.

But neural nets are large, and their computations are energy intensive, so they’re not very practical for handheld devices. Most smartphone apps that rely on neural nets simply upload data to internet servers, which process it and send the results back to the phone.

Now, MIT researchers have developed a special-purpose chip that increases the speed of neural-network computations by three to seven times over its predecessors, while reducing power consumption 94 to 95 percent. That could make it practical to run neural networks locally on smartphones or even to embed them in household appliances.

“The general processor model is that there is a memory in some part of the chip, and there is a processor in another part of the chip, and you move the data back and forth between them when you do these computations,” says Avishek Biswas, an MIT graduate student in electrical engineering and computer science, who led the new chip’s development.

“Since these machine-learning algorithms need so many computations, this transferring back and forth of data is the dominant portion of the energy consumption. But the computation these algorithms do can be simplified to one specific operation, called the dot product. Our approach was, can we implement this dot-product functionality inside the memory so that you don’t need to transfer this data back and forth?”
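Biswas's point is easy to see in code: a dense layer is just one dot product per output neuron. A NumPy illustration (the MIT chip performs the analogous multiply-accumulate inside the memory array instead of shuttling data to a separate processor):

```python
import numpy as np

x = np.random.rand(256)        # input activations
W = np.random.rand(128, 256)   # layer weights

y = W @ x                      # 128 dot products, one per output neuron
assert np.allclose(y[0], np.dot(W[0], x))
```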

Source: Neural networks everywhere | MIT News

A video game-playing AI beat Q*bert in a way no one’s ever seen before

The finding comes from a paper published this week by a trio of machine learning researchers from the University of Freiburg in Germany. They were exploring a particular method of teaching AI agents to navigate video games (in this case, desktop ports of old Atari titles from the 1980s) when they noticed something odd: the software they were testing had discovered a bug in the port of the retro video game Q*bert that allowed it to rack up near-infinite points.

As the trio describe in the paper, published on pre-print server arXiv, the agent was learning how to play Q*bert when it discovered an “interesting solution.” Normally, in Q*bert, players jump from cube to cube, with this action changing the platforms’ colors. Change all the colors (and dispatch some enemies), and you’re rewarded with points and sent to the next level. The AI found a better way, though:

First, it completes the first level and then starts to jump from platform to platform in what seems to be a random manner. For a reason unknown to us, the game does not advance to the second round but the platforms start to blink and the agent quickly gains a huge amount of points (close to 1 million for our episode time limit).
[…]
It’s important to note, though, that the agent is not approaching this problem in the same way that a human would. It’s not actively looking for exploits in the game with some Matrix-like computer-vision. The paper is actually a test of a broad category of AI research known as “evolutionary algorithms.” This is pretty much what it sounds like, and involves pitting algorithms against one another to see which can complete a given task best, then adding small tweaks (or mutations) to the survivors to see if they then fare better. This way, the algorithms slowly get better and better.
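In toy form, a generic evolutionary loop of this kind looks roughly like the sketch below (not the Freiburg group's implementation; population size, mutation noise and the survival rule are placeholders):

```python
import random

# Generic evolutionary-algorithm sketch: score a population of parameter
# vectors, keep the best performers, and refill the population with mutated
# copies of the survivors.
def evolve(fitness, dim=10, pop_size=50, generations=200, sigma=0.1):
    population = [[random.gauss(0, 1) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(population, key=fitness, reverse=True)
        survivors = ranked[: pop_size // 5]                  # keep the top 20%
        population = survivors + [
            [w + random.gauss(0, sigma) for w in random.choice(survivors)]
            for _ in range(pop_size - len(survivors))        # mutated offspring
        ]
    return max(population, key=fitness)

# toy fitness: closer to the all-ones vector is better
best = evolve(lambda v: -sum((x - 1.0) ** 2 for x in v))
print(round(sum(best) / len(best), 2))  # should be near 1.0
```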

Source: A video game-playing AI beat Q*bert in a way no one’s ever seen before – The Verge

AI models leak secret data too easily

A paper released on arXiv last week by a team of researchers from the University of California, Berkeley, National University of Singapore, and Google Brain reveals just how vulnerable deep learning is to information leakage.

The researchers labelled the problem “unintended memorization” and explained it happens if miscreants can gain access to the model’s code and apply a variety of search algorithms. That’s not an unrealistic scenario considering the code for many models is available online. And it means that text messages, location histories, emails or medical data can be leaked.

Nicholas Carlini, first author of the paper and a PhD student at UC Berkeley, told The Register that the team “don’t really know why neural networks memorize these secrets right now”.

“At least in part, it is a direct response to the fact that we train neural networks by repeatedly showing them the same training inputs over and over and asking them to remember these facts. At the end of training, a model might have seen any given input ten or twenty times, or even a hundred, for some models.

“This allows them to know how to perfectly label the training data – because they’ve seen it so much – but don’t know how to perfectly label other data. What we exploit to reveal these secrets is the fact that models are much more confident on data they’ve seen before,” he explained.
Secrets worth stealing are the easiest to nab

In the paper, the researchers showed how easy it is to steal secrets such as social security and credit card numbers, which can be easily identified from a neural network’s training data.

They used the example of an email dataset comprising several hundred thousand emails from different senders containing sensitive information. This was split into different senders who had sent at least one piece of secret data, and used to train a two-layer long short-term memory (LSTM) network to generate the next sequence of characters.
[…]
The chances of sensitive data becoming available are also raised when the miscreant knows the general format of the secret. Credit card numbers, phone numbers and social security numbers all follow the same template with a limited number of digits – a property the researchers call “low entropy”.
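To make that concrete, here is a hedged sketch of the format-guided search idea. The `log_prob_next_char` method is a hypothetical stand-in for scoring text under the trained network; this is not the paper's exact exposure metric.

```python
# Hedged sketch: when the attacker knows a secret's format, they can enumerate
# candidates and rank them by how confident the trained model is on each one.
def char_log_likelihood(model, text: str) -> float:
    total = 0.0
    for i, ch in enumerate(text):
        total += model.log_prob_next_char(text[:i], ch)  # assumed model API
    return total

def top_candidates(model, fmt: str = "my PIN is {:04d}", n: int = 10_000, k: int = 5):
    candidates = [fmt.format(i) for i in range(n)]
    ranked = sorted(candidates, key=lambda c: char_log_likelihood(model, c), reverse=True)
    # A secret memorized from the training data tends to score far above every
    # other candidate of the same format, which is what makes extraction work.
    return ranked[:k]
```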
[…]
Luckily, there are ways to get around the problem. The researchers recommend developers use “differential privacy algorithms” to train models. Companies like Apple and Google already employ these methods when dealing with customer data.

Private information is scrambled and randomised so that it is difficult to reproduce it. Dawn Song, co-author of the paper and a professor in the department of electrical engineering and computer sciences at UC Berkeley, told us the following:

Source: Boffins baffled as AI training leaks secrets to canny thieves • The Register

Amadeus invests in CrowdVision to help airports manage growing passenger volumes using AI camera tech

CrowdVision is an early stage company that uses computer vision software and artificial intelligence to help airports monitor the flow of passengers in real time to minimise queues and more efficiently manage resources. The software is designed to comply fully with data privacy and security legislation.

CrowdVision data improves plans and can help airports react decisively to keep travellers moving and make their experience more enjoyable. CrowdVision’s existing airport customers are benefiting from reduced queues and waiting times, leaving passengers to spend more time and more money in retail areas. Others have optimised allocation of staff, desks, e-gates and security lanes to make the most of their existing infrastructure and postpone major capital expenditure on expansions.

Source: Amadeus invests in CrowdVision to help airports manage growing passenger volumes

Google: 60.3% of potentially harmful Android apps in 2017 were detected via machine learning

When Google shared earlier this year that more than 700,000 apps were removed from Google Play in 2017 for violating the app store’s policies (a 70 percent year-over-year increase), the company credited its implementation of machine learning models and techniques to detect abusive app content and behaviors such as impersonation, inappropriate content, or malware.

But the company did not share any details. Now we’re learning that 6 out of every 10 detections were thanks to machine learning. Oh, and the team says “we expect this to increase in the future.”

Every day, Play Protect automatically reviews more than 50 billion apps — these automatic reviews led to the removal of nearly 39 million PHAs last year, Google shared.

Source: Google: 60.3% of potentially harmful Android apps in 2017 were detected via machine learning | VentureBeat