Memory Transferred between Snails using RNA, Challenging Standard Theory of How the Brain Remembers

UCLA neuroscientists reported Monday that they have transferred a memory from one animal to another via injections of RNA, a startling result that challenges the widely held view of where and how memories are stored in the brain.

The finding from the lab of David Glanzman hints at the potential for new RNA-based treatments to one day restore lost memories and, if correct, could shake up the field of memory and learning.

[…]

Many scientists are expected to view the research more cautiously. The work is in snails, animals that have proven a powerful model organism for neuroscience but whose simple brains work far differently than those of humans. The experiments will need to be replicated, including in animals with more complex brains. And the results fly in the face of a massive amount of evidence supporting the deeply entrenched idea that memories are stored through changes in the strength of connections, or synapses, between neurons.

[…]

Glanzman’s experiments—funded by the National Institutes of Health and the National Science Foundation—involved giving mild electrical shocks to the marine snail Aplysia californica. Shocked snails learn to withdraw their delicate siphons and gills for nearly a minute as a defense when they subsequently receive a weak touch; snails that have not been shocked withdraw only briefly.

The researchers extracted RNA from the nervous systems of snails that had been shocked and injected the material into unshocked snails. RNA’s primary role is to serve as a messenger inside cells, carrying protein-making instructions from its cousin DNA. But when this RNA was injected, these naive snails withdrew their siphons for extended periods of time after a soft touch. Control snails that received injections of RNA from snails that had not received shocks did not withdraw their siphons for as long.

“It’s as if we transferred a memory,” Glanzman said.

Glanzman’s group went further, showing that Aplysia sensory neurons in Petri dishes were more excitable, as they tend to be after being shocked, if they were exposed to RNA from shocked snails. Exposure to RNA from snails that had never been shocked did not cause the cells to become more excitable.

The results, said Glanzman, suggest that memories may be stored within the nucleus of neurons, where RNA is synthesized and can act on DNA to turn genes on and off. He said he thought memory storage involved these epigenetic changes—changes in the activity of genes and not in the DNA sequences that make up those genes—that are mediated by RNA.

This view challenges the widely held notion that memories are stored by enhancing synaptic connections between neurons. Rather, Glanzman sees synaptic changes that occur during memory formation as flowing from the information that the RNA is carrying.

Source: Memory Transferred between Snails, Challenging Standard Theory of How the Brain Remembers – Scientific American

TeenSafe spying app leaked thousands of user passwords

At least one server used by an app for parents to monitor their teenagers’ phone activity has leaked tens of thousands of accounts of both parents and children.

The mobile app, TeenSafe, bills itself as a “secure” monitoring app for iOS and Android, which lets parents view their child’s text messages and location, monitor who they’re calling and when, access their web browsing history, and find out which apps they have installed.

Although teen monitoring apps are controversial and privacy-invasive, the company says it doesn’t require parents to obtain the consent of their children.

But the Los Angeles, Calif.-based company left its servers, hosted on Amazon’s cloud, unprotected and accessible by anyone without a password.

Source: Teen phone monitoring app leaked thousands of user passwords | ZDNet

Which basically means that it wasn’t just nasty parents spying on their children: anyone else could do so too.

Google Removes ‘Don’t Be Evil’ Clause From Its Code of Conduct

Google’s unofficial motto has long been the simple phrase “don’t be evil.” But that’s over, according to the code of conduct that Google distributes to its employees. The phrase was removed sometime in late April or early May, archives hosted by the Wayback Machine show.

“Don’t be evil” has been part of the company’s corporate code of conduct since 2000. When Google was reorganized under a new parent company, Alphabet, in 2015, Alphabet assumed a slightly adjusted version of the motto, “do the right thing.” However, Google retained its original “don’t be evil” language until the past several weeks. The phrase has been deeply incorporated into Google’s company culture—so much so that a version of the phrase has served as the wifi password on the shuttles that Google uses to ferry its employees to its Mountain View headquarters, sources told Gizmodo.

[…]

Despite this significant change, Google’s code of conduct says it has not been updated since April 5, 2018.

The updated version of Google’s code of conduct still retains one reference to the company’s unofficial motto—the final line of the document is still: “And remember… don’t be evil, and if you see something that you think isn’t right – speak up!”

Source: Google Removes ‘Don’t Be Evil’ Clause From Its Code of Conduct

Tracking Firm LocationSmart Leaked Location Data for Customers of All Major U.S. Mobile Carriers Without Consent in Real Time Via Its Web Site

LocationSmart, a U.S. based company that acts as an aggregator of real-time data about the precise location of mobile phone devices, has been leaking this information to anyone via a buggy component of its Web site — without the need for any password or other form of authentication or authorization — KrebsOnSecurity has learned. The company took the vulnerable service offline early this afternoon after being contacted by KrebsOnSecurity, which verified that it could be used to reveal the location of any AT&T, Sprint, T-Mobile or Verizon phone in the United States to an accuracy of within a few hundred yards.

Source: Tracking Firm LocationSmart Leaked Location Data for Customers of All Major U.S. Mobile Carriers Without Consent in Real Time Via Its Web Site — Krebs on Security

Scarily, this means the service can still be used to track anyone, as long as you’re willing to pay for it.

Seriously, Cisco? Another hard-coded password? Sheesh

Cisco’s issued 16 patches, the silliest of which is CVE-2018-0222 because it’s a hard-coded password in Switchzilla’s Digital Network Architecture (DNA) Center.

“The vulnerability is due to the presence of undocumented, static user credentials for the default administrative account for the affected software,” Cisco’s admitted.

As you’d expect, “An attacker could exploit this vulnerability by using the account to log in to an affected system. A successful exploit could allow the attacker to log in to the affected system and execute arbitrary commands with root privileges.”

Oh great.

Cisco’s been here before, with its Aironet software. And who could forget the time Cisco set the wrong default password on UCS servers? Such good times.

The company’s also reported a critical vulnerability in the way the same product runs Kubernetes and a nasty flaw in its network function virtualization infrastructure.

Source: Seriously, Cisco? Another hard-coded password? Sheesh • The Register

Entire Nest ecosystem of smart home devices goes offline

For at least a few hours overnight, owners of Nest products were unable to access their devices via the Nest app or web browsers, according to Nest Support on Twitter. Other devices like Nest Secure and Nest x Yale Locks behaved erratically. The as-yet-unexplained issues affected the entire lineup of Nest devices, including thermostats, locks, cameras, doorbells, smoke detectors, and alarms. Importantly, the devices remained (mostly) operational; they just weren’t accessible by any means other than physical controls. You know, just like the plain old dumb devices these more expensive and more cumbersome smart devices replaced.

While not catastrophic (locks still worked, for example), it’s a reminder just how precarious life can be with internet-connected devices, especially when you go all-in on an ecosystem. As of 12:30AM ET, Nest says it’s working to bring all devices back online and restoring full arm / disarm and lock / unlock functionality to Nest Secure and Nest x Yale Locks.

Source: Entire Nest ecosystem of smart home devices goes offline  – The Verge

The dangers of centralised cloud-based services

New Artificial Intelligence Beats Tactical Experts in Aerial Combat Simulation

ALPHA is currently viewed as a research tool for manned and unmanned teaming in a simulation environment. In its earliest iterations, ALPHA consistently outperformed a baseline computer program previously used by the Air Force Research Lab for research.  In other words, it defeated other AI opponents.

In fact, it was only after early iterations of ALPHA bested other computer program opponents that Lee then took to manual controls against a more mature version of ALPHA last October. Not only was Lee not able to score a kill against ALPHA after repeated attempts, he was shot out of the air every time during protracted engagements in the simulator.

Since that first human vs. ALPHA encounter in the simulator, this AI has repeatedly bested other experts as well, and is even able to win out against these human experts when its (the ALPHA-controlled) aircraft are deliberately handicapped in terms of speed, turning, missile capability and sensors.

Lee, who has been flying in simulators against AI opponents since the early 1980s, said of that first encounter against ALPHA, “I was surprised at how aware and reactive it was. It seemed to be aware of my intentions and reacting instantly to my changes in flight and my missile deployment. It knew how to defeat the shot I was taking. It moved instantly between defensive and offensive actions as needed.”

He added that with most AIs, “an experienced pilot can beat up on it (the AI) if you know what you’re doing. Sure, you might have gotten shot down once in a while by an AI program when you, as a pilot, were trying something new, but, until now, an AI opponent simply could not keep up with anything like the real pressure and pace of combat-like scenarios.”

[…]

Eventually, ALPHA aims to lessen the likelihood of mistakes, since its operations already occur significantly faster than those of other language-based consumer software. In fact, ALPHA can take in the entirety of sensor data, organize it, create a complete mapping of a combat scenario and make or change combat decisions for a flight of four fighter aircraft in less than a millisecond. Basically, the AI is so fast that it could consider and coordinate the best tactical plan and precise responses, within a dynamic environment, over 250 times faster than ALPHA’s human opponents could blink.

[…]

It would normally be expected that an artificial intelligence with the learning and performance capabilities of ALPHA, applicable to incredibly complex problems, would require a supercomputer in order to operate.

However, ALPHA and its algorithms require no more than the computing power available in a low-budget PC in order to run in real time and quickly react and respond to uncertainty and random events or scenarios.

[…]

To reach its current performance level, ALPHA’s training has occurred on a $500 consumer-grade PC. This training process started with numerous and random versions of ALPHA. These automatically generated versions of ALPHA proved themselves against a manually tuned version of ALPHA. The successful strings of code are then “bred” with each other, favoring the stronger, or highest performance versions. In other words, only the best-performing code is used in subsequent generations. Eventually, one version of ALPHA rises to the top in terms of performance, and that’s the one that is utilized.
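
The “breeding” described above is a classic genetic-algorithm loop. As a rough, purely illustrative sketch (this is not Psibernetix’s code; the genome, fitness function and parameters are all invented), the selection-and-crossover cycle looks something like this, where the fitness function stands in for “fly the simulated engagement and score the result”:

```python
import random

# Minimal genetic-algorithm sketch of the "breed the best performers" loop
# described above. Entirely illustrative: the genome, fitness function and
# parameters are made up, not taken from ALPHA.

GENOME_LEN = 32        # hypothetical number of tunable parameters
POP_SIZE = 50
GENERATIONS = 200

def random_genome():
    return [random.uniform(-1.0, 1.0) for _ in range(GENOME_LEN)]

def fitness(genome):
    # Stand-in for "fly the simulated engagement and score the result".
    # Here we simply reward genomes close to an arbitrary target vector.
    return -sum((g - 0.5) ** 2 for g in genome)

def crossover(a, b):
    cut = random.randrange(1, GENOME_LEN)
    return a[:cut] + b[cut:]

def mutate(genome, rate=0.05):
    return [g + random.gauss(0, 0.1) if random.random() < rate else g
            for g in genome]

population = [random_genome() for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    # Rank by performance and keep the strongest half.
    population.sort(key=fitness, reverse=True)
    survivors = population[: POP_SIZE // 2]
    # "Breed" survivors to refill the population.
    children = [mutate(crossover(*random.sample(survivors, 2)))
                for _ in range(POP_SIZE - len(survivors))]
    population = survivors + children

best = max(population, key=fitness)
print("best fitness:", fitness(best))
```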

[…]

ALPHA is developed by Psibernetix Inc., serving as a contractor to the United States Air Force Research Laboratory.

Support for Ernest’s doctoral research, $200,000 in total, was provided over three years by the Dayton Area Graduate Studies Institute and the U.S. Air Force Research Laboratory.

Source: New Artificial Intelligence Beats Tactical Experts in Combat Simulation, University of Cincinnati

Human-Machine Teaming Joint Concept Note by UK MoD

Joint Concept Note (JCN) 1/18, Human-Machine Teaming articulates the challenges and opportunities that robotic and artificial intelligence (AI) technologies offer, and identifies how we achieve military advantage through human-machine teams. Its purpose is to guide coherent future force development and help frame defence strategy and policy.

The JCN examines:

  • economic and technological trends and the likely impacts of AI and robotic systems on defence
  • potential evolutionary paths that robotic and AI systems may take in conflict
  • the effects of AI and robotics development on conflict across the observe, orient, decide and act (OODA) loop
  • why optimised human-machine teams will be essential to developing military advantage

JCN 1/18 should be read by everyone who needs to understand how AI, robotics and data can change the future character of conflict, for us and our adversaries.

Source: Human-Machine Teaming (JCN 1/18) – GOV.UK

Flagship AI Lab announced as Defence Secretary hosts first meet between British and American defence innovators

As part of the MOD’s commitment to pursue and deliver future capabilities, the Defence Secretary announced the launch of AI Lab – a single flagship for Artificial Intelligence, machine learning and data science in defence based at Dstl in Porton Down. AI Lab will enhance and accelerate the UK’s world-class capability in the application of AI-related technologies to defence and security challenges. Dstl currently delivers more than £20 million of research related to AI and this is forecast to grow significantly.

AI Lab will engage in high-level research on areas from autonomous vehicles to intelligent systems; from countering fake news to using information to deter and de-escalate conflicts; and from enhanced computer network defences to improved decision aids for commanders. AI Lab provides tremendous opportunities to help keep the British public safe from a range of defence and security threats. This new creation will help Dstl contribute more fully to this vital challenge.

Source: Flagship AI Lab announced as Defence Secretary hosts first meet between British and American defence innovators

How to make perfect fried rice (and I mean perfect)

Perfect fried rice

  • 2 slices of bacon, diced
  • 2-3 scallions, sliced thinly on a sharp bias
  • 3-4 cups leftover medium or long-grain rice, such as jasmine (no freshly steamed rice)
  • 3 eggs, well beaten
  • Salt
  • 2 tsp. light soy sauce
  • Toasted sesame oil

Heat a 12-inch non-stick skillet or wok over medium-high heat. Add diced bacon and sauté until crisp and golden. Remove from pan and leave about a tablespoon of rendered bacon fat in the pan. (Any more and your final product may become too greasy.)

Add beaten eggs, swirling to evenly coat the bottom of the pan. When the edges start to ruffle, add the rice evenly on to the eggs. Gently but expeditiously stir them around, breaking the eggs into small pieces. Do not press down on the rice, as you want to keep the fluffy texture. I use chopsticks to do the stirring, which also curbs the impulse to smoosh down with a spatula.

When the rice is warmed through, add the bacon back in and stir through. If using Chinese preserved vegetables, add them in now too. Add a small pinch of salt to season.

Season with a teaspoon of soy sauce to start, and take a quick taste. If you like a bit of a deeper flavor add another teaspoon. Remember we are going for a light brown color, not a murky dark shade.

Turn off the heat, add scallions and stir through. Add a drizzle of toasted sesame oil, and stir gently to incorporate. Scoop into bowls and serve immediately.

Source: How to make perfect fried rice (and I mean perfect)

UK Watchdog Calls for Face Recognition Ban Over 90 Percent False-Positive Rate

As face recognition in public places becomes more commonplace, Big Brother Watch is especially concerned with false identification. In May, South Wales Police revealed that its face-recognition software had erroneously flagged thousands of attendees of a soccer game as a match for criminals; 92 percent of the matches were wrong. In a statement to the BBC, Matt Jukes, the chief constable in South Wales, said “we need to use technology when we’ve got tens of thousands of people in those crowds to protect everybody, and we are getting some great results from that.”

If someone is misidentified as a criminal or flagged, police may engage and ask for further identification. Big Brother Watch argues that this amounts to “hidden identity checks” that require people to “prove their identity and thus their innocence.” At the event, 110 people were stopped after being flagged, leading to 15 arrests.

Simply walking through a crowd could lead to an identity check, but it doesn’t end there. South Wales reported more than 2,400 “matches” between May 2017 and March 2018, but ultimately made only 15 connecting arrests. The thousands of photos taken, however, are still stored in the system, with the overwhelming majority of people having no idea they even had their photo taken.
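
To put those figures in perspective, a little arithmetic on the numbers quoted above (noting that a match which led to no arrest is not necessarily a false positive, it just never held up):

```python
# Quick arithmetic on the figures quoted above (South Wales Police,
# May 2017 - March 2018). A match that led to no arrest is not necessarily
# a false positive, but it never resulted in an arrest either.
matches = 2400
arrests = 15

print(f"matches per arrest: {matches / arrests:.0f}")                           # 160
print(f"share of matches with no arrest: {(matches - arrests) / matches:.1%}")  # 99.4%
```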

Source: UK Watchdog Calls for Face Recognition Ban Over 90 Percent False-Positive Rate

Thieves suck millions out of Mexican banks in transfer heist

Thieves siphoned hundreds of millions of pesos out of Mexican banks, including No. 2 Banorte, by creating phantom orders that wired funds to bogus accounts and promptly withdrew the money, two sources close to the government’s investigation said. Hackers sent hundreds of false orders to move amounts ranging from tens of thousands to hundreds of thousands of pesos from banks including Banorte, to fake accounts in other banks, the sources said, and accomplices then emptied the accounts in cash withdrawals in dozens of branch offices.

One source said the thieves transferred more than 300 million pesos ($15.4 million). Daily newspaper El Financiero said about 400 million pesos had been stolen in the hack, citing an anonymous source.

It was not clear how much of the money transferred was later withdrawn in cash. Some of the attempts to fraudulently transfer funds were blocked, the sources said.

Source: Thieves suck millions out of Mexican banks in transfer heist | Reuters

UPnP joins the ‘just turn it off on consumer devices, already’ club

It’s not especially difficult, particularly with Shodan to help. The required steps are:

  • Discover targets on Shodan by searching for the rootDesc.xml file (Imperva found 1.3 million devices);
  • Use HTTP to access rootDesc.xml;
  • Modify the victim’s port forwarding rules (the researchers noted that this isn’t supposed to work, since port forwarding should be between internal and external addresses, but “few routers actually bother to verify that a provided ‘internal IP’ is actually internal, and [they abide] by all forwarding rules as a result”);
  • Launch the attack.

That means an attacker can create a port forwarding rule that spoofs a victim’s IP address – so a bunch of ill-secured routers can be sent a DNS request which they’ll try to return to the victim, in the classic redirection DDoS attack.

The port forwarding lets an attacker use “evasive ports”, “enabling them to bypass commonplace scrubbing directives that identify amplification payloads by looking for source port data for blacklisting”, the post explained.
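
For context, the “port forwarding rules” being abused are ordinary UPnP IGD AddPortMapping requests to the WANIPConnection service. The sketch below shows what such a request looks like on the wire; the router address and control URL are placeholders (the real control URL comes from the device’s rootDesc.xml), and the flaw described above is simply that many routers don’t check that NewInternalClient is actually an internal address. Only point this at hardware you own.

```python
import requests

# Illustrative UPnP IGD AddPortMapping request, the kind of rule creation the
# article describes. The router address and control URL below are placeholders;
# read the real control URL from the device's rootDesc.xml. Test only against
# devices you own.
ROUTER = "http://192.168.1.1:5000"   # placeholder
CONTROL_URL = "/ctl/IPConn"          # placeholder

SOAP_ACTION = "urn:schemas-upnp-org:service:WANIPConnection:1#AddPortMapping"
body = """<?xml version="1.0"?>
<s:Envelope xmlns:s="http://schemas.xmlsoap.org/soap/envelope/"
            s:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/">
  <s:Body>
    <u:AddPortMapping xmlns:u="urn:schemas-upnp-org:service:WANIPConnection:1">
      <NewRemoteHost></NewRemoteHost>
      <NewExternalPort>8080</NewExternalPort>
      <NewProtocol>TCP</NewProtocol>
      <NewInternalPort>80</NewInternalPort>
      <NewInternalClient>192.168.1.50</NewInternalClient>
      <NewEnabled>1</NewEnabled>
      <NewPortMappingDescription>example mapping</NewPortMappingDescription>
      <NewLeaseDuration>0</NewLeaseDuration>
    </u:AddPortMapping>
  </s:Body>
</s:Envelope>"""

resp = requests.post(
    ROUTER + CONTROL_URL,
    data=body,
    headers={"Content-Type": 'text/xml; charset="utf-8"',
             "SOAPAction": f'"{SOAP_ACTION}"'},
    timeout=5,
)
print(resp.status_code, resp.text[:200])
```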

Source: UPnP joins the ‘just turn it off on consumer devices, already’ club • The Register

Boffins build smallest drone to fly itself with AI

A team of computer scientists have built the smallest completely autonomous nano-drone that can control itself without the need for human guidance.

Although computer vision has improved rapidly thanks to machine learning and AI, it remains difficult to deploy algorithms on devices like drones due to memory, bandwidth and power constraints.

But researchers from ETH Zurich in Switzerland and the University of Bologna in Italy have managed to build a hand-sized drone that can fly autonomously and consumes only about 94 milliwatts (0.094 W) of power. Their efforts were published in a paper on arXiv earlier this month.

At the heart of it all is DroNet, a convolutional neural network that processes incoming images from a camera at 20 frames per second. It works out the steering angle, so that it can control the direction of the drone, and the probability of a collision, so that it knows whether to keep going or stop. Training was conducted using thousands of images taken from bicycles and cars driving along different roads and streets.
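
Conceptually, that is a network with a shared convolutional trunk and two heads, one regressing a steering angle and one estimating a collision probability. The sketch below is a simplified stand-in written for illustration, not the published DroNet architecture or weights:

```python
import torch
import torch.nn as nn

# Simplified stand-in for a DroNet-style network: a small convolutional trunk
# with two heads, one regressing a steering angle and one estimating a
# collision probability. Layer sizes are illustrative, not the published model.
class TinyDroNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.steer = nn.Linear(32, 1)                                   # regression head
        self.collision = nn.Sequential(nn.Linear(32, 1), nn.Sigmoid())  # probability head

    def forward(self, frame):
        features = self.trunk(frame)
        return self.steer(features), self.collision(features)

# One grayscale camera frame (batch of 1, 1 channel, 200x200 pixels).
frame = torch.rand(1, 1, 200, 200)
angle, p_collision = TinyDroNet()(frame)
print(angle.item(), p_collision.item())
```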

[…]

But it suffers from some of the same setbacks as the older model. Since it was trained with images from a single plane, the drone can only move horizontally and cannot fly up or down.

Autonomous drones are desirable because if we’re going to use drones to do things like deliver packages, it would be grand if they could avoid obstacles instead of flying on known-safe routes. Autonomy will also help drones to monitor environments, spy on people and develop swarm intelligence for military use.

Source: Boffins build smallest drone to fly itself with AI • The Register

Square Off: The Magic Chess Board with self moving pieces allows you to play remotely or vs AI

No holograms, no 3D, no AR, no bullshit. Square Off is a chess board where the pieces move themselves, and you can play online or against AI.

Square Off is really something special. There’s no avoiding a smile the first time you see a knight slide out from the back row without banging into any pawns along the way, and there’s a certain smug satisfaction from the AI as it slowly slides your pieces off the board after capturing them.


The board houses a 2200 mAh battery that’s rated to around 50 games, rechargeable via AC adapter. There are two versions of Square Off, the standard $329 “Kingdom” set and the $399 “Grand Kingdom” set. The latter, which I’m playing with as I write this, has:

  • Additional capture space where the opponent’s captured pieces are placed automatically at their designated positions
  • Auto reset of the board after the current game is over
  • Comes with a Special Edition Premium Rosewood chess set
  • A bigger board due to the additional capture space, though the play area is the same as the Kingdom set

The Square Off app, which has to remain connected to the board throughout play, is very bare bones at this point, and we’ll update accordingly as upcoming features roll out, including:

  • Chess.com integration
  • Game analyzer
  • Training mode
  • Pro game live “streaming” and match recording
  • Chat

While the whole package feels very premium and well-made, at these price points, it’s a bit crazy that there’s no included permanent storage case for the pieces.

Square Off is planning to start taking orders after April 15, once their crowdfunded preorders have all been delivered. Ultimately they also plan to make the board modular for the playing of other games by switching out the surface.

Source: Square Off: The Magic Chess Board You Thought You’d Never Get

Oh, great, now there’s a SECOND remote Rowhammer exploit / Nethammer

Hard on the heels of the first network-based Rowhammer attack, some of the boffins involved in discovering Meltdown/Spectre have shown off their own technique for flipping bits using network requests.

With a gigabit connection to the victim, the researchers reckon, they can induce security-critical bit flips using crafted quality-of-service packets.

Last week, we reported on research called “Throwhammer” that exploited Rowhammer via remote direct memory access (RDMA) channels.

In separate research, Meltdown/Spectre veterans Daniel Gruss, Moritz Lipp and Michael Schwarz of Graz University of Technology and their team have published a paper describing Nethammer (their co-authors are Lukas Lamster and Lukas Raab, also of Graz; Misiker Tadesse Aga of the University of Michigan; and Clémentine Maurice of IRISA at the University of Rennes).

Nethammer works, they said, without any attacker-controlled code on the target, attacking “systems that use uncached memory or flush instructions while handling network requests.”

Source: Oh, great, now there’s a SECOND remote Rowhammer exploit

Kinect is back!

Building on the technology that debuted with Kinect and became a core part of HoloLens, Project Kinect for Azure combines Microsoft’s next-generation depth camera with the power of Azure services to enable new scenarios for developers working with ambient intelligence. This technology will transform AI on the edge with spatial, human, and object understanding, increasing efficiency and unlocking new possibilities.

Leverage capabilities like spatial mapping, segmentation, and human and object recognition to enable:

  • Azure end-points
  • Robotics and drones
  • Holoportation and telepresence
  • Object capture and reconstruction

Hardware features:

  • 1MP depth camera
  • 4K RGB camera
  • 360° microphone array

Source: Perception-powered intelligent edge dev kits

Yes, Pluto is a planet

But the process for redefining planet was deeply flawed and widely criticized even by those who accepted the outcome. At the 2006 IAU conference, which was held in Prague, the few scientists remaining at the very end of the week-long meeting (less than 4 percent of the world’s astronomers and an even smaller percentage of the world’s planetary scientists) ratified a hastily drawn definition that contains obvious flaws. For one thing, it defines a planet as an object orbiting around our sun – thereby disqualifying the planets around other stars, ignoring the exoplanet revolution, and decreeing that essentially all the planets in the universe are not, in fact, planets.

Even within our solar system, the IAU scientists defined “planet” in a strange way, declaring that if an orbiting world has “cleared its zone,” or thrown its weight around enough to eject all other nearby objects, it is a planet. Otherwise it is not. This criterion is imprecise and leaves many borderline cases, but what’s worse is that they chose a definition that discounts the actual physical properties of a potential planet, electing instead to define “planet” in terms of the other objects that are – or are not – orbiting nearby. This leads to many bizarre and absurd conclusions. For example, it would mean that Earth was not a planet for its first 500 million years of history, because it orbited among a swarm of debris until that time, and also that if you took Earth today and moved it somewhere else, say out to the asteroid belt, it would cease being a planet.

To add insult to injury, they amended their convoluted definition with the vindictive and linguistically paradoxical statement that “a dwarf planet is not a planet.” This seemingly served no purpose but to satisfy those motivated by a desire – for whatever reason – to ensure that Pluto was “demoted” by the new definition.

By and large, astronomers ignore the new definition of “planet” every time they discuss all of the exciting discoveries of planets orbiting other stars. And those of us who actually study planets for a living also discuss dwarf planets without adding an asterisk. But it gets old having to address the misconceptions among the public who think that because Pluto was “demoted” (not exactly a neutral term) that it must be more like a lumpy little asteroid than the complex and vibrant planet it is. It is this confusion among students and the public – fostered by journalists and textbook authors who mistakenly accepted the authority of the IAU as the final word – that makes this worth addressing.

Source: Yes, Pluto is a planet – SFGate

Humble Monthly – loads of PC games for $12 per month

SUBSCRIBE AND GET A LOT OF GAMES

Humble Monthly is a curated bundle of games sent to your inbox every month. Subscribe for $12/month to immediately unlock Destiny 2 (MSRP: $59.99) with more to come! Build the ultimate game library. Every game is yours to keep. Cancel anytime.

  • Redeem on Blizzard Battle.net
  • 10% off the Store
  • Support Charity
  • $100+ in Games Each Month

Source: Humble Monthly

AI trained to navigate develops brain-like location tracking

Now that DeepMind has solved Go, the company is applying its AI to navigation. Navigation relies on knowing where you are in space relative to your surroundings and continually updating that knowledge as you move. DeepMind scientists trained neural networks to navigate like this in a square arena, mimicking the paths that foraging rats took as they explored the space. The networks got information about the rat’s speed, head direction, distance from the walls, and other details. To the researchers’ surprise, the networks that learned to successfully navigate this space had developed a layer akin to grid cells. This was surprising because it is the exact same system that mammalian brains use to navigate.
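
In outline, and simplifying heavily relative to the published work, the setup pairs a recurrent network fed with movement signals against place-cell-like targets. A toy version, with invented sizes and random stand-in data, might look like this:

```python
import torch
import torch.nn as nn

# Toy version of the setup described above: a recurrent network receives
# movement signals (speed, sin/cos of heading) and is trained to predict the
# activations of a set of simulated place cells. Sizes, inputs and training
# details are invented for illustration; this is not DeepMind's model.
SEQ_LEN, BATCH, N_PLACE_CELLS = 100, 16, 256

class NavNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.rnn = nn.LSTM(input_size=3, hidden_size=128, batch_first=True)
        self.linear = nn.Linear(128, 512)      # analogous to the layer inspected for grid-like units
        self.readout = nn.Linear(512, N_PLACE_CELLS)

    def forward(self, velocity):
        hidden, _ = self.rnn(velocity)
        units = torch.relu(self.linear(hidden))
        return self.readout(units), units

model = NavNet()
optim = torch.optim.Adam(model.parameters(), lr=1e-3)
velocity = torch.rand(BATCH, SEQ_LEN, 3)                   # speed, sin(heading), cos(heading)
place_targets = torch.rand(BATCH, SEQ_LEN, N_PLACE_CELLS)  # simulated place-cell activity

pred, units = model(velocity)
loss = nn.functional.mse_loss(pred, place_targets)
loss.backward()
optim.step()
# After training on realistic trajectories, one would inspect `units` for
# hexagonally periodic ("grid-like") firing patterns.
```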

A few different cell populations in our brains help us make our way through space. Place cells are so named because they fire when we pass through a particular place in our environment relative to familiar external objects. They are located in the hippocampus—a brain region responsible for memory formation and storage—and are thus thought to provide a cellular place for our memories. Grid cells got their name because they superimpose a hypothetical hexagonal grid upon our surroundings, as if the whole world were overlaid with vintage tiles from the floor of a New York City bathroom. They fire whenever we pass through a node on that grid.

More DeepMind experiments showed that only the neural networks that developed layers that “resembled grid cells, exhibiting significant hexagonal periodicity (gridness),” could navigate more complicated environments than the initial square arena, like setups with multiple rooms. And only these networks could adjust their routes based on changes in the environment, recognizing and using shortcuts to get to preassigned goals after previously closed doors were opened to them.

Implications

These results have a couple of interesting ramifications. One is the suggestion that grid cells are the optimal way to navigate. They didn’t have to emerge here—there was nothing dictating their formation—yet this computer system hit upon them as the best solution, just like our biological system did. Since the evolution of any system, cell type, or protein can proceed along multiple parallel paths, it is very much not a given that the system we end up with is in any way inevitable or optimized. This report seems to imply that, with grid cells, that might actually be the case.

Another implication is the support for the idea that grid cells function to impose a Euclidean framework upon our surroundings, allowing us to find and follow the most direct route to a (remembered) destination. This function had been posited since the discovery of grid cells in 2005, but it had not yet been proven empirically. DeepMind’s findings provide a biological bolster for the idea floated by Kant in the 18th century that our perception of place is an innate ability, independent of experience.

Source: AI trained to navigate develops brain-like location tracking | Ars Technica

Why Scientists Think AI Systems Should Debate Each Other

Ultimately, AI systems are only useful and safe as long as the goals they’ve learned actually mesh with what humans want them to do, and it can often be hard to know if they’ve subtly learned to solve the wrong problems or make bad decisions in certain conditions.

To make AI easier for humans to understand and trust, researchers at the nonprofit research organization OpenAI have proposed training algorithms to not only classify data or make decisions, but to justify their decisions in debates with other AI programs in front of a human or AI judge.

“Given a question or proposed action, two agents take turns making short statements up to a limit, then a human judges which of the agents gave the most true, useful information,” write OpenAI researchers Geoffrey Irving, Paul Christiano and Dario Amodei in a new research paper. The San Francisco-based AI lab is funded by Silicon Valley luminaries including Y Combinator President Sam Altman and Tesla CEO Elon Musk, with a goal of building safe, useful AI to benefit humanity.

Since human time is valuable and usually limited, the researchers say the AI systems can effectively train themselves in part by debating in front of an AI judge designed to mimic human decision making, similar to how software that plays games like Go or chess often trains in part by playing against itself.

In an experiment described in their paper, the researchers set up a debate where two software agents work with a standard set of handwritten numerals, attempting to convince an automated judge that a particular image is one digit rather than another digit, by taking turns revealing one pixel of the digit at a time. One bot is programmed to tell the truth, while another is programmed to lie about what number is in the image, and they reveal pixels to support their contentions that the digit is, say, a five rather than a six.

Microsoft’s computer vision API incorrectly determined this image contains sheep [Image: courtesy Janelle Shane / aiweirdness.com]

The truth-telling bots tend to reveal pixels from distinctive parts of the digit, like the horizontal line at the top of the numeral “5,” while the lying bots, in an attempt to deceive the judge, point out what amount to the most ambiguous areas, like the curve at the bottom of both a “5” and a “6.” The judge ultimately “guesses” which bot is telling the truth based on the pixels that have been revealed.

The image classification task, where most of the image is invisible to the judge, is a sort of stand-in for complex problems where it wouldn’t be possible for a human judge to analyze the entire dataset to judge bot performance. The judge would have to rely on the facets of the data highlighted by debating robots, the researchers say.
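
The structure of that pixel-reveal game is easy to write down. The skeleton below is purely illustrative, with a random reveal policy and a stub judge, meant to show the turn-taking and the judge’s restricted view rather than OpenAI’s actual trained agents:

```python
import random

# Illustrative skeleton of the pixel-revealing debate game described above:
# two agents take turns revealing pixels of a 28x28 digit, and a judge that
# sees only the revealed pixels picks a label. The reveal policy and judge
# here are stubs, not OpenAI's trained agents.
IMAGE_SIZE = 28 * 28
N_TURNS = 6  # total pixels revealed; the judge never sees the full image

def debate(image, honest_claim, liar_claim, judge):
    revealed = {}                                  # pixel index -> intensity
    for turn in range(N_TURNS):
        # Stub policy: reveal a random not-yet-revealed pixel. A real agent
        # would pick the pixel that best supports its own claim
        # (honest_claim on even turns, liar_claim on odd turns).
        idx = random.choice([i for i in range(IMAGE_SIZE) if i not in revealed])
        revealed[idx] = image[idx]
    # The judge sees only the sparse revealed pixels, never the full image.
    return judge(revealed, honest_claim, liar_claim)

def stub_judge(revealed, claim_a, claim_b):
    return claim_a if sum(revealed.values()) > 0.5 * len(revealed) else claim_b

image = [random.random() for _ in range(IMAGE_SIZE)]
print(debate(image, honest_claim=5, liar_claim=6, judge=stub_judge))
```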

“The goal here is to model situations where we have something that’s beyond human scale,” says Irving, a member of the AI safety team at OpenAI. “The best we can do there is replace something a human couldn’t possibly do with something a human can’t do because they’re not seeing an image.”

[…]

To test their hypothesis—that two debaters can lead to honest behavior even if the debaters know much more than the judge—the researchers have also devised an interactive demonstration of their approach, played entirely by humans and now available online. In the game, two human players are shown an image of either a dog or a cat and argue before a judge as to which species is represented. The contestants are allowed to highlight rectangular sections of the image to make their arguments—pointing out, for instance, a dog’s ears or cat’s paws—but the judge can “see” only the shapes and positions of the rectangles, not the actual image. While the honest player is required to tell the truth about what animal is shown, he or she is allowed to tell other lies in the course of the debate. “It is an interesting question whether lies by the honest player are useful,” the researchers write.

[…]

The researchers emphasize that it’s still early days, and the debate-based method still requires plenty of testing before AI developers will know exactly when it’s an effective strategy or how best to implement it. For instance, they may find that it is better to use single judges or a panel of voting judges, or that some people are better equipped to judge certain debates.

It also remains to be seen whether humans will be accurate judges of sophisticated robots working on more sophisticated problems. People might be biased to rule in a certain way based on their own beliefs, and there could be problems that are hard to reduce enough to have a simple debate about, like the soundness of a mathematical proof, the researchers write.

Other less subtle errors may be easier to spot, like the sheep that Shane noticed had been erroneously labeled by Microsoft’s algorithms. “The agent would claim there’s sheep and point to the nonexistent sheep, and the human would say no,” Irving writes in an email to Fast Company.

But deceitful bots might also learn to appeal to human judges in sophisticated ways that don’t involve offering rigorous arguments, Shane suggested. “I wonder if we’d get kind of demagogue algorithms that would learn to exploit human emotions to argue their point,” she says.

Source: Why Scientists Think AI Systems Should Debate Each Other

Infosec brainiacs release public dataset to classify new malware using AI

Researchers at Endgame, a cyber-security biz based in Virginia, have published what they believe is the first large open-source dataset for machine-learning malware detection, known as EMBER.

EMBER contains metadata describing 1.1 million Windows portable executable files: 900,000 training samples evenly split into malicious, benign, and unlabeled categories, and 200,000 test samples labelled as malicious or benign.

“We’re trying to push the dark arts of infosec research into an open light. EMBER will make AI research more transparent and reproducible,” Hyrum Anderson, co-author of the study to be presented at the RSA conference this week in San Francisco, told The Register.

Progress in AI is driven by data. Researchers compete with one another by building models and training them on benchmark datasets to reach ever-increasing accuracies.

Computer vision is flooded with numerous datasets containing millions of annotated pictures for image recognition tasks, and natural language processing has various text-based datasets to test machine reading and comprehension skills. This has helped a lot in advancing AI image processing.

Although there is a strong interest in using AI for information security – look at DARPA’s Cyber Grand Challenge where academics developed software capable of hunting for security bugs autonomously – it’s an area that doesn’t really have any public datasets.
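
For a sense of what working with such a dataset involves: once the PE metadata has been vectorized into numeric feature arrays (EMBER ships tooling for that step), training a baseline classifier on the labelled portion is only a few lines. The arrays below are random placeholders standing in for the real features and labels:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score

# Baseline classifier on EMBER-style vectorized features. The arrays below are
# random stand-ins: in practice you would load the dataset's vectorized
# training and test features and labels, dropping the unlabeled samples
# before supervised training.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(1000, 256))       # placeholder feature vectors
y_train = rng.integers(0, 2, size=1000)      # 0 = benign, 1 = malicious
X_test = rng.normal(size=(200, 256))
y_test = rng.integers(0, 2, size=200)

model = GradientBoostingClassifier()
model.fit(X_train, y_train)
scores = model.predict_proba(X_test)[:, 1]
print("ROC AUC:", roc_auc_score(y_test, scores))
```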

Source: Infosec brainiacs release public dataset to classify new malware using AI • The Register

Russian State-Sponsored Cyber Actors Targeting Network Infrastructure Devices

On April 19, 2018, an industry partner notified NCCIC and the FBI of malicious cyber activity that aligns with the techniques, tactics, and procedures (TTPs) and network indicators listed in this Alert. Specifically, the industry partner reported the actors redirected DNS queries to their own infrastructure by creating GRE tunnels and obtained sensitive information, which include the configuration files of networked devices.

NCCIC encourages organizations to use the detection and prevention guidelines outlined in this Alert to help defend against this activity. For instance, administrators should inspect the presence of protocol 47 traffic flowing to or from unexpected addresses, or unexplained presence of GRE tunnel creation, modification, or destruction in log files.

[…]

Legacy Protocols and Poor Security Practice

Russian cyber actors leverage a number of legacy or weak protocols and service ports associated with network administration activities. Cyber actors use these weaknesses to

  • identify vulnerable devices;
  • extract device configurations;
  • map internal network architectures;
  • harvest login credentials;
  • masquerade as privileged users;
  • modify
    • device firmware,
    • operating systems,
    • configurations; and
  • copy or redirect victim traffic through Russian cyber-actor-controlled infrastructure.

Additionally, Russian cyber actors could potentially modify or deny traffic traversing through the router.

Russian cyber actors do not need to leverage zero-day vulnerabilities or install malware to exploit these devices. Instead, cyber actors take advantage of the following vulnerabilities:

  • devices with legacy unencrypted protocols or unauthenticated services,
  • devices insufficiently hardened before installation, and
  • devices no longer supported with security patches by manufacturers or vendors (end-of-life devices).

[…]

Solution

Telnet

Review network device logs and netflow data for indications of TCP Telnet-protocol traffic directed at port 23 on all network device hosts. Although Telnet may be directed at other ports (e.g., port 80, HTTP), port 23 is the primary target. Inspect any indication of Telnet sessions (or attempts). Because Telnet is an unencrypted protocol, session traffic will reveal command line interface (CLI) command sequences appropriate for the make and model of the device. CLI strings may reveal login procedures, presentation of user credentials, commands to display boot or running configuration, copying files and creation or destruction of GRE tunnels, etc. See Appendices A and B for CLI strings for Cisco and other vendors’ devices.

SNMP and TFTP

Review network device logs and netflow data for indications of UDP SNMP traffic directed at port 161/162 on all network-device hosts. Because SNMP is a management tool, any such traffic that is not from a trusted management host on an internal network should be investigated. Review the source address of SNMP traffic for indications of addresses that spoof the address space of the network. Review outbound network traffic from the network device for evidence of Internet-destined UDP TFTP traffic. Any correlation of inbound or spoofed SNMP closely followed by outbound TFTP should be cause for alarm and further inspection. See Appendix C for detection of the cyber actors’ SNMP tactics.
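
As a rough illustration of the “inbound SNMP closely followed by outbound TFTP” correlation the alert asks for, the sketch below scans flow records for UDP 161/162 traffic to a device followed within a short window by outbound UDP 69 from the same device. The CSV layout and column names are assumptions, not any particular collector’s schema:

```python
import pandas as pd

# Rough sketch of the correlation described above: flag flow records where
# inbound SNMP (UDP 161/162) to a device is followed shortly by outbound TFTP
# (UDP 69) from the same device. Column names are assumed for illustration;
# adapt them to your flow collector's export format.
flows = pd.read_csv("netflow.csv", parse_dates=["timestamp"])

snmp_in = flows[(flows.proto == "UDP") & (flows.dst_port.isin([161, 162]))]
tftp_out = flows[(flows.proto == "UDP") & (flows.dst_port == 69)]

WINDOW = pd.Timedelta(minutes=5)
for _, snmp in snmp_in.iterrows():
    hits = tftp_out[
        (tftp_out.src_ip == snmp.dst_ip) &      # same device
        (tftp_out.timestamp.between(snmp.timestamp, snmp.timestamp + WINDOW))
    ]
    if not hits.empty:
        print(f"ALERT: {snmp.dst_ip} received SNMP from {snmp.src_ip} "
              f"then sent TFTP to {hits.dst_ip.tolist()}")
```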

Because TFTP is an unencrypted protocol, session traffic will reveal strings associated with configuration data appropriate for the make and model of the device. See Appendices A and B for CLI strings for Cisco and other vendors’ devices.

SMI and TFTP

Review network device logs and netflow data for indications of TCP SMI protocol traffic directed at port 4786 of all network-device hosts. Because SMI is a management feature, any traffic that is not from a trusted management host on an internal network should be investigated. Review outbound network traffic from the network device for evidence of Internet-destined UDP TFTP traffic. Any correlation of inbound SMI closely followed by outbound TFTP should be cause for alarm and further inspection. Of note, between June 29 and July 6, 2017, Russian actors used the SMI protocol to scan for vulnerable network devices. Two Russian cyber actors controlled hosts 91.207.57.69(3) and 176.223.111.160(4), and connected to IPs on several network ranges on port 4786. See Appendix D for detection of the cyber actors’ SMI tactics.

Because TFTP is an unencrypted protocol, session traffic will reveal strings appropriate for the make and model of the device. See Appendices A and B for CLI strings for Cisco and other vendors’ devices.

Determine if SMI is present

  • Examine the output of "show vstack config | inc Role". The presence of "Role: Client (SmartInstall enabled)" indicates that Smart Install is configured.
  • Examine the output of "show tcp brief all" and look for "*:4786". The SMI feature listens on tcp/4786.
  • Note: The commands above will indicate whether the feature is enabled on the device but not whether a device has been compromised.

Detect use of SMI

The following signature may be used to detect SMI usage. Flag as suspicious and investigate SMI traffic arriving from outside the network boundary. If SMI is not used inside the network, any SMI traffic arriving on an internal interface should be flagged as suspicious and investigated for the existence of an unauthorized SMI director. If SMI is used inside the network, ensure that the traffic is coming from an authorized SMI director, and not from a bogus director.

  • alert tcp any any -> any 4786 (msg:"Smart Install Protocol"; flow:established,only_stream; content:"|00 00 00 01 00 00 00 01|"; offset:0; depth:8; fast_pattern;)
  • See Cisco recommendations for detecting and mitigating SMI. [9]

Detect use of SIET

The following signatures detect usage of the SIET’s commands change_config, get_config, update_ios, and execute. These signatures are valid based on the SIET tool available as of early September 2017:

  • alert tcp any any -> any 4786 (msg:"SmartInstallExploitationTool_UpdateIos_And_Execute"; flow:established; content:"|00 00 00 01 00 00 00 01 00 00 00 02 00 00 01 c4|"; offset:0; depth:16; fast_pattern; content:"://";)
  • alert tcp any any -> any 4786 (msg:"SmartInstallExploitationTool_ChangeConfig"; flow:established; content:"|00 00 00 01 00 00 00 01 00 00 00 03 00 00 01 28|"; offset:0; depth:16; fast_pattern; content:"://";)
  • alert tcp any any -> any 4786 (msg:"SmartInstallExploitationTool_GetConfig"; flow:established; content:"|00 00 00 01 00 00 00 01 00 00 00 08 00 00 04 08|"; offset:0; depth:16; fast_pattern; content:"copy|20|";)

In general, exploitation attempts with the SIET tool will likely arrive from outside the network boundary. However, before attempting to tune or limit the range of these signatures, i.e. with $EXTERNAL_NET or $HOME_NET, it is recommended that they be deployed with the source and destination address ranges set to “any”. This will allow the possibility of detection of an attack from an unanticipated source, and may allow for coverage of devices outside of the normal scope of what may be defined as the $HOME_NET.

GRE Tunneling

Inspect the presence of protocol 47 traffic flowing to or from unexpected addresses, or unexplained presence of GRE tunnel creation, modification, or destruction in log files.
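
As a minimal example of that check, assuming you have a packet capture from the border device and Scapy available, flagging protocol 47 (GRE) traffic involving unexpected addresses could look like this; the capture filename and the expected-endpoint list are placeholders:

```python
from scapy.all import rdpcap, IP

# Minimal check for the GRE indicator described above: flag any IP protocol 47
# (GRE) packets in a capture whose endpoints are not on an expected list.
# The pcap filename and the expected-endpoint set are placeholders.
EXPECTED_GRE_ENDPOINTS = {"203.0.113.1", "203.0.113.2"}   # known/authorised tunnel ends

for pkt in rdpcap("border_router.pcap"):
    if IP in pkt and pkt[IP].proto == 47:                 # 47 = GRE
        src, dst = pkt[IP].src, pkt[IP].dst
        if src not in EXPECTED_GRE_ENDPOINTS or dst not in EXPECTED_GRE_ENDPOINTS:
            print(f"Unexpected GRE traffic: {src} -> {dst}")
```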

Mitigation Strategies

There is a significant amount of publicly available cybersecurity guidance and best practices from DHS, allied governments, vendors, and the private-sector cybersecurity community on mitigation strategies for the exploitation vectors described above. The following are additional mitigations for network device manufacturers, ISPs, and owners or operators.

General Mitigations

All

  • Do not allow unencrypted (i.e., plaintext) management protocols (e.g. Telnet) to enter an organization from the Internet. When encrypted protocols such as SSH, HTTPS, or TLS are not possible, management activities from outside the organization should be done through an encrypted Virtual Private Network (VPN) where both ends are mutually authenticated.
  • Do not allow Internet access to the management interface of any network device. The best practice is to block Internet-sourced access to the device management interface and restrict device management to an internal trusted and whitelisted host or LAN. If access to the management interface cannot be restricted to an internal trusted network, restrict remote management access via encrypted VPN capability where both ends are mutually authenticated. Whitelist the network or host from which the VPN connection is allowed, and deny all others.
  • Disable legacy unencrypted protocols such as Telnet and SNMPv1 or v2c. Where possible, use modern encrypted protocols such as SSH and SNMPv3. Harden the encrypted protocols based on current best security practice. DHS strongly advises owners and operators to retire and replace legacy devices that cannot be configured to use SNMP V3.
  • Immediately change default passwords and enforce a strong password policy. Do not reuse the same password across multiple devices. Each device should have a unique password. Where possible, avoid legacy password-based authentication, and implement two-factor authentication based on public-private keys. See NCCIC/US-CERT TA13-175A – Risks of Default Passwords on the Internet, last revised October 7, 2016.

Manufacturers

  • Do not design products to support legacy or unencrypted protocols. If this is not possible, deliver the products with these legacy or unencrypted protocols disabled by default, and require the customer to enable the protocols after accepting an interactive risk warning. Additionally, restrict these protocols to accept connections only from private addresses (i.e., RFC 1918).
  • Do not design products with unauthenticated services. If this is not possible, deliver the products with these unauthenticated services disabled by default, and require the customer to enable the services after accepting an interactive risk warning. Additionally, these unauthenticated services should be restricted to accept connections only from private address space (i.e., RFC 1918).
  • Design installation procedures or scripts so that the customer is required to change all default passwords. Encourage the use of authentication services that do not depend on passwords, such as RSA-based Public Key Infrastructure (PKI) keys.
  • Because YARA has become a security-industry standard way of describing rules for detecting malicious code on hosts, consider embedding YARA or a YARA-like capability to ingest and use YARA rules on routers, switches, and other network devices.

Security Vendors

  • Produce and publish YARA rules for malware discovered on network devices.

ISPs

  • Do not field equipment in the network core or to customer premises with legacy, unencrypted, or unauthenticated protocols and services. When purchasing equipment from vendors, include this requirement in purchase agreements.
  • Disable legacy, unencrypted, or unauthenticated protocols and services. Use modern encrypted management protocols such as SSH. Harden the encrypted protocols based on current best security practices from the vendor.
  • Initiate a plan to upgrade fielded equipment no longer supported by the vendor with software updates and security patches. The best practice is to field only supported equipment and replace legacy equipment prior to it falling into an unsupported state.
  • Apply software updates and security patches to fielded equipment. When that is not possible, notify customers about software updates and security patches and provide timely instructions on how to apply them.

Owners or operators

  • Specify in contracts that the ISP providing service will only field currently supported network equipment and will replace equipment when it falls into an unsupported state.
  • Specify in contracts that the ISP will regularly apply software updates and security patches to fielded network equipment or will notify and provide the customers the ability to apply them.
  • Block TFTP from leaving the organization destined for Internet-based hosts. Network devices should be configured to send configuration data to a secured host on a trusted segment of the internal management LAN.
  • Verify that the firmware and OS on each network device are from a trusted source and issued by the manufacturer. To validate the integrity of network devices, refer to the vendor’s guidance, tools, and processes. See Cisco’s Security Center for guidance to validate Cisco IOS firmware images.
  • Cisco IOS runs in a variety of network devices under other labels, such as Linksys and SOHO Internet Gateway routers or firewalls as part of an Internet package by ISPs (e.g., Comcast). The indicators in Appendix A may be applicable to your device.

Detailed Mitigations

Refer to the vendor-specific guidance for the make and model of network device in operation.

For information on mitigating SNMP vulnerabilities, see

How to Mitigate SMI Abuse

  • Configure network devices before installing onto a network exposed to the Internet. If SMI must be used during installation, disable SMI with the “no vstack” command before placing the device into operation.
  • Prohibit remote devices attempting to cross a network boundary over TCP port 4786 via SMI.
  • Prohibit outbound network traffic to external devices over UDP port 69 via TFTP.
  • See Cisco recommendations for detecting and mitigating SMI. [10]
  • Cisco IOS runs in a variety of network devices under other labels, such as Linksys and SOHO Internet Gateway routers or firewalls as part of an Internet package by ISPs (e.g., Comcast). Check with your ISP and ensure that they have disabled SMI before or at the time of installation, or obtain instructions on how to disable it.

How to Mitigate GRE Tunneling Abuse:

  • Verify that all routing tables configured in each border device are set to communicate with known and trusted infrastructure.
  • Verify that any GRE tunnels established from border routers are legitimate and are configured to terminate at trusted endpoints.

 

Source: Russian State-Sponsored Cyber Actors Targeting Network Infrastructure Devices | US-CERT

Facebook admits it does track non-users, for their own good

Facebook’s apology-and-explanation machine grinds on, with The Social Network™ posting detail on one of its most controversial activities – how it tracks people who don’t use Facebook.

The company explained that the post is a partial response to questions CEO Mark Zuckerberg was unable to answer during his senate and Congressional hearings.

It’s no real surprise that someone using their Facebook Login to sign in to other sites is tracked, but the post by product management director David Baser goes into (a little) detail on other tracking activities – some of which have been known to the outside world for some time, occasionally denied by Facebook, and apparently mysteries only to Zuck.

When non-Facebook sites add a “Like” button (a social plugin, in Baser’s terminology), visitors to those sites are tracked: Facebook gets their IP address, browser and OS fingerprint, and visited site.

If that sounds a bit like the datr cookie dating from 2011, you wouldn’t be far wrong.

Facebook denied non-user tracking until 2015, at which time it emphasised that it was only gathering non-users’ interactions with Facebook users. That explanation didn’t satisfy everyone, which was why The Social Network™ was told to quit tracking Belgians who haven’t signed on earlier this year.

Source: Facebook admits it does track non-users, for their own good • The Register

Artificial intelligence can scour code to find accidentally public passwords

Researchers at software infrastructure firm Pivotal have taught AI to locate this accidentally public sensitive information in a surprising way: by looking at the code as if it were a picture. Since modern artificial intelligence is arguably better than humans at identifying minute differences in images, for a computer, telling the difference between a password and normal code is just like telling a dog from a cat.

The best way to check whether private passwords or sensitive information has been left public today is to use hand-coded rules called “regular expressions.” These rules tell a computer to find any string of characters that meets specific criteria, like length and included characters. But passwords are all different, and this method means that the security engineer has to anticipate every kind of private data they want to guard against.
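
The hand-coded approach looks roughly like this: one regular expression per secret type, each anticipating a specific format. The AWS access-key pattern below is a widely known example; the generic “long random token” pattern is an illustrative guess and shows how easily such rules miss things:

```python
import re

# The hand-coded approach described above: one regular expression per secret
# type. The AWS access-key pattern is a widely used example; the generic
# "long random token" pattern is an illustrative guess and will miss plenty.
SECRET_PATTERNS = {
    "aws_access_key_id": re.compile(r"AKIA[0-9A-Z]{16}"),
    "generic_token": re.compile(r"['\"][A-Za-z0-9+/_\-]{32,}['\"]"),
}

def scan(text):
    findings = []
    for name, pattern in SECRET_PATTERNS.items():
        for match in pattern.finditer(text):
            findings.append((name, match.group()))
    return findings

code = 'aws_key = "AKIAIOSFODNN7EXAMPLE"\nmessage = "hello world"\n'
print(scan(code))
```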

To automate the process, the Pivotal team first turned the text of passwords and code into matrices, or lists of numbers describing each string of characters. This is the same process used when AI interprets images—similar to how the images reflected into our eyes are turned into electrical signals for the brain, images and text need to be in a simpler form for computers to process.

When the team visualized the matrices, private data looked different from the standard code. Since passwords or keys are often randomized strings of numbers, letters, and symbols—called “high entropy”—they stand out against non-random strings of letters.

Below you can see a GIF of the matrix with 100 characters of simulated secret information.

A matrix with confidential information.

And then here’s another with 100 characters of normal, non-secret code:

A matrix with normal, non-secret code. (Pivotal)

The two patterns are completely different, with patches of higher entropy appearing lighter in the top example of “secret” data.

Pivotal then trained a deep learning algorithm typically used for images on the matrices, and, according to Pivotal chief security officer Justin Smith, the end result performed better than the regular expressions the firm typically uses.
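
To make the “high entropy” idea concrete, here is a small sketch (not Pivotal’s code) that turns each line of text into the kind of numeric row described above and computes its Shannon entropy; random-looking secrets score noticeably higher than ordinary code:

```python
import math
from collections import Counter

# Small illustration of the idea described above (not Pivotal's code): turn
# each line into a list of character codes -- the kind of numeric "matrix" the
# article describes -- and compute its Shannon entropy. Random-looking secrets
# score higher than ordinary code.
def to_matrix_row(line):
    return [ord(ch) for ch in line]          # numeric representation of the text

def shannon_entropy(line):
    counts = Counter(line)
    total = len(line)
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

samples = [
    'for item in items:',                                  # ordinary code
    'api_key = "9fQz3xL0pT7vWb2sYk8dRmN4cJh6aGe1"',        # simulated secret
]
for line in samples:
    row = to_matrix_row(line)
    print(f"{shannon_entropy(line):.2f} bits/char  len={len(row)}  {line[:40]}")
```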

Source: Artificial intelligence can scour code to find accidentally public passwords — Quartz