Scientists develop the next generation of reservoir computing

A relatively new type of computing that mimics the way the human brain works is already transforming how scientists tackle some of the most difficult information-processing problems.

Now, researchers have found a way to make what is called reservoir computing work between 33 and a million times faster, with significantly fewer computing resources and less input data needed.

In fact, in one test of this next-generation reservoir computing, researchers solved a complex computing problem in less than a second on a desktop computer.

With current state-of-the-art technology, the same problem requires a supercomputer and still takes much longer to solve, said Daniel Gauthier, lead author of the study and professor of physics at The Ohio State University.

[…]

Reservoir computing is a machine learning algorithm developed in the early 2000s and used to solve the “hardest of the hard” computing problems, such as forecasting the evolution of dynamical systems that change over time, Gauthier said.

Dynamical systems, like the weather, are difficult to predict because just one small change in one condition can have massive effects down the line, he said.

One famous example is the “butterfly effect,” in which—in one metaphorical illustration—changes created by a butterfly flapping its wings can eventually influence the weather weeks later.
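That sensitivity to initial conditions is easy to demonstrate numerically. The sketch below uses the standard Lorenz-63 equations with their classic parameters (an illustrative choice; this is not code from the study) and integrates two trajectories whose starting points differ by one part in a billion. After a modest simulated time, the trajectories bear no resemblance to each other.

```python
import numpy as np

def lorenz_step(state, dt=0.005, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz-63 equations."""
    x, y, z = state
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return state + dt * np.array([dx, dy, dz])

a = np.array([1.0, 1.0, 1.0])
b = a + np.array([1e-9, 0.0, 0.0])   # perturb by one part in a billion

for _ in range(6000):                # roughly 30 time units
    a, b = lorenz_step(a), lorenz_step(b)

print(np.linalg.norm(a - b))         # the tiny perturbation has grown enormously
```

A forward-Euler integrator is crude, but it is enough to show the point: the initial difference of 1e-9 is amplified by the system's chaotic dynamics until the two trajectories are effectively unrelated.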

Previous research has shown that reservoir computing is well-suited for learning dynamical systems and can provide accurate forecasts about how they will behave in the future, Gauthier said.

It does that through the use of an artificial neural network, somewhat like a human brain. Scientists feed data on a dynamical system into a “reservoir” of randomly connected artificial neurons in a network. The network produces useful output that the scientists can interpret and feed back into the network, building a more and more accurate forecast of how the system will evolve in the future.
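The loop described above, a fixed random reservoir with only a trainable linear readout, can be sketched in a few lines. Everything below (the reservoir size, spectral-radius scaling, the toy sine-wave signal, and ridge regression for the readout) is an illustrative echo-state-network-style setup of my own choosing, not the study's code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (assumptions, not from the article)
n_inputs, n_reservoir = 1, 100

# Random, fixed input and reservoir weights; the reservoir matrix is
# rescaled so its spectral radius is below 1 (a common stability heuristic)
W_in = rng.uniform(-0.5, 0.5, (n_reservoir, n_inputs))
W = rng.uniform(-0.5, 0.5, (n_reservoir, n_reservoir))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))

def run_reservoir(u_seq):
    """Drive the reservoir with an input sequence and collect its states."""
    x = np.zeros(n_reservoir)
    states = []
    for u in u_seq:
        x = np.tanh(W_in @ np.atleast_1d(u) + W @ x)
        states.append(x.copy())
    return np.array(states)

# Only the linear readout is trained, via ridge regression:
# map reservoir states to the next input value (one-step-ahead forecast)
u = np.sin(0.1 * np.arange(400))     # toy signal standing in for real data
X, y = run_reservoir(u[:-1]), u[1:]
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_reservoir), X.T @ y)

pred = X @ W_out
print(float(np.mean((pred - y) ** 2)))   # training error on the toy signal
```

The key design feature, and the reason training is cheap, is that the random weights `W_in` and `W` are never updated; only the linear map `W_out` is fit, which reduces learning to a single least-squares solve.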

The larger and more complex the system, and the more accurate the scientists want the forecast to be, the bigger the reservoir of artificial neurons has to be and the more computing resources and time are needed to complete the task.

One issue has been that the reservoir of artificial neurons is a “black box,” Gauthier said, and scientists have not known exactly what goes on inside of it—they only know it works.

The artificial neural networks at the heart of reservoir computing are built on mathematics, Gauthier explained.

“We had mathematicians look at these networks and ask, ‘To what extent are all these pieces in the machinery really needed?'” he said.

In this study, Gauthier and his colleagues investigated that question and found that the whole reservoir computing system could be greatly simplified, dramatically reducing the need for computing resources and saving significant time.

They tested their concept on a forecasting task involving a weather system developed by Edward Lorenz, whose work led to our understanding of the butterfly effect.

Their next-generation reservoir computing was a clear winner over today’s state-of-the-art on this Lorenz forecasting task. In one relatively simple simulation done on a desktop computer, the new system was 33 to 163 times faster than the current model.

But when the aim was for great accuracy in the forecast, the next-generation reservoir computing was about 1 million times faster. And the new-generation computing achieved the same accuracy with the equivalent of just 28 neurons, compared to the 4,000 needed by the current-generation model, Gauthier said.

An important reason for the speed-up is that the “brain” behind this next generation of reservoir computing needs a lot less warmup and training compared to the current generation to produce the same results.

Warmup is training data that needs to be added as input into the reservoir computer to prepare it for its actual task.

[…]

“Currently, scientists have to put in 1,000 or 10,000 data points or more to warm it up. And that’s all data that is lost, that is not needed for the actual work. We only have to put in one or two or three data points,” he said.
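The contrast can be sketched loosely in code. The simplified approach described in the paper builds its features directly from a few time-delayed samples of the input plus their products, so the only "warmup" required is the handful of past samples the features reference. The feature choices, delay, toy signal, and ridge parameter below are all illustrative assumptions, not the paper's actual configuration.

```python
import numpy as np

def features(u, t, delay=1):
    """Hypothetical NG-RC-style feature vector: a constant, the current
    and one delayed sample, and their quadratic monomials."""
    lin = np.array([u[t], u[t - delay]])
    quad = np.outer(lin, lin)[np.triu_indices(2)]  # u_t^2, u_t*u_{t-d}, u_{t-d}^2
    return np.concatenate(([1.0], lin, quad))

u = np.sin(0.1 * np.arange(300))   # toy signal standing in for real data

# Warmup here is just the single delayed sample the features need
X = np.array([features(u, t) for t in range(1, len(u) - 1)])
y = u[2:]                          # one-step-ahead targets

ridge = 1e-8
W = np.linalg.solve(X.T @ X + ridge * np.eye(X.shape[1]), X.T @ y)
print(float(np.mean((X @ W - y) ** 2)))   # training error on the toy signal
```

Note the difference from the random-reservoir picture: there is no large hidden state to drive into a useful regime before forecasting can begin, which is why so few warmup points are needed.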

[…]

In their test of the Lorenz forecasting task, the researchers could get the same results using 400 data points as the current generation produced using 5,000 or more, depending on the accuracy desired.

[…]


More information: Next generation reservoir computing, Nature Communications (2021). DOI: 10.1038/s41467-021-25801-2

Source: Scientists develop the next generation of reservoir computing

FBI Had REvil’s Kaseya Ransomware Decryption Key for Weeks

The Kaseya ransomware attack, which occurred in July and affected as many as 1,500 companies worldwide, was a big, destructive mess—one of the largest and most unwieldy of its kind in recent memory. But new information shows the FBI could have lightened the blow victims suffered but chose not to.

A new report from the Washington Post shows that, shortly after the attack, the FBI came into possession of a decryption key that could unlock victims’ data—thus allowing them to get their businesses back up and running. However, instead of sharing it with them or Kaseya, the IT firm targeted by the attack, the bureau kept it a secret for approximately three weeks.

The feds reportedly did this because they were planning an operation to “disrupt” the hacker gang behind the attack—the Russia-based ransomware provider REvil—and didn’t want to tip their hand. However, before the FBI could put its plan into action, the gang mysteriously disappeared. The bureau finally shared the decryption key with Kaseya on July 21—about a week after the gang had vanished.

[…]

Source: FBI Had REvil’s Kaseya Ransomware Decryption Key for Weeks: Report

Database containing 106m Thailand travelers’ details over the past decade leaked

A database containing personal information on 106 million international travelers to Thailand was exposed to the public internet this year, a Brit biz claimed this week.

Bob Diachenko, head of cybersecurity research at product-comparison website Comparitech, said the Elasticsearch data store contained visitors’ full names, passport numbers, arrival dates, visa types, residency status, and more. It was indexed by search engine Censys on August 20, and spotted by Diachenko two days later. There were no credentials in the database, which is said to have held records dating back a decade.

[…]

Diachenko said he alerted the operator of the database, which brought it to the attention of Thai authorities, who “were quick to acknowledge the incident and swiftly secured the data,” Comparitech reported. We’re told that the IP address of the exposed database, hidden from sight a day after Diachenko raised the alarm, is still live, though connecting to it reveals that the box is now a honeypot.

[…]

We’ve contacted the Thai embassy in the US for further comment. Diachenko told The Register a “server misconfiguration” by an IT outsourcer caused the database to be exposed to the whole world.

[…]

Additionally, if you traveled to Thailand and stayed there during the pandemic, your details may already have been leaked. A government website used to sign foreigners up for COVID-19 vaccines spilled names and passport numbers in June.

Last month, Bangkok Airways was hit by the ransomware group LockBit, resulting in the publication of passenger data. And in 2018, TrueMove H, the biggest 4G mobile operator in Thailand, suffered a database breach of around 46,000 records.

Comparitech said the database it found contained several assets in addition to the 106 million records, bringing the total leaked information to around 200 GB.

Source: Database containing 106m Thailand travelers’ details leaked • The Register