A pair of computer scientists have created a neural network that can self-replicate.
“Self-replication is a key aspect of biological life that has been largely overlooked in Artificial Intelligence systems,” they argue in a paper posted to arXiv this month.
Self-replication is central to reproduction in living things, and a necessary step for evolution through natural selection. Oscar Chang, first author of the paper and a PhD student at Columbia University, explained to The Register that the goal was to see whether AI could be made continually self-improving by mimicking the biological self-replication process.
“The primary motivation here is that AI agents are powered by deep learning, and a self-replication mechanism allows for Darwinian natural selection to occur, so a population of AI agents can improve themselves simply through natural selection – just like in nature – if there was a self-replication mechanism for neural networks.”
The researchers compare their work to quines, computer programs that produce copies of their own source code. In a neural network, however, it is not the source code but the weights – which determine the strength of the connections between neurons – that are cloned.
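For readers unfamiliar with the idea, here is a classic minimal quine in Python – an illustration of the concept, not code from the paper:

```python
# A quine: a program whose output is exactly its own source code.
# %r inserts the string's own repr; %% escapes a literal percent sign.
s = 's = %r\nprint(s %% s)'
print(s % s)
```

Running it prints the two lines of the program itself, verbatim. The neural-network version replaces "print your source" with "output your own weights".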
The researchers set up a “vanilla quine” network, a feed-forward system that produces its own weights as outputs. The network can also be extended to self-replicate its weights while solving a task at the same time. They chose image classification on the MNIST dataset, in which a model has to identify the correct digit from a set of handwritten numbers from zero to nine.
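A rough sketch of how a network can be asked to output its own weights, using a toy architecture of my own choosing rather than the authors' exact setup: each weight gets an index, a fixed random projection turns that index into a small input vector, and the network is trained so that its output for index i matches its own i-th weight.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "vanilla quine" sketch (illustrative, not the paper's architecture).
d, h = 4, 5                       # projected input size, hidden units
n_w = h * d + h                   # total trainable weights: W1 (h x d) plus W2 (h,)
P = rng.normal(size=(n_w, d))     # fixed (non-trainable) coordinate projection

def predict(w):
    """Predict every weight's value from that weight's own coordinates."""
    W1 = w[: h * d].reshape(h, d)
    W2 = w[h * d:]
    hidden = np.tanh(P @ W1.T)    # shape (n_w, h)
    return hidden @ W2            # shape (n_w,): predicted weight values

def self_replication_loss(w):
    # Gap between what the network says its weights are and what they actually are.
    return np.mean((predict(w) - w) ** 2)

w = rng.normal(0.0, 0.1, size=n_w)
initial_loss = self_replication_loss(w)

# Crude finite-difference gradient descent -- fine for 25 parameters.
eps, lr = 1e-5, 0.1
for _ in range(300):
    grad = np.zeros_like(w)
    base = self_replication_loss(w)
    for i in range(n_w):
        wp = w.copy()
        wp[i] += eps
        grad[i] = (self_replication_loss(wp) - base) / eps
    w -= lr * grad

print(f"self-replication loss: {initial_loss:.5f} -> {self_replication_loss(w):.5f}")
```

One caveat visible even in this toy version: the all-zero weight vector is a degenerate “quine” (it predicts zero for everything, and its weights are all zero), so naive training can collapse toward that trivial fixed point.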
The network was trained on 60,000 MNIST images and tested on another 10,000. After 30 runs, the quine network reached an accuracy of 90.41 per cent. It’s not a bad start, but that performance falls well short of larger, more sophisticated image-recognition models out there.
The paper states that “self-replication occupies a significant portion of the neural network’s capacity.” In other words, the network has less capacity to spare for the image-recognition task when it also has to self-replicate.
“This is an interesting finding: it is more difficult for a network that has increased its specialization at a particular task to self-replicate. This suggests that the two objectives are at odds with each other,” the paper said.
Chang explained that he wasn’t sure why this happens, but noted that a similar trade-off shows up in nature too.