AI can help neuroscientists automatically map the connections between neurons in brain scans, a tedious task that can take hundreds of thousands of hours.
In a paper published in Nature Methods, AI researchers from Google collaborated with scientists from the Max Planck Institute of Neurobiology to inspect the brain of a Zebra Finch, a small Australian bird renowned for its singing.
Although the contents of their craniums are small, Zebra Finches aren't birdbrains: their connectome, the map of connections between their neurons, is densely packed. To study those connections, scientists examine slices of the brain under an electron microscope, which provides the high resolution needed to make out individual neurites, the thread-like projections extending from nerve cells.
The neural circuits then have to be reconstructed by tracing out the cells. Several methods help neuroscientists flesh these maps out, but error rates are high and human experts still need to check the results. It's a painstaking chore: a cubic millimetre of brain tissue can generate over 1,000 terabytes of data.
“A recent estimate put the amount of human labor needed to reconstruct a 100³-µm³ volume at more than 100,000 h, even with an optimized pipeline,” according to the paper.
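The 1,000-terabyte figure is easy to sanity-check with back-of-the-envelope arithmetic. The voxel size and bytes-per-voxel below are assumptions (roughly 10 nm voxels and one byte per voxel are typical for serial electron microscopy), not figures from the paper:

```python
# Rough sanity check of the data-volume claim, under two assumptions:
# ~10 nm per voxel and 1 byte per voxel (typical for serial EM data).
voxels_per_side = 1_000_000 // 10   # 1 mm = 10^6 nm, at 10 nm per voxel
voxels = voxels_per_side ** 3       # 10^15 voxels per cubic millimetre
terabytes = voxels / 1e12           # 1 byte per voxel, 10^12 bytes per TB
print(terabytes)                    # 1000.0 — consistent with "over 1,000 terabytes"
```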
Now, AI researchers have developed a new method using a recurrent convolutional neural network known as a “flood-filling network”. It’s essentially an algorithm that finds the edges of a neuron path and fleshes out the space in between to build up a map of the different connections.
“The algorithm is seeded at a specific pixel location and then iteratively ‘fills’ a region using a recurrent convolutional neural network that predicts which pixels are part of the same object as the seed,” said Viren Jain and Michal Januszewski, co-authors of the paper and AI researchers at Google.
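The seeded-fill idea above can be sketched in a few lines. This is a minimal 2-D illustration, not the authors' implementation: the `same_object_prob` function here is a hypothetical stand-in that compares pixel intensities, whereas in a real flood-filling network that prediction comes from a trained recurrent CNN that also sees its own previous output mask:

```python
from collections import deque

import numpy as np

def same_object_prob(image, pixel, seed):
    # Hypothetical stand-in for the recurrent CNN: call a pixel "same
    # object" as the seed if its intensity is close to the seed's.
    return 1.0 if abs(int(image[pixel]) - int(image[seed])) < 20 else 0.0

def flood_fill_segment(image, seed, threshold=0.5):
    """Grow a segment outward from `seed`, accepting neighbouring
    pixels the predictor scores above `threshold`."""
    mask = np.zeros(image.shape, dtype=bool)
    mask[seed] = True
    frontier = deque([seed])
    while frontier:
        y, x = frontier.popleft()
        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            ny, nx = y + dy, x + dx
            if (0 <= ny < image.shape[0] and 0 <= nx < image.shape[1]
                    and not mask[ny, nx]
                    and same_object_prob(image, (ny, nx), seed) > threshold):
                mask[ny, nx] = True
                frontier.append((ny, nx))
    return mask

# Toy 2-D "micrograph": one bright neurite on a dark background.
img = np.zeros((5, 5), dtype=np.uint8)
img[2, :] = 200                        # horizontal neurite
segment = flood_fill_segment(img, seed=(2, 2))
print(int(segment.sum()))              # 5 — only the bright row is filled
```

The real network repeats this fill from many seeds across a 3-D volume, building one segment per seed rather than a single mask.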
The flood-filling network was trained with supervised learning on a small, fully annotated region of a Zebra Finch brain. Conventional accuracy metrics are hard to apply to such reconstructions, so the researchers instead use an “expected run length” (ERL) metric, which measures how far the algorithm can trace a neuron before making a mistake.
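One simple way to realise such a metric, and an assumed simplification of the paper's exact definition, is a length-weighted average: pick a point uniformly at random along the traced skeleton and ask how long the error-free segment containing it is, which rewards long uninterrupted runs:

```python
def expected_run_length(segment_lengths):
    """Length-weighted mean of error-free segment lengths: the expected
    length of the error-free run containing a uniformly random point on
    the skeleton. An assumed, simplified form of an ERL-style metric."""
    total = sum(segment_lengths)
    return sum(l * l for l in segment_lengths) / total

# Two tracers covering the same 300 units of skeleton:
print(expected_run_length([100, 100, 100]))  # 100.0 — errors every 100 units
print(expected_run_length([10, 10, 280]))    # 262.0 — one long run dominates
```

Because the average is weighted by length, a few long error-free runs raise the score far more than many short ones, matching the intuition that a tracer which rarely errs is more useful.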
Flood-filling networks achieved a longer ERL than other deep learning methods tested on the same dataset. The algorithm was also better than humans at identifying dendritic spines, tiny protrusions jutting off dendrites that help transmit electrical signals to cells. But its recall, a measure of how complete the resulting map is, was much lower than that of reconstructions produced by professional annotators.
Another significant disadvantage of this approach is the high computational cost. “For example, a single pass of the fully convolutional FFN over a full volume is an order of magnitude more computationally expensive than the more traditional 3D convolution-pooling architecture in the baseline approach we used for comparison,” the researchers said.