A demonstration of how simple neural circuits can be hacked to solve complicated problems.
C. elegans is not much of an organism. The tiny transparent roundworm is a simple creature, and that's putting it mildly. Its brain consists of just 302 neurons, and it's one of the smallest known organisms to have a nervous system at all. This makes it a very appealing subject for neuroscientists and artificial intelligence researchers. It remains the only organism to have its entire neural circuitry fully mapped out and described.
For AI folks, the upshot is that C. elegans provides an opportunity to do AI stuff to representations of living brains—program something alive, in other words.
In a paper posted recently to the arXiv preprint server, a trio of Austrian computer scientists led by Ramin Hasani describe an algorithm that exploits a known C. elegans neural circuit for the purposes of, basically, teaching a simulated worm (wormbot, henceforth) to balance a pole on its tail, to borrow a phrase from Mike James at I Programmer. This balancing act is otherwise known as the inverted pendulum problem.
The existing circuit being exploited here is known as the tap-withdrawal (TW) circuit. Its utility is all in the name: The worm touches or is touched by something and it squirms away. Hasani and colleagues noted that this reflex is structurally similar to a classic problem in control theory: the inverted pendulum. In the inverted pendulum problem, the task is to take a pole with something heavy on one end and attach the other end to a cart or some other vehicle that can move around. The challenge is to keep the pole balanced vertically by moving the cart at the bottom. The problem is commonly used as a benchmark for testing control algorithms.
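The inverted pendulum setup can be sketched in a few lines of Python. This is a generic cart-pole simulator using the standard equations of motion, with assumed masses, pole length, and time step; it is not the researchers' environment, just an illustration of the problem itself:

```python
import math

# Minimal cart-pole (inverted pendulum) simulator.
# All physical constants below are assumptions for illustration.
GRAVITY = 9.8           # m/s^2
CART_MASS = 1.0         # kg (assumed)
POLE_MASS = 0.1         # kg (assumed)
POLE_HALF_LENGTH = 0.5  # m (assumed)
DT = 0.02               # integration time step, s

def step(x, x_dot, theta, theta_dot, force):
    """Advance the cart-pole state one time step under `force` (N) applied to the cart."""
    total_mass = CART_MASS + POLE_MASS
    sin_t, cos_t = math.sin(theta), math.cos(theta)
    temp = (force + POLE_MASS * POLE_HALF_LENGTH * theta_dot**2 * sin_t) / total_mass
    theta_acc = (GRAVITY * sin_t - cos_t * temp) / (
        POLE_HALF_LENGTH * (4.0 / 3.0 - POLE_MASS * cos_t**2 / total_mass))
    x_acc = temp - POLE_MASS * POLE_HALF_LENGTH * theta_acc * cos_t / total_mass
    # Simple Euler integration of the state
    return (x + DT * x_dot, x_dot + DT * x_acc,
            theta + DT * theta_dot, theta_dot + DT * theta_acc)

# With no corrective force, a slightly tilted pole falls over.
state = (0.0, 0.0, 0.05, 0.0)  # x, x_dot, theta (rad), theta_dot
max_tilt = 0.0
for _ in range(100):
    state = step(*state, force=0.0)
    max_tilt = max(max_tilt, abs(state[2]))
print(max_tilt > 0.5)  # True: uncontrolled, the pole falls well past its starting tilt
```

The controller's job is to choose `force` at each step so that the tilt never grows like this.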
The researchers thought that if they could tune the synapses between the wormbot's neurons in just the right way, the wormbot would be able to balance the pendulum. To find this optimal tuning, Hasani and his team used a machine learning technique called reinforcement learning. In reinforcement learning, an algorithmic agent learns by methodically interacting with its environment. In other words, the algorithm tries different things, keeping what works, in an attempt to maximize a reward signal—here, how long the pendulum stays up.
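The flavor of this learning-by-trial-and-error can be shown with a toy stand-in. This is not the authors' algorithm (their search operates on a biophysical model of the tap-withdrawal circuit); it is a sketch of random hill climbing over two controller "weights" on a simplified, linearized pendulum, with every constant below assumed for illustration:

```python
import random

DT, GRAVITY = 0.02, 9.8  # time step (s) and gravity (assumed units)

def rollout(w_angle, w_rate, steps=500):
    """Score a pair of weights: how many steps they keep a pendulum near upright."""
    theta, theta_dot = 0.05, 0.0  # small initial tilt (rad)
    for t in range(steps):
        torque = -(w_angle * theta + w_rate * theta_dot)   # linear feedback control
        theta_acc = GRAVITY * theta + torque               # linearized unstable dynamics
        theta_dot += DT * theta_acc
        theta += DT * theta_dot
        if abs(theta) > 0.5:  # fell over
            return t
    return steps

random.seed(0)
best, best_score = (0.0, 0.0), rollout(0.0, 0.0)
for _ in range(200):  # trial and error: perturb the weights, keep improvements
    cand = (best[0] + random.gauss(0, 2), best[1] + random.gauss(0, 2))
    score = rollout(*cand)
    if score > best_score:
        best, best_score = cand, score
print(best, best_score)
```

The accept-if-better loop plays the role that reinforcement learning plays in the paper: the environment's feedback (survival time) steers the search toward parameter settings that balance the pole.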
In general, the researchers found that this works about as well as similar machine learning approaches to the inverted pendulum problem, but with the key twist that their algorithm is constructed from hacked wormbot brains. One problem with the technique was that their worm tended to “drift” in one direction as it moved around trying to balance the pendulum. Consequently, it would in some cases run out of free space.
There is a really interesting dimension to this. Worm brains aren't meant for implementing difficult control theory problems; after all, this circuitry evolved to keep C. elegans from running into things. So we can easily imagine other ways of parameterizing simple neural circuits to do very unsimple things.