Neuroscientists Have a New Computational Model for Memory

An explanation of how neurons harness complexity to implement both long- and short-term memory.

Neuroscientists at Columbia University have devised a new computational model for memory formation in the brain. The model, which is described in a paper published Monday in Nature Neuroscience, is another attempt at solving an old problem in understanding memory: How do neurons remain pliable enough to form new memories while being rigid enough to protect the old ones from erasure?

In neuroscience, this is known as the stability–plasticity dilemma. In the words of Columbia researchers Marcus Benna and Stefano Fusi: "Synapses that are highly plastic are good at storing new memories but poor at retaining old ones. Less plastic synapses are good at preserving memories but poor at storing new ones."

The brain's solution is, broadly, complexity.

Memories are thought to be stored among the synapses that serve as conduits for signals traveling from neuron to neuron. In early models, each of these connections was imagined to be governed by a single dial setting the relative strength of a given synapse. Crank one of these dials way up and the signal passing between that pair of neurons is amplified. Memories are then encoded as maps of dial settings, patterns of varying synaptic strengths.
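
To make the single-dial picture concrete, here's a minimal sketch (our own illustration, not code from the paper; the ±1 nudges and the correlation readout are simplifying assumptions). Each memory turns every dial up or down a notch, and recall is measured as the overlap between a memory's pattern and the final dial settings:

```python
import numpy as np

rng = np.random.default_rng(0)
n_synapses = 10_000
n_memories = 500

# Each memory asks every synaptic "dial" to turn up or down by one notch.
memories = rng.choice([-1.0, 1.0], size=(n_memories, n_synapses))

# Unbounded dials: storing a memory simply adds its pattern to the weights.
weights = memories.sum(axis=0)

# Recall strength = overlap between a memory's pattern and the final dials.
signals = memories @ weights / n_synapses               # ~1 for every memory
noise = np.sqrt(n_memories - 1) / np.sqrt(n_synapses)   # interference floor
print(f"mean recall signal: {signals.mean():.2f}, noise floor: {noise:.3f}")
```

All 500 memories keep a recall signal near 1, comfortably above the roughly 0.22 interference floor, precisely because the dials were free to drift arbitrarily far.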

This model allows for an enormous memory capacity, but only if the synaptic dials can be twisted indefinitely far. Impose a limit on how high or low a dial can be set, and the model's capacity collapses. "Theoretical analyses revealed that ignoring the limits on synaptic strengths imposed on any real biological system, which had appeared to be a harmless assumption in the calculations, was actually a serious flaw," Fusi and Benna write.
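
A toy simulation (again ours, not the paper's analysis) shows what goes wrong. Clamp each dial to a hard range and keep storing new memories: the dials saturate, and the trace of the oldest memory is squeezed out.

```python
import numpy as np

rng = np.random.default_rng(0)
n_synapses = 10_000
bound = 5.0   # hard limit on how far any dial can be turned

probe = rng.choice([-1.0, 1.0], size=n_synapses)   # the first memory stored
weights = probe.copy()

# Store 200 more memories, clipping the dials to the allowed range each time.
for age in range(1, 201):
    update = rng.choice([-1.0, 1.0], size=n_synapses)
    weights = np.clip(weights + update, -bound, bound)
    if age in (1, 25, 50, 100, 200):
        print(f"after {age:>3} new memories, old trace = {probe @ weights / n_synapses:.3f}")
```

Widening the bound slows the fade but never stops it: with bounded dials, old traces decay away roughly exponentially instead of persisting.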

Biology has limits, after all.

It didn't take long for a different, more complex picture to emerge. Rather than just a single dial, what if synapses could be imagined as arrays of multiple dials? It would be as if the same synapses could simultaneously serve as both RAM (fast, volatile memory) and hard-drive memory (slow, long-term). "This form of synaptic complexity allows extended memory lifetimes without sacrificing the initial memory strength," the paper explains, "accounting for our remarkable ability to remember for long times a large number of details even when memories are learned in one shot."

Unfortunately, the multiple-dials scheme (wherein "dials" are possibly represented by clusters of molecules) doesn't work so well out of the box. Things change, however, when we imagine that the dials, which, again, govern the strength of connections between neurons, influence each other. If I were to crank up one dial, boosting a synapse's short-term strength, that increase would gradually turn up the other dials, laying down a long-term memory. Over time, the dials all equalize. To lay a metaphor on top of a metaphor, imagine the dials instead as a row of beakers full of liquid, connected at their bases. Add some liquid to one beaker and, eventually, the levels all normalize because they're ultimately connected.
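
The beaker picture translates almost directly into a few lines of simulation. In the sketch below (the doubling beaker sizes and rapidly narrowing tubes are illustrative constants in the spirit of the model, not the paper's exact parameters), a plasticity event pours liquid into the first beaker, and the disturbance seeps into ever larger, more sluggish reservoirs:

```python
import numpy as np

n_beakers = 6
levels = np.zeros(n_beakers)                      # liquid level in each beaker
sizes = 2.0 ** np.arange(n_beakers)               # cross-sections double down the chain
tubes = 2.0 ** -(2 * np.arange(n_beakers - 1))    # connecting tubes narrow rapidly

levels[0] = 1.0   # a plasticity event: pour liquid into the first beaker
dt = 0.1
for step in range(1, 10001):
    flow = tubes * (levels[:-1] - levels[1:])     # flux through each tube
    levels[:-1] -= dt * flow / sizes[:-1]         # liquid leaves the upstream beaker...
    levels[1:] += dt * flow / sizes[1:]           # ...and enters the downstream one
    if step in (10, 100, 1000, 10000):
        print(f"t = {step * dt:>6.1f}  levels = {np.round(levels, 3)}")
```

The first beaker, standing in for the visible synaptic strength, drops quickly at first and then ever more slowly as the liquid soaks into the big downstream reservoirs: fast initial learning combined with a long memory tail.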

This coupling is what allows for both the sharp relief of short-term memories and the incredible staying power of long-term ones. In the model, memory lifetimes scale almost linearly with the number of synapses, and the resulting storage capacity looks a lot like real-world memory in the human brain.

At the end of the day, we're still just talking about a mathematical model, but there are points of agreement with biology. As the paper notes, experiments have shown that synaptic plasticity is not a unitary process; it unfolds in multiple phases involving multiple molecules. Mapping the model's "dials" onto those molecular processes will remain difficult, however, if only because the processes themselves are still poorly understood.