Neuroscientists Have a New Algorithm for Simplifying the Brain's Deep Complexity

The human mind is, after all, a statistics problem.
Image: rat brain cells/Gerry Shaw

Understanding the brain means far more than merely mapping it. To know is more than to just see: just as a map of a city gives only limited information about how that city works, mapping neurons and neural activity gives only limited information about how the brain functions. A new review paper published in the journal Nature Neuroscience, courtesy of a team based at Carnegie Mellon University and Columbia University, makes this argument while also describing an emerging class of machine learning algorithms, based on "dimensionality reduction," geared toward a deeper, more expansive view of the brain.

Conventional analytic methods tend to examine only a couple of neurons at a time, according to the paper's authors, rather than the interactions across the vast networks that actually underlie brain function. It's nice to imagine that such small-scale observations will simply cascade on their own toward full-brain functionality, which is more or less the basic premise of the Blue Brain Project and of neuro-optimism in general, but it's hardly that easy.

"One of the central tenets of neuroscience is that large numbers of neurons work together to give rise to brain function," said Carnegie Mellon's Byron M. Yu in a statement. "However, most standard analytical methods are appropriate for analyzing only one or two neurons at a time. To understand how large numbers of neurons interact, advanced statistical methods, such as dimensionality reduction, are needed to interpret these large-scale neural recordings."

Most standard analytical methods are appropriate for analyzing only one or two neurons at a time

There's nothing particularly magical about dimensionality reduction. It's a family of methods devised to parse situations governed by a very large number of variables (or dimensions): a sound way of dealing with complexity without oversimplifying it. Tasks like pattern or facial recognition are very difficult for computers precisely because of the number of variables involved (the high dimensionality). As more and more data points are fed into an algorithm, the processing power required climbs steeply, and so does the statistical noise. This, more or less, is the cutting edge of computing.

Dimensionality reduction algorithms operate under the premise that, underneath all of those dimensions, there is some simpler core process that can be recovered, one that still does a good enough job of describing the more complex whole. So, say that you, as a computer, are handed an image of a human face. A great many data points are needed to describe that face (imagine a map of 3D coordinates), and if the image rotates, all of those data points change with it; soon enough, you are swamped with information. If you started with, say, 100 data points and sampled the face 100 times per second, then after five seconds of rotation you-the-computer would have 50,000 pieces of information to contend with.

Dimensionality reduction recognizes that all of those pieces of information are really governed by just one thing: a single degree of freedom, the rotation angle. What looks like an insurmountable pile of data is secretly reducible to this one variable. That sounds obvious, but within the world of machine learning algorithms, not so much. The rotating face is a clear-cut case; out in the real world, hunting down the small subset of relevant variables is just that: hunting.
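To make that concrete, here is a minimal sketch of the rotating-object idea in Python. The 100 tracked points, the sampling scheme, and the use of scikit-learn's PCA are all illustrative assumptions, not anything taken from the paper itself:

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical stand-in for the rotating-face example above: 100 tracked
# points on a rigid 2D object, sampled 500 times as it rotates. Every
# coordinate is secretly driven by one hidden variable, the angle theta.
rng = np.random.default_rng(0)
points = rng.standard_normal((100, 2))     # 100 fixed points on the object
angles = np.linspace(0, 2 * np.pi, 500)    # one rotation angle per sample

frames = []
for theta in angles:
    rotation = np.array([[np.cos(theta), -np.sin(theta)],
                         [np.sin(theta),  np.cos(theta)]])
    frames.append((points @ rotation.T).ravel())   # flatten to 200 numbers
X = np.asarray(frames)                             # 500 samples x 200 dimensions

# PCA, the workhorse linear dimensionality-reduction method, finds that a
# couple of components explain essentially all of the apparent complexity.
pca = PCA(n_components=5).fit(X)
print(pca.explained_variance_ratio_.round(3))
```

A linear method like PCA reports two dominant components here rather than one, because circular motion shows up as paired sine and cosine coordinates; both, though, are functions of the single hidden angle.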

Maybe you can already see how this applies to the brain, a structure of daunting-to-say-the-least complexity. In particular, this sort of algorithm might help neuroscientists decode the activity underlying more "hidden" brain processes—those not involving sensory interaction with the outside world—where a small set of latent variables might be used to trace out the pathways of such otherwise cloaked thoughts. (For fans of linear algebra, the concept is a bit like an eigendecomposition: a few characteristic directions that capture the behavior of a much larger system.)
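In that spirit, here is a toy sketch of latent-variable recovery, again built on made-up assumptions rather than the authors' actual analysis: simulate a population of neurons driven by a low-dimensional hidden state, then ask factor analysis, one standard method in this family, to recover that state from the population activity alone.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

# Illustrative toy data: 80 simulated neurons whose shared activity is
# driven by a 2-dimensional hidden state plus independent noise.
rng = np.random.default_rng(1)
time = np.linspace(0, 1, 300)
hidden = np.stack([np.sin(2 * np.pi * time), time ** 2], axis=1)  # 300 x 2
mixing = rng.standard_normal((2, 80))          # how each neuron reads the state
activity = hidden @ mixing + 0.3 * rng.standard_normal((300, 80))

# Factor analysis separates the shared low-dimensional structure from
# per-neuron noise, yielding a trajectory of the hidden state over time.
fa = FactorAnalysis(n_components=2).fit(activity)
trajectory = fa.transform(activity)            # 300 x 2 recovered hidden path
print(trajectory.shape)                        # (300, 2)
```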

"Linear dimensionality reduction can be used for visualizing or exploring structure in data, denoising or compressing data, extracting meaningful feature spaces, and more," John P. Cunningham, the current paper's co-author, wrote earlier this year.

"One of the major goals of science is to explain complex phenomena in simple terms," said Cunningham in today's statement. "Traditionally, neuroscientists have sought to find simplicity with individual neurons. However, it is becoming increasingly recognized that neurons show varied features in their activity patterns that are difficult to explain by examining one neuron at a time. Dimensionality reduction provides us with a way to embrace single-neuron heterogeneity and seek simple explanations in terms of how neurons interact with each other."

Finally, understand that the basic idea here is taking one of the most complex structures imaginable and reducing it to a simpler, smaller description that nonetheless captures the vast complexity pretty well. That's something we can do. Ain't stats wonderful?