Neuromorphic Chips Offer Neural Networks That Actually Work Like the Brain

Image recognition in a snap.

The thing about neural networks is that they don't really behave much like the brain at all. It's a thin metaphor that managed to catch on, and now whenever I say I'm training a neural network, people assume I'm into some real mystical shit. There's a sort-of comparison there, in that the "neurons" of a neural network are nodes where input signals are mapped to output signals, but it's mostly a superficial likeness. It could be worse; at least it's not the "god algorithm" or anything.

Engineers at the University of Michigan are onto something rather more brainlike, however, with help from a peculiar electrical component known as a memristor. They've developed a new "sparse coding" algorithm that uses grids of memristors to approximate the pattern-recognition abilities of mammalian brains. The result, described in the current issue of Nature Nanotechnology, is potentially much faster image processing, or faster processing of any other very large dataset that currently demands a lot of computing resources.

The hardware prototype described in the paper consists of a 32-by-32 array of memristors. A memristor is basically a normal resistor (an electrical component that limits current) with a memory of sorts. Its resistance, or the amount of current it blocks, changes based on the voltages that have been applied to it in the past. (A normal resistor is going to stay about the same regardless of this voltage history.)
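
If "resistance with a memory" sounds abstract, here's a toy Python sketch of the behavior. The class name, constants, and update rule are mine, made up purely for illustration; they are not the device physics from the paper.

```python
# Toy model of a memristor's defining property: its resistance depends on the
# history of applied voltage, not just the present one. The state equation and
# constants here are illustrative, not the device physics from the paper.

class ToyMemristor:
    def __init__(self, r_on=100.0, r_off=16000.0):
        self.r_on = r_on      # resistance when fully "written" (ohms)
        self.r_off = r_off    # resistance when fully "erased" (ohms)
        self.state = 0.5      # internal state variable in [0, 1]

    def apply_voltage(self, volts, dt=1e-6, rate=1e4):
        # Positive pulses push the state up, negative pulses push it down;
        # the device "remembers" the accumulated effect of past pulses.
        self.state = min(1.0, max(0.0, self.state + rate * volts * dt))

    @property
    def resistance(self):
        # Interpolate between the two extremes based on the stored state.
        return self.r_off + self.state * (self.r_on - self.r_off)


m = ToyMemristor()
print(m.resistance)          # some intermediate value
m.apply_voltage(1.0)         # a write pulse...
print(m.resistance)          # ...lowers the resistance, and it stays lowered
```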

The upshot is that memristors offer a way of both processing and storing data at the same time, whereas a conventional architecture means fetching data to be fed into processing circuits from some (relatively) remote location in the computer. That's potentially a really big deal as some of our biggest problems in computing right now have to do with parallelism—building machines that can do lots of computations at the same time while dealing with the lag inherent in even the fastest memory technologies.
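
To get a rough feel for why that matters, here's a minimal NumPy sketch of the compute-in-memory idea behind a crossbar: once conductances are stored at the crosspoints, Ohm's law plus Kirchhoff's current law amount to a matrix-vector multiply in a single analog step. The sizes and values below are placeholders, not the paper's actual circuit.

```python
import numpy as np

# In a crossbar, every crosspoint memristor stores a conductance G[i, j].
# Driving the rows with voltages v and reading the current on each column
# computes G.T @ v in one analog step: the array that stores the matrix is
# also the thing that multiplies by it. All values here are arbitrary.

rng = np.random.default_rng(0)

G = rng.uniform(1e-6, 1e-3, size=(32, 32))   # crosspoint conductances (siemens), 32x32 like the prototype
v = rng.uniform(0.0, 0.3, size=32)           # read voltages applied to the rows (volts)

column_currents = G.T @ v                    # Kirchhoff's current law: each column sums its crosspoint currents
print(column_currents.shape)                 # (32,) -- one multiply-accumulate result per column, all at once
```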

What the memristor architecture implemented here opens up is sparse coding. Generally, this is what allows animals like humans to make nigh-instantaneous decisions about exactly what it is we happen to be looking at. Code in this context refers not to computer code but to the specific patterns of activity found in the brain corresponding to specific objects. A property of this code is the fraction of neurons that are strongly activated given a particular stimulus (like a recognizable object). A sparse code is one where this fraction is really small: only a relative handful of neurons light up, but the brain is still able to make a solid determination.
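
To make "sparse code" concrete, here's a generic software sketch using a standard iterative soft-thresholding loop (ISTA). It's an illustration of the concept, not the computation run on the memristor array, and the dictionary and parameters are random placeholders; the point is just that only a small fraction of the coefficients, the stand-ins for neurons, end up active.

```python
import numpy as np

# Generic sparse coding: explain an input x as a combination of just a few
# columns ("atoms") of a dictionary D. Solved here in software with plain
# iterative soft-thresholding (ISTA) purely to illustrate the idea; the
# dictionary and parameters are random placeholders, not the paper's setup.

rng = np.random.default_rng(1)

n_features, n_atoms = 64, 256
D = rng.normal(size=(n_features, n_atoms))
D /= np.linalg.norm(D, axis=0)                          # unit-norm atoms, the "neurons"
true_a = rng.normal(size=n_atoms) * (rng.random(n_atoms) < 0.05)
x = D @ true_a                                          # an input built from roughly 5% of the atoms

lam = 0.1                                               # sparsity penalty
step = 1.0 / np.linalg.norm(D, 2) ** 2                  # safe gradient step size
a = np.zeros(n_atoms)                                   # coefficients = "neuron activations"

for _ in range(200):
    a = a - step * (D.T @ (D @ a - x))                  # gradient step on reconstruction error
    a = np.sign(a) * np.maximum(np.abs(a) - lam * step, 0.0)   # soft threshold pushes most of them to zero

print(f"fraction of active 'neurons': {np.mean(np.abs(a) > 1e-3):.1%}")
```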

"When we take a look at a chair we will recognize it because its characteristics correspond to our stored mental picture of a chair," study co-author Wei Lu offers in a statement. "Although not all chairs are the same and some may differ from a mental prototype that serves as a standard, each chair retains some of the key characteristics necessary for easy recognition. Basically, the object is correctly recognized the moment it is properly classified—when 'stored' in the appropriate category in our heads."

Exploiting sparse codes offers a way of short-circuiting conventional deep learning algorithms. Maybe we don't need to feed neural networks with huge datasets for every little visual learning task. Instead, the memristor circuit is able to take in data as it comes and start building up its internal codes right away rather than waiting around for the whole big dataset. And that is actually more like the brain, which fortunately for us is all about clever short-circuits.
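
Here's a toy sketch of that "learn as the data arrives" flavor in software. The top-k encoder and the per-sample update rule are generic placeholders of my own, not the learning rule actually run on the hardware.

```python
import numpy as np

# A toy "online" flavor: the stored features get nudged one incoming sample
# at a time instead of being fit to a big dataset up front. The top-k encoder
# and the update rule below are generic placeholders, not the learning rule
# actually run on the memristor array.

rng = np.random.default_rng(2)
n_features, n_atoms, k, eta = 64, 32, 4, 0.05
D = rng.normal(size=(n_features, n_atoms))
D /= np.linalg.norm(D, axis=0)              # unit-norm feature columns

for _ in range(500):                        # a stream of inputs arriving one at a time
    x = rng.normal(size=n_features)         # stand-in for an incoming image patch
    scores = D.T @ x
    a = np.zeros(n_atoms)
    top = np.argsort(np.abs(scores))[-k:]   # keep only the k most responsive features
    a[top] = scores[top]                    # a crude sparse code for this one sample
    D += eta * np.outer(x - D @ a, a)       # nudge the used features toward the residual
    D /= np.linalg.norm(D, axis=0)          # renormalize so columns stay comparable
```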

"We need our next-generation electronics to be able to quickly process complex data in a dynamic environment," Lu says. "You can't just write a program to do that. Sometimes you don't even have a pre-defined task. To make our systems smarter, we need to find ways for them to process a lot of data more efficiently. Our approach to accomplish that is inspired by neuroscience."