Neuromorphic Circuits Don't Just Simulate the Brain, They Outrun It

Researchers unveil a proof-of-concept 100 neuron hardware nanobrain.

Actually simulating a human brain is a long, long way off—much longer than many neuro-enthusiasts would like us to think. But that doesn't mean powerful, practical neuromorphic computing is quite so distant.

Potentially opening up a whole new avenue of neuro-sim research, scientists at the University of California, Santa Barbara published a paper this week describing the first successful attempt at building a ground-up neuromorphic circuit with the ability to complete practical tasks.

To emphasize: This is not an algorithm or software model, as is usual in neuro-computational research, but the bare circuit itself, built from conventional complementary metal-oxide-semiconductor (CMOS) technology and somewhat exotic components known as memristors. The result? A 100-neuron brain featuring limited but promising visual recognition abilities. The circuit is described in the current issue of Nature.


The difference between the UCSB approach and most of what we hear about brain simulation—the Blue Brain Project, for example—is that, arguably, this isn't even properly a simulation. It's hardware, not virtualization. The axons and synapses aren't algorithms; they're synthesized axons and synapses.

The difference may seem a bit philosophical, but what will ultimately work, what will succeed at re-creating human-like intelligence, is hardly settled. Hardware vs. software implementations of neuromorphic technology matter a great deal, particularly as we start to consider making these things do useful computational work.

"The problem of software approaches [to brain simulation] is that they are inherently slow and, if run [on a] conventional computer, would consume many orders of magnitude of energy more as compared to [a] human being performing similar tasks," Dmitri Strukov, the lead author behind the new paper, told me.

"The problem is that conventional computer architecture does not fit well [with the] distributed and low precision computations which are performed in neural networks," Strukov explained. "This typically serves as a motivation to use other computing platform[s] like [field-programmable gate arrays] and GPUs and ultimately create application specific circuits for artificial neural networks."

Here we have 100 neurons with the ability to visually classify three letters of the alphabet ("z," "v," and "n"), which is cool, but those are the first 100 neurons of a hardware brain that would need perhaps a hundred billion more to approximate a human one. All told, that means as many as one quadrillion synapses.
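To get a feel for the task itself, here is a minimal software sketch, not the hardware: a single-layer perceptron classifying small binary images of "z," "v," and "n." The 3x3 pixel patterns and training details below are illustrative stand-ins, not the paper's exact stimuli or training scheme.

```python
import numpy as np

# Illustrative 3x3 binary "images" of the three letters (hypothetical patterns)
PATTERNS = {
    "z": [1, 1, 1,
          0, 1, 0,
          1, 1, 1],
    "v": [1, 0, 1,
          1, 0, 1,
          0, 1, 0],
    "n": [1, 0, 1,
          1, 1, 1,
          1, 0, 1],
}

letters = list(PATTERNS)
X = np.array([PATTERNS[l] for l in letters], dtype=float)
X = np.hstack([X, np.ones((3, 1))])        # append a bias input: 9 pixels + 1
T = np.eye(3)                              # one-hot targets, one output per letter

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(10, 3))    # 10 inputs -> 3 output neurons

for _ in range(500):                       # simple delta-rule training
    Y = np.tanh(X @ W)                     # neuron activations
    W += 0.05 * X.T @ (T - Y)              # nudge weights toward the targets

pred = np.argmax(np.tanh(X @ W), axis=1)
print([letters[i] for i in pred])          # -> ['z', 'v', 'n']
```

In the hardware version, each of those weights lives in a physical memristor rather than in a variable, which is the whole point of the UCSB demonstration.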

Nonetheless, this is an important proof-of-concept of the potential of memristor components in future neuromorphic technologies, particularly hybrid neuromorphic networks—also known as CrossNets—which is what the UCSB brain is. Here, "wires play the parts of axons and dendrites and memristors mimic biological synapses," the paper explains.

Image: Sonia Fernandez

"The simple, two-terminal, transistor-free topology of metal-oxide memristors may enable CrossNets to achieve extremely high density—much higher than that of pure-CMOS neuromorphic networks and even higher than that of their biological prototypes," the UCSB researchers continue. CrossNet-based brains should be able to achieve 25 million cells per square centimeter, with each of those cells carrying some 10,000 synapses, a density higher than that of the human cerebral cortex, with comparable connectivity.
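The density claim above is easy to check with back-of-the-envelope arithmetic; the two input figures are the ones quoted in this article.

```python
# Quick arithmetic on the CrossNet density figures quoted above
cells_per_cm2 = 25_000_000        # 25 million cells per square centimeter
synapses_per_cell = 10_000        # roughly 10^4 synapses per cell

synapses_per_cm2 = cells_per_cm2 * synapses_per_cell
print(f"{synapses_per_cm2:.1e} synapses per square centimeter")  # 2.5e+11
```

That works out to some 250 billion synapses per square centimeter of chip.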

CrossNet brains are also theoretically faster than their biological counterparts, offering an intercell signal transfer delay of about 0.02 ms, which handily beats the roughly 10 ms delay of a biological brain, a speedup of some 500-fold.

The advantage of a CrossNet brain over other synthetic brains largely comes down to its analog abilities. Computer processors, pretty much all of them, rely on transistors, which act as switches storing information in discrete on/off states. A memristor, however, deals with information in a continuous state: the flow and direction of an ionic current (a stream of charged atoms, or ions, rather than the electrons of an ordinary electric current) leaves a sort of impression on the component, reflected in its changing resistance. So the memristor remembers not just on or off but a whole spectrum of states in between; imagine the fluctuating high-water line of a river or beach.

"There are [a] large number of memory states (conductances) that [a] memristor can be programmed to," Strukov explained, "essentially a continuum of states and not just two as in digital memories."

In a Nature commentary accompanying the Strukov group's paper, Robert Legenstein, a theoretical computer scientist studying computational complexity and the brain at Graz University of Technology, offers further explanation.

"Nearly all contemporary computational devices are based on a design known as the von Neumann architecture," Legenstein writes. "The Achilles heel of this incredibly successful approach is the separation of computation and memory: although data are manipulated in the central processing unit, they are stored in a separate random-access memory. Any operation therefore involves the transfer of data between these components. Known as the von Neumann bottleneck, this renders the computation inefficient."

"An alternative model is offered by the architecture of the brain," he continues, "in which computation and memory are highly intermingled. The 'program'—which includes previously observed data and memories—is stored in the strengths of synaptic connections directly adjacent to the neuronal processing units."

It turns out that memristors do an excellent job of mimicking these connections. Where algorithms based on conventional computer architectures struggle—visual tasks, speech recognition, and coordinating muscles and limbs, in particular—memristor-based networks should excel.

The next step, according to Strukov and his team, is integrating CrossNets with conventional semiconductors, which should let the devices take on more varied and complex tasks. One hundred neurons may not seem like much, but it's already enough to accomplish a task (image recognition) that conventional computer architectures handle only slowly and inefficiently. Now imagine managing a quadrillion more connections.

"In the future," Legenstein concludes, "laptops, mobile phones and robots could include ultra-low-power neuromorphic chips that process visual, auditory and other types of sensory information."