
RIP David Rumelhart, Godfather of Connectionism

The cognitive scientist David E. Rumelhart, who sadly died last week (http://www.nytimes.com/2011/03/19/health/19rumelhart.html) of an Alzheimer's-like disease, put forward some ground-breaking ideas about cognition in the '80s.

When you think of your friend, you don't just think of your friend's face. You think of their name, their voice, their stature, or perhaps a recent anecdote about that time last week you and your friend got drunk and faked British accents in a Toledo dive bar.

The neuroscience behind the seemingly simple mental phenomenon of connecting and organizing memories has long baffled – and still baffles – cognitive scientists. What is going on in the brain when we categorize things we learn? How is it that my friend Kate is ingrained in my memory in so many disparate ways – from her hair color to her annoying habits – yet still remains a single psychological concept, "Kate"?


Neuroscience is young, and it's still working on questions as fundamental as the molecular biology behind the communication between two neurons, let alone the roughly 80 billion neurons in the human brain. So a question like "How do we learn to associate memories?" is like asking a doctor how Keith Richards somehow didn't die in a pool of vomit and pharmaceuticals 40 years ago. We just don't know the answer.

Of course, there are some well-trod theories concerning these major psychological mysteries, and the ground-breaking ideas Rumelhart put forward in the '80s are among the best known.

Rumelhart and his colleague Jay McClelland are the godfathers of "Connectionism," an approach to cognitive science that characterizes learning and memory through the discrete interactions between specific nodes of vast neural networks. Wait, what the hell does that mean?

In Rumelhart's model of learning, bits of information from the environment are each represented by little units. Let's go back to our friend Kate: her eye color, unit one; her height, unit two; her accent, unit three. While these "units" may receive information from different sensory pathways (you "see" her eye color, you "hear" her accent, etc.), all the bits mingle with each other in that grand ballroom of the mind – our memory. Like the intersecting strands of a spider web, the units interconnect.

But the key is that in Rumelhart's neural networks, the strength of these connections varies with experience. The more we look at our friend Kate, the stronger the connection between "Kate" and "blue eyes"; the more we talk to our friend Kate, the stronger the connection between "Kate" and "Southern accent"; and so on. As certain connections are strengthened, the memory becomes sharper, until Kate is a fully formed category in our mind.
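The strengthen-with-use idea can be sketched in a few lines of Python. This is a toy illustration, not Rumelhart's actual model: the unit names and the simple saturating update rule are assumptions for the sake of the example, in the spirit of Hebbian "fire together, wire together" learning.

```python
# Toy sketch (not Rumelhart's model): connections between "Kate" and her
# feature units, with weights that strengthen each time two units are
# active together, saturating toward 1.0 with repeated exposure.

weights = {("Kate", "blue eyes"): 0.0, ("Kate", "Southern accent"): 0.0}
LEARNING_RATE = 0.1

def co_activate(unit_a, unit_b):
    """Strengthen the connection between two units that fire together."""
    key = (unit_a, unit_b)
    # Move the weight a fraction of the way toward 1.0, so exposure
    # saturates rather than growing without bound.
    weights[key] += LEARNING_RATE * (1.0 - weights[key])

# Seeing Kate five times strengthens "Kate" <-> "blue eyes" ...
for _ in range(5):
    co_activate("Kate", "blue eyes")
# ... while a single conversation leaves "Kate" <-> "Southern accent" weaker.
co_activate("Kate", "Southern accent")

print(weights[("Kate", "blue eyes")] > weights[("Kate", "Southern accent")])  # True
```

The point of the sketch is only that no single unit "is" Kate; the category emerges from the pattern of connection strengths, which differ because experience differed.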

And as the "Kate" network connects to other such networks (maybe the "Fred" network, or the "friends from college" network), those connection strengths also change over time, and voilà: we have abstract learning.

OK, OK, sounds a little too easy. Admittedly, it is a bit too easy – but the fundamental tenets of the theory have been tested and yield some interesting results. Rumelhart's neural networks are a go-to for researchers working in artificial intelligence (Rumelhart himself showed how they worked by building computer simulations), especially those designing robots that learn and adapt independent of human commands. NASA, for example, has used artificial neural networks to program the Mars rover so it can learn and adapt to unknown terrain on its own [go here for a nice interactive description of Rumelhart's networks].
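For a flavor of what such a simulation looks like, here is a minimal sketch (an assumed toy example, not one of Rumelhart's actual programs) of a single unit learning logical AND purely by nudging its connection weights from examples, with no rule for AND coded in anywhere:

```python
# A single unit learns an input-output mapping from examples alone,
# by adjusting each weight in proportion to its input and the error
# (a perceptron-style "delta rule" update).

examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]  # logical AND

w = [0.0, 0.0]   # connection weights
b = 0.0          # bias (threshold)
lr = 0.1         # learning rate

def predict(x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# Repeatedly sweep the examples, nudging weights toward fewer errors.
for _ in range(20):
    for x, target in examples:
        error = target - predict(x)
        w[0] += lr * error * x[0]
        w[1] += lr * error * x[1]
        b += lr * error

print([predict(x) for x, _ in examples])  # [0, 0, 0, 1]
```

The knowledge of "AND" ends up stored nowhere in particular; it lives in the final pattern of weights, which is the connectionist claim in miniature.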

The main rival of connectionism, which certainly deserves as much attention, is computationalism, which focuses on “learning rules” rather than plastic networks. These ideas are championed by scientists like Steven Pinker, who argues that certain learning algorithms are innate, implying a brain that learns things it is genetically coded to learn rather than acting as the all-absorbing sponge of a connectionist neural network.

Rumelhart was 68. He is survived by two children and four grandchildren, and has a prize named after him for "contributions to the theoretical foundations of human cognition."