DeepMind Invented a Computer That Learns How to Use Its Own Memory

Like, whoa man.

DeepMind, Alphabet's artificial intelligence development wing, published its third research paper in Nature on Wednesday, and it's a doozy: the team invented a new kind of AI that actually learns how to use its own memory. They call it a "Differentiable Neural Computer," or DNC for short.

But what does it even mean for a computer to "learn" how to use its memory banks, and why should anyone care? Well, for one, it could help AI to become more powerful and useful than ever before—say, to help you navigate a complex commute in a new city with minimal hassle.

The first thing to know is that deep learning, a highly advanced form of machine learning, is made up of a "neural network"—an interconnected network of "nodes" that all run semi-random computations on input data. Neural networks keep doing this over and over, rearranging themselves along the way, until they can reliably produce an accurate result. This is called training.
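That run-compare-adjust cycle can be sketched in a few lines of Python. This toy (a single weight learning the made-up task y = 2x) is nowhere near a real deep network, but the loop has the same shape: compute an output, measure how wrong it is, and nudge the settings to do better next time.

```python
import random

# Toy illustration, not real deep learning code: one "node" learns the
# mapping y = 2*x by nudging its weight whenever its guess is off.
def train(samples, steps=1000, lr=0.01):
    weight = random.random()          # start from a semi-random setting
    for _ in range(steps):
        x, target = random.choice(samples)
        guess = weight * x            # run a computation on the input
        error = guess - target        # compare against the desired output
        weight -= lr * error * x      # rearrange (adjust) the weight
    return weight

data = [(x, 2 * x) for x in range(1, 6)]
learned = train(data)                 # ends up very close to 2.0
```

Real networks do this with millions of weights adjusted together, but "training" means exactly this kind of repetition.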


So far, this approach has had some incredible successes. Deep learning was the technique that allowed DeepMind's AlphaGo computer to defeat world Go champion Lee Sedol at his own game.

But a major drawback is that neural networks can't learn how to do a second task without rewriting themselves and forgetting how to do the first one. This is called "catastrophic forgetting." The computer you're using to read this article doesn't have that problem, because it has an external memory that it can write, rewrite, and recall from.

DeepMind's DNC bridges this gap. Basically, the team gave a neural network an external memory unit, and the network taught itself how to use that memory from scratch, through the same trial-and-error process that regular neural networks use. The DeepMind team put it like this in a blog post on their site:

"When a DNC produces an answer, we compare the answer to a desired correct answer," the post states. "Over time, the controller learns to produce answers that are closer and closer to the correct answer. In the process, it figures out how to use its memory."

The results are impressive. After the team fed the entire London Underground network into one of these DNCs, the computer was able to answer complex questions that required a bit of what we might describe as deductive reasoning.

For example, here's one question the DNC could answer: "Starting at Bond street, and taking the Central line in a direction one stop, the Circle line in a direction for four stops, and the Jubilee line in a direction for two stops, at what stop do you wind up?"
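For comparison, here's what answering that kind of question looks like when a programmer hand-codes it against a toy network (the station names and lines below are invented, not real Tube data). The notable thing about the DNC is that nobody wrote this lookup logic for it; it learned the equivalent behavior from training.

```python
# Made-up toy transit map: each line is an ordered list of stations,
# and "one stop in a direction" means moving one index along the list.
lines = {
    "Red":  ["A", "B", "C", "D", "E"],
    "Blue": ["F", "C", "G", "H"],
}

def ride(station, line, stops):
    """Move `stops` stations along `line` (negative = other direction)."""
    route = lines[line]
    return route[route.index(station) + stops]

# Start at "B", take the Red line one stop, then the Blue line two stops.
here = ride("B", "Red", 1)      # B -> C (shared by both lines)
here = ride(here, "Blue", 2)    # C -> H
```

Chaining lookups like this is trivial when the graph and the algorithm are given; the DNC had to discover how to store and traverse the map on its own.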

Just try asking Siri something that complicated. Chances are it'll end up telling you when the next Bond movie is slated for release.

The cherry on top of this kind of powerful problem-solving, the DeepMind team stated, is that DNCs can store learned facts and techniques and then call upon them when needed.

As with any work on AI at this early stage, it's just one step towards machines that can be useful in our daily lives. But it's a hell of a step.
