
Google’s DeepMind Is Teaching AI How to Think Like a Human

If we’re ever going to create general AI, we need to teach it to think like us.
Image: Wikimedia Commons

Last year, for the first time, an artificial intelligence called AlphaGo beat one of the world's top-ranked human players in a game of Go. The victory was both unprecedented and unexpected, given the immense complexity of the ancient Chinese board game. But while AlphaGo's win was certainly impressive, the AI, which has since beaten a number of other Go champions, is still considered "narrow" AI: a type of artificial intelligence that can outperform humans only in a very limited domain of tasks.

So even though it might be able to kick your ass at one of the most complicated board games in existence, you wouldn't exactly want to depend on AlphaGo for even the most mundane daily tasks, like making you a cup of tea or scheduling a tuneup for your car.

In contrast, the AI often depicted in science fiction is "general" artificial intelligence, meaning it has the same level and diversity of intelligence as a human. While we already have artificial intelligences that can do everything from diagnosing diseases to driving our Ubers, figuring out how to integrate all these narrow AIs into a general AI has proven challenging.

According to two new papers released last week, researchers at DeepMind, Alphabet's secretive AI subsidiary, are now laying the groundwork for general artificial intelligence. While they're not there yet, the initial results are promising: in some areas, the AI even managed to surpass human abilities.

The subject of both DeepMind papers (published here and here on arXiv) is relational reasoning, a critical cognitive faculty that allows humans to draw comparisons between distinct objects or ideas, such as whether one object is larger than another, or positioned to its left.

Humans put relational reasoning to use pretty much any time they try to solve a problem, but researchers haven't figured out how to endow AI with this deceptively simple ability.

Researchers at DeepMind took two different approaches. One trained a neural net, a type of AI architecture loosely modeled on the human brain, on CLEVR, a dataset of scenes built from simple, static 3D objects. The other tasked a neural net with understanding how 2D objects changed over time.

Image: DeepMind/arXiv

In CLEVR, the neural network is presented with a scene of simple objects, such as cubes, spheres, and cylinders. Researchers then pose relational questions to the AI in natural language, such as "Is the cube the same material as the cylinder?" Amazingly, the researchers reported that the neural net correctly answered relational questions in CLEVR 95.5 percent of the time, surpassing the human benchmark of 92.6 percent accuracy.
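
The key trick in this first paper is a module that scores every pair of objects in a scene, conditioned on the question, before reasoning over the sum of those pairwise scores. The sketch below is a rough, hypothetical PyTorch reconstruction of that pairwise idea, not DeepMind's code: the object features, question encoding, and layer sizes are all placeholder assumptions (in the paper, object features come from a convolutional network and the question from a recurrent encoder).

```python
import torch
import torch.nn as nn

class RelationNetwork(nn.Module):
    """Minimal relational module: score every ordered pair of objects
    with a shared MLP g, sum the scores, and reason over the sum with f."""

    def __init__(self, obj_dim, q_dim, hidden=256, out_dim=2):
        super().__init__()
        # g sees one (object_i, object_j, question) triple at a time
        self.g = nn.Sequential(
            nn.Linear(2 * obj_dim + q_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        # f maps the aggregated pairwise evidence to an answer
        self.f = nn.Sequential(
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, out_dim),
        )

    def forward(self, objects, question):
        # objects: (batch, n_objects, obj_dim); question: (batch, q_dim)
        b, n, d = objects.shape
        # Build every ordered pair (o_i, o_j), each tagged with the question
        o_i = objects.unsqueeze(2).expand(b, n, n, d)
        o_j = objects.unsqueeze(1).expand(b, n, n, d)
        q = question.unsqueeze(1).unsqueeze(1).expand(b, n, n, question.shape[-1])
        pairs = torch.cat([o_i, o_j, q], dim=-1)
        # Score all pairs, sum over the pair grid, then answer
        relations = self.g(pairs).sum(dim=(1, 2))
        return self.f(relations)

# Illustrative usage with random placeholder features
rn = RelationNetwork(obj_dim=32, q_dim=64)
objs = torch.randn(4, 6, 32)   # e.g. 6 objects per scene
q = torch.randn(4, 64)         # encoded question
logits = rn(objs, q)           # (4, 2), e.g. a yes/no answer
```

The summation is what makes the module relational: because every pair passes through the same small network and the results are pooled, the model is pushed to learn comparisons that hold regardless of which particular objects are involved.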

In the second test, DeepMind researchers created a neural net called the Visual Interaction Network (VIN), which they trained to predict the future states of objects in a video based on their past motion. To do this, the researchers first fed the VIN three sequential frames of video, which it used to produce a "state code": a list of vectors encoding the position and velocity of each object in the frame. The VIN was then fed a sequence of these state codes, which it combined to predict a state code for the next frame.
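
The heart of that prediction step is, again, pairwise: each object's next state depends on the summed influence of every other object on it. Here is an illustrative PyTorch sketch of that dynamics core, not the published VIN; the real model also includes a visual encoder that produces state codes from raw frame triplets, and the state layout and layer sizes below are assumptions.

```python
import torch
import torch.nn as nn

class StateCodePredictor(nn.Module):
    """Sketch of an interaction-style dynamics core: given per-object
    state vectors, predict each object's state in the next frame."""

    def __init__(self, state_dim=4, hidden=64):
        super().__init__()
        # Pairwise "effect" MLP, shared across all object pairs
        self.interact = nn.Sequential(
            nn.Linear(2 * state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden),
        )
        # Per-object update combining own state with summed effects
        self.update = nn.Sequential(
            nn.Linear(state_dim + hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, state_dim),
        )

    def forward(self, states):
        # states: (batch, n_objects, state_dim), e.g. [x, y, vx, vy]
        b, n, d = states.shape
        s_i = states.unsqueeze(2).expand(b, n, n, d)
        s_j = states.unsqueeze(1).expand(b, n, n, d)
        # Sum the effect of every other object on object i
        effects = self.interact(torch.cat([s_i, s_j], dim=-1)).sum(dim=2)
        # Residual update: next state = current state + learned delta
        return states + self.update(torch.cat([states, effects], dim=-1))
```

Feeding the predicted state code back in as input lets a model like this roll a scene forward many frames from just a few observed ones.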

To train the VIN, the researchers used five different types of physical systems in which 2D objects moved across "natural-image backgrounds" and interacted with a variety of forces. For example, in one physical system the researchers modeled objects that interacted with one another according to Newton's law of gravity. In another, the neural net was presented with a billiards game and had to predict the future positions of the balls.
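
For a sense of what that gravity training data might look like, here is a minimal NumPy sketch, purely illustrative rather than DeepMind's setup: it rolls out a small n-body system under Newtonian gravity with simple Euler integration, producing the kind of ground-truth trajectories a predictive model would be trained to match. The constants and softening term are arbitrary choices.

```python
import numpy as np

def gravity_step(pos, vel, mass, g=1.0, dt=0.01):
    """One Euler step of an n-body system under Newtonian gravity.
    pos, vel: (n, 2) arrays; mass: (n,) array."""
    n = len(mass)
    acc = np.zeros_like(pos)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            r = pos[j] - pos[i]
            dist = np.linalg.norm(r) + 1e-6  # softening avoids blow-ups
            # Newton: acceleration on i is g * m_j / dist^2, toward j
            acc[i] += g * mass[j] * r / dist**3
    return pos + vel * dt, vel + acc * dt

# Roll out a short ground-truth trajectory for three bodies
pos = np.random.randn(3, 2)
vel = np.random.randn(3, 2) * 0.1
mass = np.ones(3)
trajectory = []
for _ in range(100):
    pos, vel = gravity_step(pos, vel, mass)
    trajectory.append(pos.copy())
```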

According to the researchers, the VIN performed remarkably well, outperforming state-of-the-art video prediction models at the task.

The work is an important step toward general AI, but there's still a lot to be done before artificial intelligences can take over the world. As Harvard computational neuroscientist Sam Gershman, who recently co-wrote a paper on what is needed to achieve general AI, told MIT Technology Review, "super-human performance on any particular machine learning task does not imply super-human intelligence."

Not yet, anyway.
