After conquering several dozen 2D video games, Google’s artificial intelligence arm, DeepMind, has added another dimension. The company’s researchers have shown in a new paper published Friday that its algorithms are now capable of navigating 3D virtual spaces—specifically, a maze game and a simple racing game—with impressive accuracy.
The company’s technology “succeeds on a wide variety of continuous motor control problems as well as on a new task involving finding rewards in random 3D mazes using a visual input,” the researchers wrote in the paper. The work builds on a paper published last year showing that DeepMind’s programs could play 49 different Atari games; in 23 of them, they beat professional human players.
DeepMind’s AI plays video games as though it were a human, with no access to the game’s internal specs.
“The only information they [the algorithm] get is the pixels and the game score and the goal they’ve been told is to maximize their score. But apart from that they have no idea what kind of game they’re playing, what their controls do, or what they’re controlling in the game,” Demis Hassabis, a co-founder of DeepMind, told Nature in a video.
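The setup Hassabis describes, with only observations coming in and the score as feedback, can be sketched as a bare-bones reinforcement-learning loop. The toy corridor "game" below is an invented example, far simpler than anything in DeepMind's paper, but the agent likewise learns only from the states it sees and the score it earns:

```python
import random

# A toy stand-in for a game: a five-cell corridor. The agent starts at
# cell 0, and the score only increases when it reaches cell 4. The
# environment, states, and rewards here are illustrative inventions,
# not anything from DeepMind's paper.
N_CELLS, GOAL = 5, 4
ACTIONS = (-1, +1)  # move left, move right

def step(state, action):
    next_state = min(max(state + action, 0), GOAL)
    reward = 1.0 if next_state == GOAL else 0.0
    return next_state, reward, next_state == GOAL

# Tabular Q-learning: the agent is told nothing about the game except
# the state it observes and the reward (score) it receives.
Q = {(s, a): 0.0 for s in range(N_CELLS) for a in ACTIONS}
alpha, gamma = 0.5, 0.9  # learning rate, discount factor

random.seed(0)
for episode in range(200):
    state, done = 0, False
    while not done:
        action = random.choice(ACTIONS)  # explore at random (off-policy)
        next_state, reward, done = step(state, action)
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = next_state

# The learned greedy policy: always move right, toward the reward.
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(GOAL)}
print(policy)
```

DeepMind's systems replace this lookup table with a deep neural network so that raw pixels, rather than a handful of cells, can serve as the state.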
DeepMind’s artificial intelligence technology relies on a “deep neural network” made up of layers of interconnected units, referred to as nodes, which sort through sensory information. Rather than leaving the data jumbled together, the network organizes it into meaningful patterns, loosely analogous to how the human brain processes what it senses.
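As a rough illustration of those layers of nodes, here is a minimal two-layer network in Python. The sizes and the random "frame" are made up for illustration; this is a sketch of the idea, not DeepMind's architecture:

```python
import numpy as np

# An illustrative "layers of nodes" network. The input vector plays the
# role of raw pixels; the output plays the role of scores for possible
# game actions. All sizes and weights here are arbitrary.
rng = np.random.default_rng(0)

def relu(x):
    # A common node activation: pass positive signals, drop negative ones.
    return np.maximum(0.0, x)

# 16 "pixel" inputs -> 8 hidden nodes -> 4 action scores.
W1 = rng.normal(size=(16, 8)) * 0.1
W2 = rng.normal(size=(8, 4)) * 0.1

def forward(pixels):
    hidden = relu(pixels @ W1)  # first layer detects simple patterns
    return hidden @ W2          # second layer combines them into action scores

frame = rng.random(16)          # a stand-in for one game frame
scores = forward(frame)
print(scores.shape)             # one score per possible action
```

Training consists of nudging the weight matrices so that the scores better predict which actions lead to reward, which is what lets the same network improve with experience.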
The most interesting feature of this kind of technology is its ability to learn and improve over time. Neural networks are the reason that Apple’s Siri improves the more you talk to her, for example.
A video showing DeepMind’s AI playing a simple racing game.
It might not be obvious why DeepMind’s ability to navigate video games is exciting, since computers have been beating human champions at games like chess for decades.
The difference is that those systems, including IBM’s Deep Blue, weren’t able to learn; they relied on large, fixed databases of knowledge. 3D games, in which players must navigate large virtual spaces, also involve far more data than a board game like chess.
Powerful AI capable of “sight” could have important implications down the line for things like driverless cars, a project Google has expressed interest in. Technology of this kind could also help analyze video footage, something that is already happening.
DeepMind’s technology might also have big implications for jobs, since it could automate a whole host of them in the future, from stock trading to weather forecasting. The company has formed partnerships with satellite operators and financial institutions “to see whether his [Hassabis’] A.I. could eventually 'play' their data sets, perhaps learning to make weather predictions or trade oil futures,” reported Nicola Twilley in The New Yorker.
There’s no reason for alarm yet, however. DeepMind’s algorithms are certainly powerful, but they have yet to attain the kind of intelligence that even young children are capable of. For now, only humans can beat very complex 3D games, like Grand Theft Auto. But if games like Go are more to your fancy, DeepMind might just be catching up.