AI

Watch a Computer Learn to Play ‘Doom’ Inside a Dream

Researchers got deep learning models to train inside their own hallucinated ideas of the world.

Jordan Pearson

Image: id Software

The ability to practice real-world skills inside a dream and then use those skills in waking life is part bong water fantasy and part scientific enigma. Dr. Daniel Erlacher at the University of Bern in Switzerland, for example, has conducted numerous studies on the effects of practicing activities like squatting and dart throwing in a lucid dreaming state—the “waking dream” phenomenon—and then trying them in real life.

The research for humans continues, but computers are definitely able to dream up their own version of the world and learn skills inside it. Case in point: Researchers David Ha (of Google Brain, the search giant’s machine learning wing) and Jürgen Schmidhuber were able to get a machine to “hallucinate,” as they put it, its own idea of what the 1993 video game Doom looks like. Then, they got a virtual agent to play its own dream version of Doom to learn how to play the real thing.

Read More: Video Games Are So Realistic That They Can Teach AI What the World Looks Like

You can see what this looks like on a webpage the researchers put up on Tuesday. If you need a description, imagine someone filming a game of Doom off of a CRT computer monitor with a glob of Vaseline smeared on the lens.

The machine learning setup for this task had three components. First, a model compresses a snapshot of the game environment into a smaller representation (like a low-bitrate MP3 or a deep-fried JPEG). Next, another model takes that compressed information and outputs a probability distribution over what the next frame might look like. These two models, taken together, make up the virtual agent’s abstract view of the “world.” Finally, a controller model, which has access to the game’s reward functions, uses the previous model’s predictions to choose what to do next in the game.
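The three-component pipeline can be sketched in miniature. This is a toy illustration of the data flow only, not the researchers’ actual models: the encoder, predictor, and controller below are hypothetical stand-ins (random linear maps and a fixed Gaussian) for the learned networks, and all sizes and names are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes, not from the paper
LATENT_DIM = 32
N_ACTIONS = 3

# Component 1: compress a raw frame into a small latent code
# (a random projection standing in for a learned compression model)
W_enc = rng.normal(size=(LATENT_DIM, 64 * 64))

def encode(frame):
    """Compress a 64x64 grayscale frame into a latent vector."""
    return W_enc @ frame.ravel()

# Component 2: predict a distribution over the next latent state
# (a toy Gaussian standing in for the learned prediction model)
def predict_next(z, action):
    """Return mean and std of a Gaussian over the next latent state."""
    mean = 0.9 * z + 0.1 * action
    std = np.full(LATENT_DIM, 0.1)
    return mean, std

# Component 3: a controller mapping (current latent, prediction) to an action
W_ctrl = rng.normal(size=(N_ACTIONS, 2 * LATENT_DIM))

def act(z, z_pred_mean):
    """Pick the highest-scoring action given the state and the prediction."""
    scores = W_ctrl @ np.concatenate([z, z_pred_mean])
    return int(np.argmax(scores))

# One step of the pipeline on a fake frame
frame = rng.random((64, 64))
z = encode(frame)
z_mean, z_std = predict_next(z, action=1.0)
action = act(z, z_mean)
```

In the real system the reward signal from the game is what trains the controller; here the point is only how the compressed state and the next-frame prediction feed into the action choice.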

A machine dodging some dreamed-up fireballs. Image: WorldModels

All of these machine learning models, plugged into one another, allow a virtual agent to perceive a game world and play within it properly. But what if you could take a machine down inside of its own dream?

To do just this, Ha and Schmidhuber got their prediction model to sample its own predictions of the game state and feed them back in as the input for further predictions, creating an entirely imagined version of the game world based on the real thing. The model was also given the ability to predict whether the player dies in the next frame, in addition to predicting the frame itself. Together, these changes create the conditions for a virtual agent to play and train inside a dream state that probabilistically recreates a machine’s idea of Doom (technically, VizDoom, a version of Doom built for machine learning research).
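The feedback loop described above, where each sampled prediction becomes the input for the next one, can be sketched as a toy “dream rollout.” Everything here is a hypothetical stand-in: `predict_next` and `predict_done` mimic the shape of a learned next-state model and a death predictor, not the researchers’ actual networks.

```python
import numpy as np

rng = np.random.default_rng(0)
LATENT_DIM = 8

def predict_next(z, action):
    """Toy stand-in for the learned model: a Gaussian over the next latent state."""
    mean = 0.95 * z + 0.05 * action
    return mean, np.full(LATENT_DIM, 0.05)

def predict_done(z):
    """Toy stand-in for the model's 'does the player die next frame?' prediction."""
    return float(np.linalg.norm(z)) > 10.0

def dream_rollout(z0, policy, max_steps=100):
    """Roll out entirely inside the model: each sampled state feeds the next prediction.

    The real game is never consulted; the agent trains against
    the model's own hallucinated trajectory.
    """
    z, trajectory = z0, [z0]
    for _ in range(max_steps):
        action = policy(z)
        mean, std = predict_next(z, action)
        z = rng.normal(mean, std)  # sample a next state instead of observing one
        trajectory.append(z)
        if predict_done(z):        # the dream ends when the model predicts death
            break
    return trajectory

traj = dream_rollout(np.zeros(LATENT_DIM), policy=lambda z: 1.0, max_steps=20)
```

The key design point is that the sampling step replaces the game engine: because the model outputs a distribution rather than a single frame, each rollout is a different plausible dream rather than a fixed replay.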

According to the researchers, this abstract approach to training could make it possible for the powerful engines that render AAA video games to, for example, quickly calculate complicated physics in the background. Besides that, though, it’s just trippy to look at and think about, man.
