Researchers Are Building AI to Replace Video Game Testers

It’s all about the right kind of randomness.

Computers have been getting really good at playing and even building video games, from simple platformers to more complex titles like Mortal Kombat. Now, researchers are looking at how machines could take care of one of the most excruciating challenges in modern game development, for humans at least: bug testing.

Bug testing, as Motherboard showed in its look at the development behind Gears of War 4 last year, is extremely tedious. Teams of human testers have to go through a game multiple times and try absolutely everything. That includes weird stuff like firing a pistol off into the distance, just to see what happens. Something random like that might cause the landscape to glitch out, and that needs to be fixed. Bugs can be so numerous and so tough to repair that some never get fixed at all. It's all about time, and time is money.

That's why Julian Togelius, a New York University computer science professor known for using Super Mario Bros. to train game-playing AI, co-authored a paper, posted to the arXiv preprint server, on using AI to play-test games. The focus was on what he calls "fine-tuning": getting AI agents at different skill levels to try out all of a game's various possibilities in order to test its limits.

"Doing random things is the easiest thing in the world for a computer," Togelius wrote me in an interview conducted over email. "It's more important to do the right random things."

Read More: Why Artificial Intelligence Researchers Love 'Super Mario Bros.'

Play-testing with an algorithm needs to strike a balance between trying out every possible action in a decision space (the randomness Togelius wrote about above) and doing the things that an actual player conceivably might. The play-testers of today are human, after all, and these games are supposed to be resilient against human players.
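
To make that balance concrete, here is a rough sketch, not code from Togelius's paper, of what an automated play-testing loop could look like: the agent mostly takes the action a model of human behavior would take, occasionally takes a purely random legal action to probe edge cases, and logs anything the engine flags as anomalous. The environment API, the human_policy model, and the anomaly flag are all hypothetical placeholders.

```python
import random

def playtest(env, human_policy, steps=10_000, explore_prob=0.15, seed=0):
    """Hypothetical play-testing loop: mostly human-like actions, with
    occasional pure randomness to probe the corners of the game."""
    rng = random.Random(seed)
    bug_reports = []
    state = env.reset()
    for t in range(steps):
        if rng.random() < explore_prob:
            # "The right random things": a uniformly random legal action
            action = rng.choice(env.legal_actions(state))
        else:
            # The action a human would plausibly take, e.g. from a model
            # trained on recorded playthroughs
            action = human_policy(state)
        state, info = env.step(action)
        if info.get("anomaly"):  # engine-side glitch or assertion flag
            bug_reports.append((t, action, info["anomaly"]))
        if info.get("done"):
            state = env.reset()
    return bug_reports
```

The explore_prob knob is what separates a cautious, human-like tester from one that flails at everything; sweeping it is one crude way to vary skill level.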

"Humans often play very [differently] from algorithms," Togelius explained. "For example humans have much slower response times than agents, [and] often take uncalled-for or gratuitous actions and so on—but humans also often have [a] much longer planning horizon than algorithms. One way of learning to play games in a human-like fashion is to train them (partly) on data from human playthroughs."

Togelius and his colleagues have a lot of experience in this regard. In 2013, he co-authored a paper that trained an algorithm on player data so that it could "learn" how to play Super Mario Bros. like an actual person would.
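
One common way to "learn" from player data like that is plain supervised learning on human demonstrations, often called behavioral cloning: record what real players did in each game state, then fit a model that predicts the human action. Here is a rough sketch of the idea, not the method from the 2013 paper, assuming logged state-action pairs from human playthroughs and using an off-the-shelf scikit-learn classifier as a stand-in; the file names and feature choices are hypothetical.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical logged data: each row describes a game state (enemy positions,
# gaps ahead, the player's velocity, ...) alongside the action the human
# player actually took in that state.
states = np.load("human_states.npy")    # shape: (n_samples, n_features)
actions = np.load("human_actions.npy")  # shape: (n_samples,), e.g. 0=run, 1=jump

# Fit a classifier that imitates the recorded players
policy = RandomForestClassifier(n_estimators=100, random_state=0)
policy.fit(states, actions)

def human_like_action(state_features):
    """Return the action the model predicts a human would take here."""
    return int(policy.predict(state_features.reshape(1, -1))[0])
```

A policy like human_like_action could then be dropped into the play-testing loop sketched above as the "human-like" half of the mix.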

In his most recent paper on using algorithms for game-testing, the example he used was a relatively simple arcade-style game. Could algorithms one day be used to test extremely complex games like, say, modern shooters or open-world role-playing games? "Definitely," Togelius answered.

It'll take a lot more work, but algorithms have already proved their mettle against humans in games like Unreal Tournament. It might not be such a leap to have a team of algorithms play-testing the next Doom title before release.

"One outstanding research issue is how to make sure [the algorithms] are sufficiently diverse in their play-style, so you can find all issues in a game, not just a few," Togelius wrote.

While still an emerging field, AI that can take over parts of video game development from humans is already poised to do everything from populating game-worlds with objects and textures to designing levels. Now, it might be time to pray for the game testers, who could see their jobs automated too.
