
DeepMind's 'Starcraft II' AI Is Now Better Than 99.98% of Human Players

AlphaStar achieved Grandmaster status the old-fashioned way: grinding through matches with human opponents to move up the leaderboard.

DeepMind has trained an AI agent that plays StarCraft II at the Grandmaster level—better than 99.98 percent of human players, the machine-learning-focused Google spinoff announced in a new paper.

AlphaStar, the AI in question, achieved its Grandmaster status the old-fashioned way: by grinding through dozens of matches against human opponents, according to the paper, published Wednesday in Nature.

This isn’t DeepMind’s first foray into designing AI to play StarCraft II. AlphaStar took on human opponents in a series of high-profile games in January. AlphaStar crushed its human competition, but the AI was still operating with some restrictions—it only knew how to play Protoss, one of StarCraft II’s three factions, and could only play against Protoss. But now those restrictions are gone. AlphaStar can handle Zerg, Protoss, and Terran.


AlphaStar worked its way up the European StarCraft II ladder on Battle.net, Blizzard’s online gaming network. In a July blog post, Blizzard informed players competing in Europe that they might encounter the bot, that it would remain anonymous during games, and that they could opt out of competing against AlphaStar. “The majority of players opted in,” researchers noted in their article.

The AI system was designed to match the physical limitations of a human opponent. AlphaStar viewed each match through what researchers called a “camera-like interface” similar to how humans see the game’s playing field, meaning that the AI didn’t have perfect, all-seeing knowledge of the game and had to “choose” where to focus its attention.
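To make the idea concrete, here is a minimal sketch of what a camera-style observation restriction can look like, using a toy grid-world map; the function, map sizes, and window size are illustrative assumptions, not DeepMind's actual interface, which exposes far richer game state.

```python
# Minimal sketch of a "camera-like" observation restriction on a toy map.
# The real AlphaStar interface is far richer (units, minimap, etc.).
import numpy as np

def camera_view(world: np.ndarray, center: tuple, size: int = 32) -> np.ndarray:
    """Return only the patch of the map the agent's camera is centered on."""
    half = size // 2
    y, x = center
    # Clamp the window to the map edges so the crop never leaves the world.
    top = max(0, min(y - half, world.shape[0] - size))
    left = max(0, min(x - half, world.shape[1] - size))
    return world[top:top + size, left:left + size]

# The agent sees a 32x32 window, not the full 128x128 map, and must "choose"
# where to point the camera to gather information.
full_map = np.random.rand(128, 128)
observation = camera_view(full_map, center=(40, 90))
print(observation.shape)  # (32, 32)
```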

Its actions per minute (APM) were also restricted, meaning it could only perform a limited number of actions in a given stretch of time. AlphaStar could complete at most 22 non-duplicate actions in a five-second window, putting it on par with human players. DeepMind also added a 110-millisecond delay between observing a frame and executing an action, which accounted for human reaction time.
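A rough sketch of that kind of rate limit, assuming a rolling five-second window and the numbers quoted above, might look like the following; the class and method names are invented for illustration and are not DeepMind's implementation.

```python
# Hedged sketch of an action-rate limit: at most 22 non-duplicate actions per
# rolling five-second window, plus a fixed observation-to-action delay.
from collections import deque

ACTION_CAP = 22          # non-duplicate actions allowed per window
WINDOW_SECONDS = 5.0     # length of the rolling window
REACTION_DELAY = 0.110   # seconds between observing a frame and acting

class ActionLimiter:
    def __init__(self):
        self.history = deque()  # (timestamp, action) pairs inside the window

    def allow(self, action: str, now: float) -> bool:
        # Drop entries that have fallen out of the rolling window.
        while self.history and now - self.history[0][0] > WINDOW_SECONDS:
            self.history.popleft()
        # Repeats of an action already in the window don't count against
        # the cap, mirroring the "non-duplicate" wording above.
        distinct = {a for _, a in self.history}
        if action in distinct or len(distinct) < ACTION_CAP:
            self.history.append((now, action))
            return True
        return False

limiter = ActionLimiter()
observed_at = 12.0
earliest_execution = observed_at + REACTION_DELAY  # enforce the 110 ms latency
print(limiter.allow("build_worker", now=earliest_execution))  # True
```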

Human pros who played against the AI approved of this approach. “While AlphaStar has excellent and precise control, it doesn’t feel superhuman—certainly not on a level that a human couldn’t theoretically achieve,” Dario ‘TLO’ Wünsch, a Team Liquid StarCraft II pro, said in a statement. “Overall, it feels very fair—like it is playing a ‘real’ game of StarCraft.”


DeepMind spokespeople weren’t immediately available to comment.

AlphaStar is based on a deep learning architecture that “learns” to complete tasks—in this case, play StarCraft II well—after hoovering up a ton of data.

DeepMind trained AlphaStar in phases, the researchers explain in the paper. It started by making the AI “watch” 971,000 StarCraft II replays. Each replay involved players with a high Matchmaking Rating (MMR)—the number Blizzard assigns to a player based on their skill level—so AlphaStar was watching replays from the top 22 percent of StarCraft II players. After AlphaStar watched those replays, DeepMind added 16,000 more replays from games with players at even higher MMRs.
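In sketch form, the replay-filtering step amounts to keeping only games played above a skill cutoff before feeding them to the imitation-learning stage. The dataclass, cutoff values, and two-stage split below are assumptions for the example, not DeepMind's actual pipeline.

```python
# Illustrative sketch: keep only replays from players above an MMR cutoff.
from dataclasses import dataclass

@dataclass
class Replay:
    player_mmr: int
    actions: list  # sequence of (observation, action) pairs to imitate

def select_training_replays(replays: list, min_mmr: int) -> list:
    """Keep only games played at or above the chosen skill threshold."""
    return [r for r in replays if r.player_mmr >= min_mmr]

# First pass: a large pool of strong players; second pass: a smaller,
# even higher-MMR set, mirroring the two-stage setup described above.
# The cutoff values here are placeholders, not the paper's exact thresholds.
pool = [Replay(player_mmr=3600, actions=[]), Replay(player_mmr=5200, actions=[])]
stage_one = select_training_replays(pool, min_mmr=3500)
stage_two = select_training_replays(pool, min_mmr=5000)
print(len(stage_one), len(stage_two))  # 2 1
```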

Then, DeepMind created a league full of AI challengers and pitted AlphaStar against itself so it could get even better at the game. To create a diverse opponent base, the team froze instances of AlphaStar at various points in its training and entered them into the game as new opponents.
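The frozen-checkpoint idea can be sketched as periodically snapshotting the learning agent and adding the copy to a pool of league opponents, so later training games are played against a diverse set of past selves. The class and method names below are invented for illustration and stand in for the much larger league system described in the paper.

```python
# Rough sketch of a self-play league built from frozen checkpoints.
import copy
import random

class Agent:
    def __init__(self, version: int = 0):
        self.version = version  # stands in for the network weights

    def play(self, opponent: "Agent") -> None:
        # Placeholder for a full StarCraft II match between two policies.
        pass

class League:
    def __init__(self, learner: Agent):
        self.learner = learner
        self.opponents = [copy.deepcopy(learner)]  # frozen snapshots

    def freeze_current(self) -> None:
        """Snapshot the learner and enter the copy into the league."""
        self.opponents.append(copy.deepcopy(self.learner))

    def training_match(self) -> None:
        """Play the live learner against a randomly chosen frozen opponent."""
        self.learner.play(random.choice(self.opponents))

league = League(Agent())
for step in range(1, 4):
    league.learner.version = step   # stands in for gradient updates
    league.training_match()
    league.freeze_current()         # grow the opponent pool over time
print(len(league.opponents))  # 4
```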

Throughout this process, DeepMind peeled off instances of AlphaStar and sent them to Battle.net for evaluation using anonymous accounts. The first AlphaStar entered Battle.net after watching replays, but before participating in the AI league, and played 30 games. A more developed AlphaStar version was pulled out of competition after 50 games because, the paper notes, its cover was blown.

The last iteration—AlphaStar Final—used several fake accounts to help keep the AIs anonymous. It was this AI that achieved Grandmaster level, the highest ranking available in StarCraft II.

According to StarCraft II pros, more exciting than the ranking are the ways AlphaStar Final innovated on existing tactics and strategy.

“It was also exciting to see the agent develop its own strategies differently from the human players—like the way AlphaStar builds more workers than its base can support early in the game in preparation for later expansion,” Team Liquid player Grzegorz “MaNa” Komincz said in a statement. “The caps on the actions it can take and the camera view restrictions now make for compelling games—even though, as a pro, I can still spot some of the system’s weaknesses.”

DeepMind’s researchers believe StarCraft II is a stepping stone to greater and more complex problems. AlphaStar’s ability to handle a game as complicated as StarCraft II is a sign that artificial intelligence may be able to manage complex problems, such as navigating America’s roads in a self-driving car.

“The game’s complexity is much greater than chess, because players control hundreds of units; more complex than Go, because there are 10^26 possible choices for every move; and players have less information about their opponents than in poker,” David Silver, principal research scientist at DeepMind, said in a statement.