Motherboard spoke with poker pro Jason Les about how he's preparing for a high-stakes Texas Hold'em rematch against an artificial intelligence program developed at Carnegie Mellon.
Jason Les is a seasoned professional poker player, though even he would say there's not much he can do to prepare for his next match.
Usually, Les would scrutinize videos of an upcoming opponent to learn their playing style and analyze their previous poker hands. "You have the opportunity to get an understanding of your future opponent's strategy and develop a counter-strategy," Les told Motherboard.
But Les' next match, a heads-up no-limit Texas Hold'em match that will be played alongside three other professional poker players over the course of 20 days, isn't a typical one. Accompanied by poker stars Dong Kim, Jimmy Chou, and Daniel McAuley, Les will be pitted against an artificial intelligence program developed by researchers at Carnegie Mellon University. The grand prize for beating the AI? A share of $200,000 and the satisfaction of having bested a machine—for the second time.
The Brains Vs. Artificial Intelligence: Upping the Ante match, which begins today at Rivers Casino in Pittsburgh, is actually a rematch of an original 2015 AI experiment instigated by CMU. The ultimate goal, according to CMU computer scientists involved in the program, is the same as it was in the first match: setting a new benchmark for AI.
In the 2015 match, CMU's previous version of the AI program, called Claudico, played 80,000 hands against four pros, including Les and Kim, but collected fewer chips than three of them. Ultimately, the results of the match weren't decisive enough to determine whether CMU's AI was superior to a human poker pro.
But in true 'new year, new me' style, the AI is back with a name change and enhanced capabilities, ready to play 120,000 hands at Rivers Casino. Now called Libratus, the CMU AI has far more computing power than Claudico. "We're pushing on the supercomputer like crazy," Tuomas Sandholm, professor of computer science, told CMU, explaining how his team needed 15 million core hours of computation to build Libratus, compared to Claudico's three million.
For Les, the rematch is a daunting prospect. "Playing this new AI, there's nothing to observe and study to prepare. It's an unknown competitor that they have never put in the field. So it's very exciting to see what this new system will play like," he said.
Still, Les said he learned lessons from 2015's match, and will use them in the rematch. One of the main strategies Les said he has to utilize is shedding the instinct of anticipating how the opponent (the AI, in this case) will perceive his strategy, and focusing instead on maximum exploitation. "It sounds easy, but it's an instinct built on years of experience, and last time I really didn't let go of it as quickly as I should have," Les said. Libratus will be playing the match with a Nash-equilibrium strategy—one in which neither player can improve their outcome by unilaterally changing tactics, since each is assumed to know the other's strategy.
"When you're playing an AI who is attempting to play a Nash-equilibrium strategy, your goal has to be finding any weakness and exploiting that as hard as possible," said Les. "This week I think I have a much better focus on doing maximum exploitation on an opponent like this and making the most out of any weakness that we can (hopefully) find."
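The equilibrium idea Les describes can be seen in miniature in a toy zero-sum game. The sketch below—purely illustrative, and not how Libratus actually computes its strategy—uses fictitious play on matching pennies: each player repeatedly best-responds to the other's observed action frequencies, and those frequencies converge toward the Nash equilibrium, the 50/50 mix at which neither player gains by deviating.

```python
# Illustrative only: fictitious play converging to a Nash equilibrium
# in matching pennies, a zero-sum game. This is NOT CMU's method for
# poker, just a minimal demonstration of the equilibrium concept.

# Row player's payoffs; rows/columns are the actions Heads, Tails.
PAYOFF = [[1, -1],
          [-1, 1]]

def best_response(opponent_counts, payoff):
    """Pick the action with the highest expected payoff against the
    opponent's empirical action frequencies so far."""
    total = sum(opponent_counts)
    values = [
        sum(payoff[a][b] * opponent_counts[b] / total for b in range(2))
        for a in range(2)
    ]
    return max(range(2), key=lambda a: values[a])

def fictitious_play(rounds=10000):
    """Return the row player's empirical strategy after repeated play."""
    # Column player's payoff matrix is the negated transpose (zero-sum).
    col_payoff = [[-PAYOFF[r][c] for r in range(2)] for c in range(2)]
    row_counts, col_counts = [1, 1], [1, 1]  # seed with one play of each
    for _ in range(rounds):
        r = best_response(col_counts, PAYOFF)
        c = best_response(row_counts, col_payoff)
        row_counts[r] += 1
        col_counts[c] += 1
    total = sum(row_counts)
    return [n / total for n in row_counts]

# The frequencies approach the equilibrium mix of roughly [0.5, 0.5];
# at that point, no change of tactic improves either player's outcome.
print(fictitious_play())
```

Against an opponent locked to such an equilibrium, no counter-strategy profits—which is why, as Les says, a human's only hope is to find spots where the AI's approximation of the equilibrium is imperfect and exploit them hard.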
Poker isn't the only game AI researchers are trying to master, though. Almost two decades after IBM's Deep Blue trounced chess champion Garry Kasparov, Google's AlphaGo AI beat 18-time world Go champion Lee Sedol last March in a five-game match held in South Korea. Just this month, AlphaGo was secretly released as a player called Master on the Tygem and FoxGo online Go servers, causing a buzz when it won 50 games straight. The AlphaGo team later fessed up to training their AI against real players, and said that they're excited by what the Go community can learn from the successful moves played by the latest version of AlphaGo.
And there are even competitive AI turf wars within the world of poker itself. Rival academics at the University of Alberta this week published a paper on their program called DeepStack, which the researchers claim can beat professional poker players. "In a study involving dozens of participants and 44,000 hands of poker, DeepStack becomes the first computer program to beat professional poker players in heads-up no-limit Texas hold'em," reads their paper. Motherboard reached out to one of the paper's authors, Michael Bowling, to ask about the program but was told he's unable to comment until the paper, first published on the arXiv pre-print server, is peer reviewed. Still, in a previous paper, Bowling and his team found success in beating human players at limit hold'em poker, a variant of the no-limit game.
But whichever AI Les is playing against, he sees the task not as a challenge to the traditions of a human game, but rather as an opportunity to engage with a learning tool that can make human players even better.
"The AI's strategy is impossible to emulate as a human sitting down at a poker table, but I think it will open people's minds to a lot of different things they can try when they play," Les said. "This isn't necessarily great news for me as a professional, but it will be interesting and probably entertaining to see people try to emulate the AI's strategy if they find it appealing."