
    The Chess Engine that Died So AlphaGo Could Live

    Written by Rollin Bishop

    AlphaGo, a computer program designed by Google DeepMind, has just one more game to go against top-ranked Go player Lee Sedol in Seoul, South Korea, in a five-game match reminiscent of the 1997 showdown between Deep Blue and Garry Kasparov.

    AlphaGo’s success so far is unprecedented, no matter the outcome of the final match. The fact that we've moved from teaching computers chess to training them on how to play the more complex game of Go shows the advances computer scientists have made in programming intelligence. But what happens to the chess programs of yesteryear?

    The rise of AlphaGo contributed directly to the demise of at least one very interesting chess engine: Giraffe, which was developed by Matthew Lai as part of his advanced computing thesis at Imperial College London.

    Giraffe sat somewhere in the middle of the chess engine rankings, but outclassed many of its contemporaries. While most chess engines simply run through possible moves as quickly as possible, Giraffe plays by simulating intuition. That's a romantic way of saying it doesn’t just brute-force the calculations, but instead relies on patterns learned from chess games played by humans.

    But on January 21—six days before DeepMind revealed AlphaGo—Lai announced that he was discontinuing his creation. The reason? He’d been hired by Google DeepMind.


    What, exactly, is machine learning? At its most basic, the definition is fairly literal: It is when a machine learns how to do something by studying others doing it. Artificial intelligence, on the other hand, is “a synthetic thing that performs behavior that any reasonable person believes might have required intelligence,” in the words of Georgia Institute of Technology associate professor Mark Riedl.

    Machine learning can be considered a subfield of artificial intelligence, but it has come to encompass so much that it’s arguably a field of its own. Machine learning informs artificial intelligence, but it is not what most people think of when they think of AI.

    “Imagine that we wanted to create a machine capable of playing a video game,” said Drexel University assistant professor Santiago Ontanon. For example, the original Super Mario Bros. video game. “One way we could do this is by sitting down and writing a computer program that can play the game. This approach, however, might be very time consuming. The machine learning approach would consist of recording several hours of humans playing the game, and then showing this to the machine, who would then analyze this data, extract common patterns, and discover where they are applied in order to successfully play.”

    The big difference between the two approaches is that in the first, which does not use machine learning, the computer has to be explicitly programmed to play the game. In the second, which does use machine learning, the computer is shown examples of how to play and learns from them automatically.
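
    Ontanon's contrast can be sketched in a few lines of code. Everything below is invented for illustration: the toy game, its states, and the recorded demonstrations are hypothetical, not taken from Giraffe or any real system.

```python
from collections import Counter, defaultdict

# Approach 1: explicitly program the behavior by hand.
def scripted_policy(position):
    # A human wrote this rule directly: always move right toward the goal.
    return "right"

# Approach 2: learn the behavior from recorded human play.
def learn_policy(demonstrations):
    # Count which action the humans chose in each state,
    # then keep the most common one as the learned rule.
    counts = defaultdict(Counter)
    for state, action in demonstrations:
        counts[state][action] += 1
    return {state: c.most_common(1)[0][0] for state, c in counts.items()}

# Hypothetical recordings of (state, action) pairs from human play.
demos = [(0, "right"), (0, "right"), (1, "right"), (1, "left"), (1, "right")]
learned = learn_policy(demos)
```

The scripted version is fast to run but slow to write and brittle; the learned version gets its rules from data, which is the trade Ontanon describes.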

    Lai started by programming Giraffe with the rules of chess and a set of data from real chess matches. Next, Giraffe started playing itself, using the real chess matches as a starting point and recording the outcomes. At the end of any given match, Giraffe essentially gave the moves used by the winner a more favorable rating because those moves ultimately turned out to be good. The program then used those better moves in the next match—not unlike how a person learning chess might emulate their opponents after seeing strategies that worked. Gradually, Giraffe got better at chess.
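
    A drastically simplified sketch of that feedback loop might look like the following. The update rule, learning rate, and move names here are all invented for illustration; Giraffe's actual training, detailed in Lai's thesis, used a neural network and far more sophisticated methods.

```python
values = {}  # move -> learned rating

def update_from_game(winner_moves, loser_moves, lr=0.1):
    # Nudge each of the winner's moves toward a favorable rating (+1)
    # and each of the loser's moves toward an unfavorable one (-1).
    for m in winner_moves:
        v = values.get(m, 0.0)
        values[m] = v + lr * (1.0 - v)
    for m in loser_moves:
        v = values.get(m, 0.0)
        values[m] = v + lr * (-1.0 - v)

# Simulate many self-play games in which "e4" and "Nf3" keep appearing
# on the winning side and "a4" on the losing side.
for _ in range(50):
    update_from_game(winner_moves=["e4", "Nf3"], loser_moves=["a4"])
```

After enough games the ratings separate, and a player that prefers high-rated moves will, like Giraffe, gradually favor the strategies that won.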

    This differs from programs like Deep Blue. Deep Blue’s decisions are essentially pure calculation: searching through the possible moves, scoring each resulting position with an internal value based on piece worth as well as positional advantage, and then making the move that scores best. Simply put, Deep Blue evaluates the entire board at once. Highly advanced, and massively complicated, but still calculation.
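
    The calculation-first approach can be caricatured in a few lines: a hand-written evaluation function plus an exhaustive minimax search. The piece values, the board representation, and the tiny search below are illustrative only; Deep Blue's real evaluation weighed thousands of features and searched enormously deeper.

```python
# Classic hand-tuned material values, in pawns.
PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9}

def evaluate(board):
    # Score from White's point of view: uppercase pieces are White's,
    # lowercase are Black's. Positive means White is ahead on material.
    score = 0
    for piece in board:
        value = PIECE_VALUES.get(piece.upper(), 0)
        score += value if piece.isupper() else -value
    return score

def minimax(board, depth, white_to_move, moves_fn, apply_fn):
    # Exhaustively try every legal move, alternating between the best
    # outcome for White (max) and the best outcome for Black (min).
    moves = moves_fn(board, white_to_move)
    if depth == 0 or not moves:
        return evaluate(board)
    results = [minimax(apply_fn(board, m), depth - 1, not white_to_move,
                       moves_fn, apply_fn) for m in moves]
    return max(results) if white_to_move else min(results)

score = evaluate(["Q", "R", "p"])  # 9 + 5 - 1 = 13 from White's view
```

Everything interesting lives in the hand-written `evaluate`; the search just applies it everywhere, which is why the article calls this pure calculation rather than learning.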


    Late last year, Giraffe was making waves among the tech crowd thanks in no small part to a flattering article in the MIT Technology Review titled "Deep Learning Machine Teaches Itself Chess in 72 Hours, Plays at International Master Level."

    The story, which described Lai's accomplishment as "a world first," explained the layers of Giraffe's neural network: "The first looks at the global state of the game, such as the number and type of pieces on each side, which side is to move, castling rights and so on. The second looks at piece-centric features such as the location of each piece on each side, while the final aspect is to map the squares that each piece attacks and defends."
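
    Those three feature groups can be pictured as one flat input vector fed to the network. The encoding below is a made-up placeholder to show the idea, not Giraffe's actual representation.

```python
def encode_position(side_to_move, piece_counts, piece_squares, attack_map):
    # Hypothetical encoding: flatten the three feature groups into one vector.
    features = []
    # Group 1: global state (side to move, material counts, and so on).
    features.append(1.0 if side_to_move == "white" else 0.0)
    features.extend(float(piece_counts[p]) for p in "PNBRQK")
    # Group 2: piece-centric features (where each piece stands, squares 0-63).
    features.extend(sq / 63.0 for sq in piece_squares)
    # Group 3: square-centric features (which squares are attacked/defended).
    features.extend(1.0 if attacked else 0.0 for attacked in attack_map)
    return features

vec = encode_position("white",
                      {"P": 8, "N": 2, "B": 2, "R": 2, "Q": 1, "K": 1},
                      piece_squares=[0, 63],
                      attack_map=[True, False])
```

A network trained on vectors like this never sees the raw board, only these numbers, which is why the choice of feature groups mattered so much to Giraffe's design.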

    With only its initial training, Giraffe could equal or beat conventional chess engines. Its skill put it within the top 2.2 percent of tournament chess players, even before it started learning from experience.

    "The cool thing is that Giraffe can actually beat many engines searching five to ten times as fast as it does," Lai told me when I spoke with him for Popular Mechanics in September of last year. "That means it will sometimes lose games in complex tactical positions, but it plays more like humans in terms of positional understanding."

    At the time, Lai was excited to try to get Giraffe up to a grandmaster level. Not an impossible task, but a difficult one. There’d been other attempts at chess engines built on machine learning, but, according to Lai, none were as successful as Giraffe. He pointed to KnightCap and NeuroChess—the former by Andrew Tridgell, Jonathan Baxter, and Lex Weaver and the latter by Sebastian Thrun—which are grounded in machine learning but lack the computing power and sophistication of Giraffe.

    That's why his announcement on January 21 was something of a shock. “It is with great sadness that I am announcing the discontinuation of the Giraffe project,” his forum post begins. “I am not sure why I am making a post about it instead of just having it silently fade away... I suppose it is to give myself some closure.”

    It wasn't because he'd lost interest in the project, and not entirely because he wasn't sure where to go, he explained. It was because he'd simply learned too many trade secrets from working at DeepMind, and any side work he did on Giraffe would inevitably violate the terms of his contract with the company formerly known as Google.

    “I can only legally use what's public knowledge in Giraffe,” he said in a forum post, “but the real world is not so clear-cut.”

    Figuring out what is and is not public knowledge presents a legal nightmare for anyone working on passion projects related to machine learning, he explained. If he were to continue work on Giraffe, he wasn’t sure he could do so without getting into trouble. In short, he could no longer cleanly separate what he knew from public sources from what he’d learned inside DeepMind.


    Lai’s situation isn’t exactly new. Academic researchers, and recent graduates with advanced degrees like Lai, have long been sought out by large businesses looking to put their expertise toward corporate goals. DeepMind itself was acquired by Google back in 2014. Sebastian Thrun, the man behind NeuroChess, helped develop Google Street View and worked on driverless cars at the secretive Google X lab after developing an award-winning driverless car design at Stanford. He’s since left the company, but spent a significant chunk of time furthering Google’s interests.

    That’s just one example, but the pattern holds across a variety of technical fields. Moving from academia to private employment can be more lucrative, but it also complicates things. Intellectual property can become a legal minefield with murky boundaries, and contracts sometimes stipulate that any advancement in the field ultimately belongs to the company. For Lai, it meant abandoning an impressive chess engine he’d been working on prior to his employment—one which likely weighed in his favor when he was hired to begin with.

    Giraffe is open source, so anyone with the desire and ability is free to tinker with it. Lai said if he ever leaves DeepMind, he might well return to Giraffe, but that's not likely to happen soon. He seems happy where he is, though he also appears to believe he left his previous work unfinished.

    “I hope someone will at least continue exploring using [machine learning] in chess,” he wrote, even if that means in some other direction from Giraffe. “There is still much potential in [machine learning and] chess that is begging to be discovered.” When contacted about his decision to discontinue Giraffe, he reiterated much of the forum post while also explaining that he was happy to pursue other interests instead.

    It’s hard to fault Lai here for his decision to put it all behind him. There’s no guarantee that an updated version of Giraffe—with Lai relegated to using only his own resources—would have given the better engines that rely on brute-force calculation a run for their money, and now he’s off doing work with Google DeepMind on projects that will almost certainly shape how we talk and think about machine learning, artificial intelligence, and more going forward. That certainly sounds like a win.

    But it’s also hard not to wonder: what if Lai had continued tinkering instead? Or, more radically, what if Alphabet had simply made it easier for people to work with its advances in machine learning without fear of legal reprisal? Considering that, maybe the situation is more of a stalemate.