So in game playing, since that first early conference in AI, people have been interested in how we can get computers to play games better. The idea is that the strategic interaction in a game is indicative of intelligence, so if we can study how to do that, maybe we'll have learned something about intelligence. Checkers players were among the first programs to come out, and since then a lot of attention has focused on chess. You think of it this way: someone who plays chess well is intelligent, so we want to mimic that.

For the longest time, for decades and decades, computer chess players were not on par with human grandmasters, or even masters. Your average child who had just picked up chess could be beaten by a computer, but anybody who had seriously studied the game could beat the computer. Then in the 1990s, IBM funded Deep Blue, a project that built a machine that beat Kasparov, the world champion. I don't think that was widely anticipated at the time. Maybe the people close to the project knew it was going to happen, but certainly Kasparov didn't anticipate it, and it was a bit of a surprise. Now such machines are fairly commonplace: machines that play at master level, and even grandmaster level. We don't hold competitions between chess-playing programs and human beings anymore because it's not an interesting competition. Instead, we have chess-playing programs that help human beings play better chess by letting them analyze moves and those sorts of things.

So how did it happen? It happened through some advances in algorithms, some advances in understanding how to represent chess and reason about it, and also through really fast computers that could consider lots of different possibilities very quickly and weed out the unpromising ones. The machine doesn't get tired, and it doesn't make mistakes because it's under pressure or didn't think long enough. Chess is still not a solved game, in the sense that we have not proven that, if both players play to the best of their abilities, the result will always be a win for white, always a win for black, or always a draw. Nevertheless, computer players regularly beat human players. Checkers, on the other hand, has since been solved: with the same sorts of techniques, we have computed the optimal strategy for both sides and what the result of the game would be if both sides played optimally.

Throughout this entire discussion, people said, well, that's fine, but those are actually simple games. A game like Go has much more complexity in its possible board positions; there are many more ways a Go board can vary than, for instance, a chessboard. So it's a much richer space in which to think about strategies, and through at least the 2000s or so people thought that Go being mastered by computers was still a long way off. Yet just recently, one of the best Go players was beaten by a successor to Google's AlphaGo program. I think this had been coming for a few years, but it was still a little bit of a surprise. How did this happen? Again, a lot more compute power was applied; Google as a company has huge banks of computers for all sorts of things, and some of them are reserved for projects like this. But there was also a better understanding of how, rather than solving the game in the mathematical sense of everyone playing optimally, you can use ideas from machine learning to approximate the various functions you need, and get approximations good enough that the resulting player is quite good.
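
To make the chess-era idea a bit more concrete, here is a minimal, illustrative sketch in Python of minimax search with alpha-beta pruning, the standard way a program "considers lots of possibilities and weeds out the unpromising ones." It is not the code of any particular engine; the tiny subtraction game used here is a stand-in I made up for illustration. A real chess program would plug in chess move generation and a hand-tuned evaluation function, and the Go-era systems discussed above essentially replace that hand-tuned evaluation with functions learned by machine learning.

```python
def alphabeta(state, depth, alpha, beta, maximizing, game):
    """Return the minimax value of `state`, pruning branches that cannot matter."""
    if depth == 0 or game.is_terminal(state):
        return game.evaluate(state, maximizing)
    if maximizing:
        value = float("-inf")
        for child in game.moves(state):
            value = max(value, alphabeta(child, depth - 1, alpha, beta, False, game))
            alpha = max(alpha, value)
            if alpha >= beta:      # the opponent would never allow this line
                break
        return value
    else:
        value = float("inf")
        for child in game.moves(state):
            value = min(value, alphabeta(child, depth - 1, alpha, beta, True, game))
            beta = min(beta, value)
            if beta <= alpha:
                break
        return value


class SubtractionGame:
    """Toy game: players alternately remove 1-3 stones; taking the last stone wins."""

    def moves(self, stones):
        return [stones - take for take in (1, 2, 3) if stones - take >= 0]

    def is_terminal(self, stones):
        return stones == 0

    def evaluate(self, stones, maximizing):
        # Values are from the maximizing player's perspective.
        if stones != 0:
            return 0  # non-terminal cutoff: no heuristic here, call it even
        # Whoever took the last stone (the previous player) has won.
        return -1 if maximizing else 1


if __name__ == "__main__":
    game = SubtractionGame()
    value = alphabeta(10, depth=10, alpha=float("-inf"), beta=float("inf"),
                      maximizing=True, game=game)
    print("Value of 10 stones for the player to move:", value)  # prints 1: a win
```

The pruning step is what lets a fast machine look many moves ahead: once one reply refutes a line, the search abandons it without exploring the rest, which is the "weeding out" referred to above.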