So here is that key insight again, this remarkable insight: just making random moves on the board and then evaluating the ratio of wins from those random playouts, essentially two idiots playing each other, gives you an accurate evaluation of making that move. That's the deep insight behind the quite successful breakthrough Go program of 2006. And indeed, it underpins a lot of modern thinking in AI, whereas in older AI there was a strong emphasis on modeling how human experts do things. In the older days, in the Newell, Simon and Shaw days of Carnegie Mellon, the understanding was: we'll get a great physician, a great diagnostician, think of House on TV, and we'll capture his expertise by getting hold of the human and finding out what rules they use. Instead, the modern-day Googles and Amazons collect lots of information, big data, do lots of things at random, and then discover equally deep relationships through these many trials and this huge amount of computation.

So AI has had a huge shift. It's not that the old cognitive-modeling strategy has been thrown out the window; it's that in many instances it's too difficult, we don't yet understand how to model properly how thinking works, how the human perspective works. We can still get that result, or better, with computer models doing things humans wouldn't be able to do: processing vast amounts of information and even using random trials.

Now, in your programming, keep in mind that you're going to be dealing with big trees and big tables. Remember, you have an 11 by 11 board, and potentially an n by n board, since people frequently play the game with larger boards. So you should be able to play the game with boards up to the size of a Go board, even, and that may require big analysis in big trees and tables. If you're going to do this all in C++, make sure that you're not creating a lot of extra objects.
If you're doing everything within a graph-based data structure, your constructors can end up doing lots and lots of work, and you may have to specialize the graph structure to take advantage of the efficiencies available, because the hex graph is a very specialized graph. The same can be said of the Dijkstra algorithm: there's no reason to use the generic, arbitrary-cost Dijkstra algorithm, because the path we're looking for in the hex graph is a path from one side of the board to the other. We have to decide whether one player or the other has won, and that means finding a path from one side of the board to the other. On a size 11 board, that path will take at least ten edges to go from one side to the other, but it can actually require more edges, because it can snake around the board. It's an interesting exercise to work out the longest possible winning path in a game. The way you can find such a path efficiently is either to specialize Dijkstra or to use the classic algorithm union-find, and I encourage you to go look on the Internet for the description of the union-find algorithm.

So you want to do huge amounts of trials, and you don't want to get bogged down by doing this poorly. The other thing you're learning out of this is how to be very careful when you're dealing with large aggregates. Memory is not an infinite resource, and it's really easy to write a correct program that's inefficient. Then you'll just wonder: why is my program running so slowly? How can people get 1,000 trials per move? With 120 moves, each needing 1,000 trials, you're talking about 120,000 trials and evaluations. Do that poorly on your computer and it will be hours before you get an evaluation.