This course is for experienced C programmers who want to program in C++. The examples and exercises require a basic understanding of algorithms and object-oriented software.


From the course by University of California, Santa Cruz

C++ For C Programmers, Part B



From the lesson

Hex and the use of AI and C++ Move semantics

This module explains Min-Max and the Alpha-Beta algorithm for game playing. Its programming topics include C++ 11 Move semantics and a detailed example of referential garbage collection.

- Ira Pohl, Professor of Computer Science

So we're going to delve into how to build an AI into these game-playing programs, such as Hex. An automated opponent that can play well is critical to making these games interesting. And a huge part of that, in very complicated games that can't be solved outright, is to have some form of lookahead.

The basic lookahead algorithm we're going to investigate is called the minimax algorithm. This is the standard model for many strategy games. If you're playing chess, checkers, or backgammon, there are two players: player A and player B. Player A makes a move, then player B responds, and play continues until the game is over.

Now, when you have an artificial intelligence, or for that matter a human intelligence, how do you decide what move A should make? A has a choice among however many legal moves are available on the board, and that number can vary quite a bit: from American checkers, which generally has 20 or fewer moves, to chess, where an average middlegame position has about 40, to Go, where a middlegame position has well over 200 moves.

Not every one of those moves can be investigated or evaluated readily. So the AI needs some means of suggesting what's called a plausible move, thinking about how the opponent is going to respond, and then evaluating the result. That's what I'd call the standard view of playing a strategy game.

All of that involves lookahead. The lookahead is over a game tree, and the tree is treated with the minimax algorithm, where the endpoints of the tree are evaluated in one of two ways: either with the game's known result, if you can search to the end of the game, as you readily can in simple tic-tac-toe; or, in games like chess or checkers, by reaching a position about which you can apply very good judgment. For such a position you compute what's called a static evaluation. Those evaluations are then backed up the tree to select your best move.

So here's such a tree, the classic minimax tree. These are notions that go back to the middle of the 20th century, to people like von Neumann. You have a maximizer, assumed to be a rational player, who is trying to make the move that gets the maximum payoff. The opponent is a minimizer. Both players are trying to win: the maximizer wants to make a move that wins, and the opponent wants to defeat that move and make the maximizer lose. So there's this asymmetry: the maximizer is going for a larger score, and the minimizer is going for a smaller score.

So here's a tree in which the maximizer has three branches, and those branches in turn have further moves. On the bottom we have what I'm calling the leaf nodes, each with a value. On the first branch the leaves are a three and a ten; the minimizer, looking at a three or a ten, is clearly going to pick the three. Over here the minimizer has no choice, so it's going to be a four. And over here there's a five, a six, or a seven, so the minimizer will pick the five. The maximizer, looking at those three backed-up values, is going to go the third way. So if the analysis of this game is correct, we would expect the maximizer to make this third move, the minimizer to respond with its first move, and that is how the game would proceed. And we would invoke this algorithm anew every time the machine has to make a move.

In every position where the machine is called on to make a move, there's going to be some set of legal, plausible moves. If the game is small enough, you can evaluate all of them; otherwise there may be too many, depending on the complexity of the game. So that's your basic minimax.

A little terminology. Again, this applies to all sorts of intelligent games, including role-playing games, in which the machine somehow selects moves from some combination of possibilities.

First we want to know how deep we go into the tree; that depth is called the ply. The deeper we can go, the closer we get to an endpoint where we can make an accurate evaluation. If we could go all the way to the end, we could find out who wins, loses, or draws; that would be optimal. So if the game were finite and small enough, we could just run that tree out to all the bottom nodes, the nodes where the rules of the game determine a final result.

As I say, you could do that for tic-tac-toe, and then use minimax to back the values up and select a move. In that case we would get a perfect game player. And you can do this, as I say, for several games, even some with real complexity, like Othello, also called Reversi. For 30 years now, computers have been powerful enough to exhaust that space, in combination with some mathematical ideas about which positions really need to be investigated.

So there is the ply, the depth of the tree. I'll give you an example. If you watch how two world champions play, a typical chess game runs 40 or 50 moves between such grandmasters. A move in chess is actually two ply, because it's really what White plays and then how Black responds. So an average chess game between two grandmasters generally runs about 100 ply. Now think about that: if you're running a game that's 100 ply and you typically have 40 legal moves per position, you have 40 to the 100th nodes to examine at the bottom of that tree. That's far too much to calculate. So there's a game where a naive approach would not allow us to exhaust the possibilities.

In other games there might be a much smaller branching factor, and certainly there is in small games, like a limited-size tic-tac-toe game. But whatever game you have, you typically have some kind of exponential rate of growth in that tree. And you have to practically constrain how much search you're going to do.

Okay, so let's get back to minimax. We've seen how minimax works; take a second for this quiz. Here's a minimax tree, with the terminal nodes at the bottom, where we've got the static evaluations 2, 5, 6, 1, 3, 9. The levels alternate: maximizer, minimizer, maximizer. How do you go about evaluating that? And here's the answer.

Again, the maximizer has a choice between two and five, so it's going to take the five. The maximizer has a choice between six and one, so it's going to take the six. The maximizer has no choice in the other two cases. The minimizer then has a choice between five and one, so it's going to take the one. The minimizer has no choice here, so it's got to be the six, and here the minimizer has three possibilities and will take the three. And of course the maximizer comes over here and takes the six. So the value for the maximizer is fairly high, along a fruitful path for the player going first; the expected value, in some sense, of that move is plus six.

Notice that in this tree not everything ends at the bottom level; there are leaf nodes in different places. We'll see, for example, that in chess there are places where you can apply static evaluation once you have finished a set of exchanges. Let's say you exchange two pieces. At that point the position becomes what's called quiescent, or quiet. A quiet position might occur here, but a position that's more dynamic, with a lot of captures or checks, might be searched deeper into the tree. So that's why not every leaf node is on the same level.

So you should be able to implement this, potentially, as part of writing the AI for your Hex game.
