In this video, I'm going to tell you how to formally define the imperfect-information extensive form and how to reason about strategies in this case.

So, in the perfect-information extensive form, we have a player acting at every single choice node in the game.

And a consequence of that definition is that players know what node they're in all the time. And that means they know the whole

history of all of the moves that have happened before in the game.

This is reasonable for some games, like chess, where you really do get to see

what your opponent does in every different move.

It's not reasonable for games like Battleship, where your opponent can do something, and you don't get to see what it is.

And it matters to what happens to you later in the game.

It can matter to your payoff. It can matter to whether the game stops

or continues. So, in order to model this richer

situation where players aren't able to observe everything that their opponents

do, we're going to add something new to the game representation. We're going to call it the imperfect-information extensive form.

And the way this is going to work is we're going to take the old definition

that we had before, but we're going to say that players consider some choice

nodes to be equivalent to each other. So there's some choice nodes that a

player isn't able to tell apart. And that's going to mean they're not able

to completely figure out the history of where they are in the tree because

they're not going to know which of several choice nodes they're at when they

have to take a choice. And we're going to do this by taking the

set of choice nodes for a given player, and putting them into equivalence

classes. So, what that means is, kind of

schematically, if these are some different choice nodes in the game, that

all belong to the same player. We might say these are one equivalence

class, these are another equivalence class and these are a third equivalence

class. And what that would mean is, the player wouldn't know which of these two choice nodes he was at when he was asked to make a choice. But he would know that he was at one of

these two and not one of these two because they're in different equivalence

classes. So, let's say that a little bit more

formally. So, to formally define an imperfect

information extensive form game. We start with a perfect-information

extensive-form game which we've already learned how to define before.

And then we're going to add this ingredient of equivalence classes.

So we're going to add this element I, which is a set of sets of equivalence classes, one set of classes for every player.

So, for player 1 we have this set of equivalence classes; or, let's say, for player i, we have a set of equivalence classes numbered from 1 to K sub i.

So, these are all different equivalence classes.

And each one of these classes contains one or more choice nodes. And these are going to be different

choice nodes that that player isn't able to tell apart.

So, if every one of these equivalence classes contains only one choice node, we are back in the perfect-information case. And if any of these equivalence

classes contains more than one thing, then we have something new.

We have a game where the player doesn't quite know what's going on all the time.
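As a rough sketch, this equivalence-class structure can be encoded directly. Everything below (the node labels, the particular classes) is illustrative, not the lecture's notation:

```python
# Equivalence classes for each player, encoded as lists of choice-node
# labels. The node names "h1".."h4" are made up for illustration.
info_sets = {
    1: [["h1", "h2"], ["h3"]],   # player 1: h1 and h2 are indistinguishable
    2: [["h4"]],                 # player 2: a single one-node class
}

def is_perfect_information(info_sets):
    """A game collapses to the perfect-information case exactly when
    every equivalence class contains a single choice node."""
    return all(len(eq_class) == 1
               for classes in info_sets.values()
               for eq_class in classes)

print(is_perfect_information(info_sets))  # False: player 1's first class has 2 nodes
```

With this encoding, checking whether a game is secretly a perfect-information game is just a test that every class is a singleton.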

Now, in order to make this definition work, we need to add a couple of

restrictions so that it makes sense. So we want to say that if two different choice nodes are part of the same equivalence class, then first of all, they have to belong to the same player.

They have to be labelled with the same player, because if they're not, you'd be able to tell them apart: a different player would be acting. When a player showed up at a node, he would know which player he was, so he wouldn't really be confused about all of the nodes in the equivalence class. And the second restriction we have is

that the two choice nodes have to have the same set of available actions,

because if the player can't tell them apart, he has to know what to do.

And those are the only restrictions. So let's look at an example game here,

and see what the equivalence classes are. So in this game, player 1 has 2 different

equivalence classes. This is 1 equivalence class, and this is

another equivalence class. So, in other words we're going to use a

dotted line here to connect together choice nodes that belong to the same

equivalence class. And what we want to say in this game is

that player 1 moves and if he goes right, then the game is just going to end and

they're just going to get each a payoff of 1.

If he goes left, then player 2 is going to get to make a choice, and player 2 is going to go either a or b.

And then, player 1 is going to get to move a second time.

But player 1 isn't going to get to observe the move that player 2 took.

And so, he's going to have to take the same action regardless of whether he's at this choice node or this choice node.

And indeed, you can see, we've labeled it the same way.

So if he says he wants to go left, he would have to go left from both of these nodes. And just to complete the answer to my question, what are the equivalence classes for player 2? Well, player 2 has only one choice node, and so he has only one equivalence class. So how should we define the pure

strategies for each player in this game? What's going to make sense as the

definition of pure strategies? Well, intuitively, remember that before, what we said

was that we had the cross product of all of the different action sets for each

player. We don't want that here because we don't

want it to be possible for player 1 to do something different at this choice node and then at this choice node. So instead, what we're going to use as a

definition is that the pure strategies for player i are the cross product of the

action sets in every different equivalence class that he has.

And so, player 1's pure strategies here are going to be a choice here, and

independently a choice here. So player 1 could take the action (L, l), or (R, l), or (L, r), or (R, r).

So player 1 here has 4 different pure strategies, rather than 8 as he would

have if we didn't have this equivalence class here.

So, in the imperfect-information extensive form, we have a more powerful

representation than we did in the perfect information case.

And, one way we can see that is we can represent any normal form game in this

representation, which you may recall we couldn't do with perfect information

games. So, here I'm showing you how to represent

the TCP backoff game, or in other words the prisoner's dilemma game, in imperfect-information extensive form. So how does this work? Well, first of all,

player 1 gets to decide whether to cooperate or to defect.

And after that, player 2 gets to decide whether to cooperate or defect.

Now, of course, in the prisoner's dilemma, you don't get to see what the

other person, chose to do when you take your own action.

So it might sound like a problem that player 2 gets to move second.

But, player 2 isn't able to tell which action player 1 took.

And so, although our game representation says that he goes second, it doesn't really make a difference that he goes second, because he is not informed of what player 1 did.

And then once both of them take their actions, we end up with some payoffs.

So if they go C, D then they end up with this payoff here.

And these are the same payoffs that we have in the game matrix.

Notice that we could have things work the same way if we put player 2 at the root node and player 1 down here below, because time isn't really playing a role in this game. So, what I told you on the previous

slide was how to start with a normal form game and make an extensive game out of

it. I can also still do the thing that we

talked about with the perfect-information extensive-form which is to start with an

extensive form game and make a normal form game out of it.

And, the way this works is exactly the way it did before.

So, I take all of the pure strategies for player 1 and I make them into rows. I take all of the pure strategies for player 2 and I make them into columns,

and then, this gives me my matrix. And for every cell of the matrix, I say

well, if player 1 took this pure strategy.

And player 2 took this pure strategy, what payoff would result? And I put that

into the cell of the matrix. And I fill in the whole matrix that way.
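As a sketch of that construction for the prisoner's dilemma example: rows are player 1's pure strategies, columns are player 2's, and each cell is the payoff the tree yields. The particular payoff numbers below are my assumption of the usual TCP backoff values; substitute the ones from the matrix in the lecture:

```python
from itertools import product

p1_strategies = ["C", "D"]
p2_strategies = ["C", "D"]

def play_out(s1, s2):
    """Follow a pure-strategy pair through the game tree to a terminal
    payoff (here just a lookup, since each player moves only once)."""
    terminal = {("C", "C"): (-1, -1), ("C", "D"): (-4, 0),
                ("D", "C"): (0, -4), ("D", "D"): (-3, -3)}
    return terminal[(s1, s2)]

# Fill in every cell of the induced normal form.
induced_matrix = {(s1, s2): play_out(s1, s2)
                  for s1, s2 in product(p1_strategies, p2_strategies)}
print(induced_matrix[("C", "D")])  # (-4, 0)
```

In a deeper tree, `play_out` would walk the tree node by node, consulting each player's strategy at the equivalence class containing the current node; the rest of the construction is unchanged.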

And once I have the matrix like that, then I'm kind of done. My definition of mixed strategies is exactly what it was before.

It's just the mixed strategies in the induced normal form.

The definitions of best response and Nash equilibrium for imperfect-information extensive-form games again just leverage the induced normal form.

And so all of those concepts that you already understand from normal-form games carry over directly to imperfect-information games.

And so for example we know from Nash's theorem that a Nash equilibrium always

exists for every imperfect information extensive form game because I can make a

finite normal form game out of it. Now, it's going to be the case that this transformation can make the game exponentially bigger, just as it could in the perfect-information case. But, for example, for existence of

equilibrium that doesn't matter. Now lastly, you might wonder what

happens, if I take both of these transformations that I've shown you and I

apply them together. So for example, I could start with an

imperfect-information extensive-form game, make it into a normal- form game,

and then make that back into an imperfect-information extensive-form game

again. So, you might wonder, do I end up with

the same game? And the answer is, no, I won't.

Because I might have a game tree which is pretty deep.

It could have all kinds of different equivalence classes, and all kinds of different sequential moves by the different players. And when I make it into a normal-form game, I'm going to get this flat table.

And then, when I take that flat table and turn it into an extensive-form game, it's going to be an extensive-form game that only has 2 levels, with all of this stuff in 1 big equivalence class.

And so, it's going to be an extensive form game that looks different from the

one that I started with. But, what is important is that it's

going to have the same strategy spaces, the same sets of pure strategies for both

agents. And it's going to have the same set of

Nash equilibria. So, although these games might look

different from the perspective of how they explicitly talk about time, they're

going to be strategically equivalent games.

And that's it for this video.