0:00

We're now going to look at a different category of games, called Bayesian games. These are sometimes called games of incomplete information, not to be confused with games of imperfect information. So far, what we have seen are games in which all agents know what the basic setting is. That is, they know who the players are, they know the actions available to the players, and they know the payoffs associated with each strategy profile or action profile, depending on what everybody does.

This is true in all games, including games of imperfect information, that is, games in which agents don't know exactly which state they are in; nonetheless, they know what would happen given the strategies of all the agents. So we're going to relax that. We're going to assume that this setting isn't necessarily common knowledge. Now, in principle you can imagine relaxing the various assumptions: you might not know the number of players, or perhaps how many actions are available to them. But in some informal sense, all of those forms of uncertainty can be reduced to one type of uncertainty, namely uncertainty about the payoffs in the game. So we will assume that agents have perfect common knowledge of everything except what the payoffs of the game are, and furthermore, that there is some prior belief about those payoffs that is common to all the agents; agents simply have different signals that lead to different posteriors based on that common prior.

This may sound very vague, so let me make it precise. Let me first give the formal definition, and then give an example which will make everything clear. A Bayesian game is defined, first of all, by a set of games that are identical except in their payoffs. So let's go over the formal definitions. We have a tuple that defines the game. We have a set of agents N, and we have G, a set of regular games; think of these as normal-form games, for example. Each game is played by the N agents, and they all have the same strategy space; that is, any two games in the set have the same strategy space, while, as I said, the payoffs will generally be different. We have a common prior, which is a distribution over that set of games; nature will decide which game is actually played based on this prior. And then there are private signals, defined by a partition structure: for each agent, we define an equivalence relation on the games, and the agent will be, in effect, told which equivalence class they are in. Based on that, they'll need to play the game.

Now, this is a mouthful, I know, but hopefully the following example will make it clear. Let's assume that we have four possible games, and the games are familiar: we have Matching Pennies, we have Prisoner's Dilemma, we have the game of pure Coordination, and we have Battle of the Sexes, each defined simply by its payoffs. Now, nature is going to decide which of those games is actually being played, based on the probabilities as listed here: we have a probability of 0.3 here, 0.1 here, 0.2 here, and 0.4 here. Once nature makes its choice, the agents will play. But the question is, what will they know? They will know the prior, but they will know something in addition, and what they know will be defined by the partitions. So here we have the two agents playing, and for each of these agents there is an equivalence relation defined.
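To make the tuple concrete, here is a rough sketch in Python. The encoding is my own; the payoff numbers are standard textbook values rather than necessarily the exact entries on the lecture slide, and the assignment of prior probabilities and partition classes to particular games is inferred from the 3:1 and 0.3:0.2 posterior ratios mentioned later in the lecture.

```python
# A sketch of a Bayesian game as a tuple (N, G, P, I).
# Payoff entries map an action profile (row action, column action)
# to a pair of payoffs (row payoff, column payoff).

N = ["row", "column"]  # the set of agents

# G: four games, identical except in their payoffs.
# Payoff numbers are standard textbook values (an assumption).
G = {
    "MatchingPennies":  {(0, 0): (1, -1), (0, 1): (-1, 1),
                         (1, 0): (-1, 1), (1, 1): (1, -1)},
    "PrisonersDilemma": {(0, 0): (-1, -1), (0, 1): (-4, 0),
                         (1, 0): (0, -4),  (1, 1): (-3, -3)},
    "Coordination":     {(0, 0): (1, 1), (0, 1): (0, 0),
                         (1, 0): (0, 0), (1, 1): (1, 1)},
    "BattleOfTheSexes": {(0, 0): (2, 1), (0, 1): (0, 0),
                         (1, 0): (0, 0), (1, 1): (1, 2)},
}

# P: the common prior over which game nature selects.
P = {"MatchingPennies": 0.3, "PrisonersDilemma": 0.1,
     "Coordination": 0.2, "BattleOfTheSexes": 0.4}

# I: one partition per agent; an agent learns only which
# equivalence class the chosen game lies in.
I = {
    "row":    [{"MatchingPennies", "PrisonersDilemma"},
               {"Coordination", "BattleOfTheSexes"}],
    "column": [{"MatchingPennies", "Coordination"},
               {"PrisonersDilemma", "BattleOfTheSexes"}],
}
```

Note that every game has the same action profiles, only the payoffs differ, and each agent's partition covers all of G, exactly as the definition requires.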

So, for example, think about the row player. For the row player there are two equivalence classes, denoted by the bold partition.

I'll mark it in green now: this is the equivalence relation defined for the row agent. So, for example, suppose that nature decided that, in fact, Matching Pennies is being played. The row agent will know that he's either in this game or in this game; he'll know that he's not in any of the other games. That will be his private signal, and he'll now have a posterior belief.

What will he believe? Well, he will believe that with probability 0.75 he is playing this game, and with probability 0.25 he is playing this game. Why is that? Because 3 to 1 is the ratio of the prior probabilities between these two games for him. What will the column player know? Well, the column player, let's pick a different color for her, has a different equivalence relation, this one. Now, if nature again chose Matching Pennies, what will she know? She'll know that she is either in this game or in this game, and in this case she will need to update her prior to reflect this information. The posterior for the column agent will be that she is playing this game with probability 0.6 and this one with probability 0.4, again maintaining the ratio between these two games.

And, intuitively, they'll know even more. When the row agent knows that she's somewhere in this class, she will not know exactly what information the column player has, but she knows what the possible information might be. She knows that either she is in this game, in which case this would be the information that the column player has, or she is in this game, in which case the column player knows that she's somewhere here. So it's a complicated story, because you can keep going: they have beliefs about what the other player believes about what they know, and so on and so forth. But this is the structure of Bayesian games, and based on this you can start modeling who will do what. Since this is complicated, though, there's an alternative perspective on Bayesian games that is different but, in some sense, easier to work with.
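The belief update walked through above is just Bayesian conditioning: restrict the common prior to the agent's equivalence class and renormalize. Here is a small sketch; the abbreviated game names and the class memberships are my inference from the 3:1 and 0.3:0.2 ratios the lecture mentions.

```python
# Posterior over games given the signal that the true game lies in a
# particular equivalence class: restrict the prior and renormalize.

prior = {"MP": 0.3, "PD": 0.1, "Coord": 0.2, "BoS": 0.4}

def posterior(prior, info_class):
    total = sum(prior[g] for g in info_class)
    return {g: prior[g] / total for g in info_class}

# Nature picks Matching Pennies. The row player learns only that the
# game is in {MP, PD}; the column player, that it is in {MP, Coord}.
row_belief = posterior(prior, {"MP", "PD"})     # ~ MP: 0.75, PD: 0.25
col_belief = posterior(prior, {"MP", "Coord"})  # ~ MP: 0.6,  Coord: 0.4

# Higher-order reasoning: for each game the row player considers
# possible, which signal would the column player have received?
col_partition = [{"MP", "Coord"}, {"PD", "BoS"}]
possible_col_signals = {
    g: next(c for c in col_partition if g in c) for g in {"MP", "PD"}
}
# The row player knows the column player saw either {MP, Coord}
# or {PD, BoS}, without knowing which.
```

The same conditioning step works for any prior and any partition, which is why the whole construction reduces incomplete information to a single common prior plus private signals.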
