0:00

Okay folks, so we're into the last part of the course now, and we'll be talking about games on networks. In particular, we're still interested in understanding networks and behavior, now trying to bring strategic interaction into play, where people's decisions depend on what other people are doing. So the idea is that, essentially, there are decisions to be made, and it's not just a simple diffusion or contagion process. It's not updating beliefs. It's that people care about what other individuals are doing. So there are complementarities: I want to buy a certain program only if other people are using that same program. The way in which I write articles depends on what my co-authors are doing, or I want to learn a certain language only if other people are also speaking that language.

So there are going to be interdependencies between what individuals do. And there could also be situations where I can free ride. If somebody else buys a new book, I can borrow it from them, and maybe then I don't buy it myself. So who I know that has actually bought a book may affect whether I buy the book, both positively and negatively. So there are strategic interdependencies.

And, you know, when people think of games, we're not talking about Monopoly or chess, checkers, et cetera. We're thinking about a situation where there are interactions, and what a given individual is going to do depends on what other individuals are doing, so there is some game aspect to it in that sense. But we're using game theory as a tool to try and understand exactly how behavior relates to network structure.

Okay. So what we're going to do is work with some basic definitions. I won't presume that you're familiar with game theory beforehand. We'll work through the basic definitions, which will be pretty self-contained in terms of the network setting, then work through some examples, and afterwards we'll begin a more formal and more extensive analysis of how these things work.

Okay. So the idea here is that there are going to be different individuals, they're on a network, they're each making decisions, and you care about the actions of your neighbors. The early literature on this came out of computer science, and what it was really interested in was how complex the computation of equilibrium is in these settings in worst-case games: how hard would it be for a computer to actually find an equilibrium of one of these games in a case where nature was making it as hard as possible for you to find an equilibrium. What we're going to focus on is a second branch of this literature, which, instead of being interested in the worst-case computational issues, is interested in applying games on networks to actually understand how networks influence human behavior. And one thing that's nice is that a lot of the interactions we tend to have between individuals will have more structure. So the games will be nice ones; they won't be the worst-case games that are computationally complex. They're going to be ones where we can actually say something meaningful about the structure.

So we're going to start with a canonical special case. It's a very simple version of the game, but one that's going to be fairly widely applicable. We're looking at a situation where a person i is going to take an action; let's call that x_i. We'll start with the case where it's just a binary action, either zero or one. So I either buy this book or I don't; I invest in the new technology or I don't; I learn a language or I don't; I go to a movie or I don't. And the payoff is going to depend on how many neighbors choose each action: how many neighbors choose action zero, how many choose action one, and how many neighbors I have. So my payoff is going to depend on those things. Okay.

So we've got each person choosing an action, zero or one, and we're going to consider a situation where your payoff depends on your own action. So person i's payoff depends on their action. It also depends on the number of neighbors of i that choose one, so how many of my neighbors chose one. And it will depend on my degree, how many neighbors I have. If I have a hundred neighbors, it might be different than if I have three neighbors and two of them are choosing action one. Two out of three is different than two out of a hundred, so I might care differently depending on how many neighbors I have. Okay?

So what are the main simplifying assumptions in this setting? First, we've got just the zero-one actions, so we either take the action or we don't. Second, I only care about the number of friends taking the action, not their identities. So I don't have best friends and less-good friends; I treat friends equally in terms of who's taking the action. And third, my payoff depends only on my degree, how many friends I have; I don't have a different preference than somebody else. We can enrich these models later to allow people to have different preferences and to weight things differently. But for now let's think of a world where everybody treats their friends equally, and all that matters is how many friends they have, not who their friends are.

Okay. So let's look at an example of a simple game of complements: I'm willing to choose this new technology if and only if at least t neighbors do. Suppose I'm learning to play bridge, a card game, and I have to have at least three friends who play bridge before I'm going to learn it, right? So my payoff to playing action zero, if I don't learn it, is just zero. And one example of this would be that I get a payoff from playing action one which looks like minus this threshold plus how many friends play it. So if the threshold were three, I'd get minus three plus the number of my friends who play it. So, for instance, if exactly three of my friends play it, I get a payoff of zero; if four of my friends play it, I get a payoff of one; if five of my friends play it, I get a payoff of two, and so forth. So this would be a very simple example where, with a threshold of two, I'd be willing to choose action one if and only if at least two of my neighbors do. And you could write down all kinds of different payoff matrices; this is just one example.
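
The threshold payoff just described can be sketched in a few lines of code. This is a minimal illustration, not from the lecture slides: it assumes action 0 always pays zero, and action 1 pays the number of neighbors playing 1 minus the threshold t.

```python
# Payoff in the threshold game of complements (illustrative sketch).
# Action 0 pays 0; action 1 pays (neighbors playing 1) - t.

def payoff(action, neighbors_playing_one, t=3):
    """Payoff to a player given how many neighbors chose action 1."""
    if action == 0:
        return 0
    return neighbors_playing_one - t

# With t = 3: three friends playing gives payoff 0,
# four gives 1, five gives 2, matching the lecture's example.
```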

So let's look at a network now. We've got a situation with a bunch of different people, and a person is willing to take action one if and only if at least two (that's t here) neighbors do, okay? So this is a game where, once at least two of my friends have bought this new technology, I'm willing to do it; otherwise I don't. Okay. So what do we know first of all? Well, if we look at this network, all these blue people are going to take action zero because they only have one friend. Actually, sorry, this person has two friends, so they shouldn't be forced to zero. So these three individuals only have one friend, and they're definitely going to take action zero; there's no way they're going to have at least two neighbors do it. But we can ask: what about this player, right? Well, their action is going to depend on what their other friends do, okay? And one possibility is that we set, for instance, these three individuals all to playing action one. Right. So if these two individuals are doing it, then this person is willing to; they're all willing to, because now they each have at least two friends doing it. So one possibility would be to stay where we were before, where nobody takes the action because nobody else does, and so the technology never gets off the ground. So if it's a technology that needs people to want to communicate with other people, and to have other people do it before they do it, there's a possibility of it never getting seeded; it never gets off the ground. Another possibility is that, yes, these three people all adopt it because they each have two friends who do it. And so that's also an equilibrium, okay? Now if these are the only people adopting, then nobody else actually wants to do it, because all the other individuals still have at most one friend who did it, so nobody else is above their threshold. And indeed it's still an equilibrium for these three people to do it and nobody else, right? Nobody else wants to take the action, because none of the other people have two neighbors who do. Okay?
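
Both equilibria just described, nobody adopting and only a tightly connected trio adopting, can be checked mechanically. Here's a small sketch under assumed names and an illustrative graph (a triangle of three players, each with one extra degree-one neighbor), not the lecture's exact figure:

```python
# Check whether an action profile is an equilibrium of the threshold game:
# each player should play 1 exactly when at least t neighbors play 1.

def is_equilibrium(neighbors, profile, t=2):
    for i, friends in neighbors.items():
        ones = sum(profile[j] for j in friends)
        best = 1 if ones >= t else 0
        if profile[i] != best:
            return False
    return True

# Triangle of players 0, 1, 2; players 3, 4, 5 each have only one friend.
g = {0: [1, 2, 3], 1: [0, 2, 4], 2: [0, 1, 5],
     3: [0], 4: [1], 5: [2]}

all_zero = {i: 0 for i in g}                      # nobody adopts
triangle = {0: 1, 1: 1, 2: 1, 3: 0, 4: 0, 5: 0}  # only the triangle adopts
```

Both `all_zero` and `triangle` pass the check, while a profile where a single player adopts does not, mirroring the multiplicity of equilibria in the lecture's picture.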

Â 9:24

So that's one type of game. Let's take a look at a game with a sort of opposite feature. The first game had the feature that if more of my friends take the action, then I'm more likely to want to take the action; compatible technologies will have that kind of feature. But now think of the example where, if one of my friends buys the book, I don't buy the book, because now I can borrow it from them, okay? So I'm willing to buy the book if and only if none of my neighbors do. For instance, if I don't buy the book, what's my payoff? If one of my neighbors buys the book, so the number of neighbors who bought the book is positive, I can borrow it from them and get a payoff of one. If none of my neighbors bought the book, I can't borrow it, and I get a payoff of zero; I didn't buy it. Now, instead, I could buy it myself. And if I end up buying the book myself, I end up with a payoff of one minus c, where c is the cost of the book, right? So in terms of my payoffs here, my optimal outcome would be to have one of my friends buy it, while I don't buy it and borrow it from them. That gives me a payoff of one, my best possible payoff. My worst payoff is when nobody buys it and I don't buy it. So if none of my friends buy it, then I would actually be willing to buy it, as long as c is less than one. And the situation that wouldn't be an equilibrium is one where none of my friends buy it and I don't buy it. So if they don't buy it, I buy it; but I won't buy it if one of my friends does. Okay?

So this example is known as a best-shot public goods game: what matters to any individual is the max of the actions in their neighborhood. So an agent is willing to take action one if and only if no neighbors do. Here would be an equilibrium of that game: this person takes action one, and none of their neighbors do; this person takes action one because no neighbors do, and so forth, right? That's an equilibrium of this game, okay? It's a different game, and it's going to have a differently shaped equilibrium from what we had before. Here now we have these people taking action one. There are multiple equilibria to this game; different combinations of actions can be equilibria, and we'll take a look at that in more detail.
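
The best-shot best response (buy if and only if no neighbor buys, assuming 0 < c < 1) can be checked the same way as before. A minimal sketch on an illustrative four-player line network, not the lecture's figure:

```python
# Best-shot public goods game: buying pays 1 - c, free-riding on a
# neighbor's purchase pays 1, and nobody buying in the neighborhood pays 0.
# With 0 < c < 1, the best response is: buy iff no neighbor buys.

def is_best_shot_equilibrium(neighbors, profile):
    for i, friends in neighbors.items():
        neighbor_bought = any(profile[j] for j in friends)
        best = 0 if neighbor_bought else 1
        if profile[i] != best:
            return False
    return True

# A line of four players: 0 - 1 - 2 - 3.
line = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
```

On this line, both "players 0 and 2 buy" and "players 1 and 3 buy" pass the check, while "nobody buys" fails, illustrating the multiple equilibria just mentioned. Note each equilibrium here has the buyers forming a set where no two are neighbors and every non-buyer has a buying neighbor.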
