0:00

Okay, folks. So, we're into the last part of the course now,

and we'll be talking about games on networks.

In particular, we're still interested in understanding networks and behavior,

and now trying to bring strategic interaction into

play where people's decisions depend on what other people are doing.

So the idea is that,

essentially, there's decisions to be made,

and it's not just a simple diffusion or contagion process,

it's not updating beliefs,

it's that people care about

what other individuals are doing, so there's complementarities.

I only want to buy a certain program if other people are using that same program.

So the way in which I write articles depends on what my co-authors are doing.

Or I want to learn a certain language only

if other people are also speaking that language.

So there's going to be inter-dependencies between what individuals do.

There could also be situations where I can free ride.

So, if somebody else buys a new book,

I can borrow it from them,

and maybe then I don't buy it myself.

So, who I know that has actually bought a book may affect whether I buy the book,

both positively and negatively.

So, there's strategic inter-dependencies,

and the idea of games:

when people think of games,

we're not talking about Monopoly or chess,

checkers, et cetera; we're thinking about a situation where

there are interactions and what a given individual

is going to do depends on what other individuals are doing.

So there is some game aspect to it in that sense.

But we are using game theory as a tool to try and

understand exactly how behavior relates to network structure.

So, what we're going to do is work with some basic definitions,

and I won't presume that you're so familiar with game theory beforehand,

and we'll work through the basic definitions,

which will be pretty self-contained and cast in terms of the network setting.

Then, work through some examples, and then afterwards,

we'll begin to do a more formal analysis

and a more extensive analysis of how these things work.

So, the idea here is there's going to be different individuals,

they're on a network,

they're each making decisions,

and you care about the actions of your neighbors.

So the early literature on this,

came out of the computer science literature,

and what it was really interested in was how complex the computation of

equilibrium is in these settings in worst-case games.

So, how hard would it be for a computer to actually

find an equilibrium of one of these games

in the case where nature was making it as hard as possible for you to find an equilibrium?

What we're going to be focusing on is a second branch of this literature,

which, instead of being interested in the worst-case computational issues,

is interested in applying games on networks to actually

understand how networks influence human behavior.

3:06

One thing that's nice is a lot of the interactions that we

tend to have between individuals will have more structure,

and so the games will be nice ones,

they won't be the worst-case games that

are going to be computationally complex,

they're going to be ones where we can

actually say something meaningful about this structure.

So, what we're going to start with is a canonical special case.

So, it's a very simple version of the game but

one that is going to be fairly widely applicable.

So, we're looking at a situation where person i is going to take an action.

Let's let that be x_i,

and we'll start with the case where it's just a binary action,

is either zero or one,

so I either buy this book or I don't buy the book.

Or I invest in a new technology, or I don't.

I learn a language, or I don't learn a language.

I end up going to a movie, or I don't go to the movie.

The payoff is going to depend on how many neighbors choose each action.

So, how many people choose action zero?

How many neighbors choose action one?

How many neighbors do I have?

So, my payoff is going to depend on those things.

So, we've got each person choosing an action in zero, one.

We're going to consider a situation where your payoffs depend on your action.

So, person i's payoff depends on their action,

it's also going to depend on the number of

neighbors of i that choose one.

So, how many of my neighbors chose one?

It'll depend on my degree,

how many neighbors I have.

So, if I have 100 neighbors,

it might be different than if I have three neighbors,

and two of them are choosing action one.

Two out of three is different than two out of 100,

so I might care differently depending on how many neighbors I have.

So, what are the main simplifying assumptions in this setting? The

simplifying assumptions are that we've got just the zero-one

action, so we either take the action or we don't.

I only care about the number of friends taking the action,

not the identities of them.

So I don't distinguish between best friends and less-close friends.

I treat friends equally in terms of who's taking the action.

My payoff also just depends on my degree,

so given how many friends I have,

I don't have a different preference function than somebody else.

So we can enrich these models later to allow for people to

have different preferences and weight things differently.

But for now, let's think of a world where everybody treats

their friends equally and it only matters how many friends they have,

not who their friends are.
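Under those assumptions, a payoff function needs only three inputs. Here is a minimal Python sketch; the particular functional form and the 0.5 cost are my own illustration, not something specified in the lecture:

```python
def payoff(x_i, m_i, d_i):
    """Payoff of player i in the canonical setting.

    x_i : own action, 0 or 1
    m_i : number of i's neighbors who choose action 1
    d_i : i's degree (how many neighbors i has)

    By assumption it depends only on these counts,
    never on *which* neighbors take the action.
    """
    if x_i == 0:
        return 0.0
    # Illustrative form: adopting is worthwhile when a large enough
    # *fraction* of neighbors adopts (the 0.5 cost is made up).
    return m_i / d_i - 0.5

# Two adopters out of three neighbors differs from two out of a hundred:
print(payoff(1, 2, 3) > 0)    # True
print(payoff(1, 2, 100) > 0)  # False
```

Making the payoff depend on the fraction m_i / d_i is one way to capture the point that two out of three neighbors adopting can matter very differently than two out of a hundred.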

So, let's look at an example of a simple game of complements.

I'm willing to choose this new technology if and only if at least t neighbors do.

So, suppose I'm learning to play bridge,

a card game; I have to have

at least three friends who play bridge before I'm going to want to learn to play bridge.

So, my payoff to playing action zero,

if I don't learn it, I just get a zero.

One example of this would be that I get a payoff from playing action one,

which looks like minus this threshold plus how many friends play it.

So, if this threshold was three,

then I get minus three plus how many of my friends play it.

So, for instance, if exactly three of my friends play it,

I'm going to get a payoff of zero.

If four of my friends play it, I've got a payoff of one.

If five of my friends play it,

I have a payoff of two, and so forth.

So, this would be a very simple example,

where I'm going to be willing to choose action

one if and only if at least three of my neighbors do.

But you could write down all kinds of different payoff matrices,

this is just one example.
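That one example can be written in a couple of lines; the sketch below uses the threshold t = 3 to match the bridge example:

```python
def complements_payoff(x_i, m_i, t=3):
    """Simple game of complements: action 0 pays zero,
    action 1 pays (number of neighbors playing 1) minus the threshold t."""
    return 0 if x_i == 0 else m_i - t

# With t = 3, playing 1 is worthwhile if and only if
# at least three neighbors play 1:
print(complements_payoff(1, 3))  # 0  -> indifferent at exactly three
print(complements_payoff(1, 4))  # 1
print(complements_payoff(1, 5))  # 2
print(complements_payoff(1, 2))  # -1 -> choosing 0 (payoff 0) is better
```

So the best response is to play 1 exactly when at least t neighbors do, which is the threshold rule described above.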

So, let's think of looking at a network now,

and we've got a situation where we've got a bunch of different people,

and a person's willing to take action one if and only if at least two neighbors do.

So, this is a game where once I have

at least two of my friends who bought this new technology,

I'm willing to do it, otherwise I don't.

So, what do we know first of all?

Well, if we look at this network,

all these blue people,

they're going to take action zero because they only have one friend.

Actually, sorry, this person has two friends, so that one shouldn't be coded as a zero.

So, these three individuals only have one friend.

So, they're definitely going to have to take action zero,

there's no way they're going to have at least two neighbors do it.

But what we can do is

ask: what about this player?

Well, their action is going to depend on what their other friends do.

One possibility is that we set, for instance,

these three individuals all to playing action one.

So, if these two individuals are doing it,

then this person's willing too,

they're all willing to because now they each have at least two friends doing it.

So, one possibility would be to stick where we were before,

where nobody takes the action because nobody else

does and so the technology never gets off the ground.

So, it's possible that if it's a technology that needs people to

want to communicate with other people and to have other people do it before they do it,

there's a possibility of it never getting seeded;

it never gets off the ground.

Another possibility is yes,

these three people all adopt it because they each have two friends who do it,

and so, that's also an equilibrium.

Now, if these are the only people adopting,

then nobody else actually wants to do it because

all the other individuals still have at most one friend who did it,

so nobody else is above their threshold,

and indeed it's still an equilibrium

for these three people to do it and nobody else to do it.

So, nobody else wants to take the action

because none of the other people have two neighbors doing it.
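Both outcomes just described can be checked mechanically. The network below is a hypothetical stand-in for the picture in the lecture (a triangle of three mutually connected players, each of whom also has one degree-one neighbor); the equilibrium check follows the threshold-2 payoffs:

```python
# Hypothetical network: players 0, 1, 2 form a triangle, and each of
# them also has one degree-one neighbor (players 3, 4, 5).
neighbors = {
    0: {1, 2, 3},
    1: {0, 2, 4},
    2: {0, 1, 5},
    3: {0}, 4: {1}, 5: {2},
}

def is_equilibrium(profile, t=2):
    """A profile (player -> 0/1) is a Nash equilibrium if no single
    player strictly gains by switching, where playing 1 pays
    (number of neighbors playing 1) - t and playing 0 pays zero."""
    for i, nbrs in neighbors.items():
        m = sum(profile[j] for j in nbrs)
        current = m - t if profile[i] == 1 else 0
        deviation = 0 if profile[i] == 1 else m - t
        if deviation > current:
            return False
    return True

nobody_adopts = {i: 0 for i in neighbors}
triangle_adopts = {0: 1, 1: 1, 2: 1, 3: 0, 4: 0, 5: 0}
print(is_equilibrium(nobody_adopts))    # True: never gets off the ground
print(is_equilibrium(triangle_adopts))  # True: the triangle adopts, nobody else
```

Both profiles pass, illustrating the multiplicity: nobody adopting and only the mutually connected trio adopting are each equilibria of the same game on the same network.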

9:25

So, that's one type of game.

Let's take a look at a game which is going to have an opposite feature to this.

So, this was one which had a feature that if more of my friends take the action,

then I'm more likely to want to take the action.

So, compatible technologies will have that feature.

But now let's think of the example

where if one of my friends buys the book,

I don't buy the book because now I can borrow it from them.

So, I'm willing to buy the book if and only if none of my neighbors do.

So, for instance, if I don't buy the book, what's my payoff?

My payoff is one,

if one of my neighbors buys the book.

If the number of neighbors who bought the book is positive,

I can borrow it from them,

I get a payoff of one.

If none of my neighbors bought the book,

I can't borrow it,

I get a payoff of zero, I didn't buy it.

Now, instead, I could buy it myself.

If I end up buying the book myself,

then what do I end up with?

I end up with a payoff of one minus c,

where c is the cost of the book.

So, I'm in a situation where,

well, in terms of my payoffs here,

my optimal payoff would be:

I'd love to have one of my friends buy it,

and me not buy it and borrow it from them.

That would give me the payoff of one,

that's my best possible payoff.

My worst payoff is nobody buys it and I don't buy it.

So if none of my friends buy it,

then I would actually be willing to buy it,

as long as c is less than one.
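The book-borrowing payoffs can be summarized the same way; c = 0.5 below is an illustrative price satisfying the lecture's assumption that 0 < c < 1:

```python
def book_payoff(x_i, m_i, c=0.5):
    """Free-riding example: the book is worth 1 if I can read it.

    x_i : 1 if I buy the book, 0 if I don't
    m_i : number of my neighbors who bought it
    c   : price of the book (assumed to satisfy 0 < c < 1)
    """
    if x_i == 1:
        return 1 - c               # I own it, but I paid for it
    return 1 if m_i > 0 else 0     # borrow from a neighbor if anyone bought it

print(book_payoff(0, 1))  # 1   -> best case: a friend buys, I borrow
print(book_payoff(1, 0))  # 0.5 -> no friend buys, so buying beats going without
print(book_payoff(0, 0))  # 0   -> worst case: nobody buys and I don't either
```

Since borrowing (payoff 1) beats buying (payoff 1 - c), I prefer to free ride whenever a neighbor has the book, and I buy it myself only when none of my neighbors do.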