0:00

Hi. In this lecture I want to continue our discussion of the prisoner's dilemma, and I want to focus on how we get cooperation in the prisoner's dilemma. I'm going to highlight seven ways that scholars have identified by which cooperation can emerge in a prisoner's dilemma, even though it's not necessarily in anyone's individual interest to cooperate. Now, remember how the prisoner's dilemma looks: we've got two players, player one and player two, and each one has two actions; they can cooperate or they can defect. It's in our collective interest to have us both cooperate; we both get a payoff of four. But individually, because six is bigger than four and two is bigger than zero, it's always in each player's interest to defect. If we both defect, we both get payoffs of two, which is worse than if we both cooperate. So individual interests don't line up with collective interests: the individual incentives point us toward defecting, but collectively we'd like to cooperate. So how do we get that cooperation? To analyze that, I want to move to a somewhat simpler model. I'm going to assume that each person just has an action they can take, and if they take that action, it has some costs and some benefits. So I assume that if I cooperate, that has a cost to me of C.
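As an aside, the two-by-two game just described can be sketched in a few lines of Python to confirm that defection dominates. This is a minimal sketch; the payoffs 4, 6, 0, and 2 are the ones from the lecture's matrix, and the function name is just for illustration:

```python
# Row player's payoff for (my action, opponent's action):
# both cooperate -> 4, defect against a cooperator -> 6,
# cooperate against a defector -> 0, both defect -> 2.
PAYOFF = {("C", "C"): 4, ("D", "C"): 6, ("C", "D"): 0, ("D", "D"): 2}

def best_response(opponent_action):
    """Return the action that maximizes my payoff against a fixed opponent."""
    return max(["C", "D"], key=lambda a: PAYOFF[(a, opponent_action)])

# Defection is the best response to either action (6 > 4 and 2 > 0),
# so mutual defection results, even though mutual cooperation pays more.
print(best_response("C"), best_response("D"))  # D D
```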

But it has a benefit to the person I'm playing with of B, and I'm further going to assume that their benefit is larger than my cost. So, socially, we'd like me to cooperate, because the other person's benefit is larger than my cost. Individually, I'd rather not cooperate, because my cost is positive. This captures the essence of a prisoner's dilemma, right? Individually, I'd rather not cooperate; socially, everyone would prefer that I do. So in this simpler setting, I want to talk about the different ways in which we can get cooperation. I want to start by talking about some work by Martin Nowak. Now, Martin has this wonderful book called SuperCooperators, where he goes into much more detail about these different mechanisms, and the language I'm going to use comes from his book. The first way in which we can get cooperation in something like a prisoner's dilemma is through repetition. Nowak refers to this as direct reciprocity. What does he mean by that? He means we're going to play this game many times. And if we're going to play this game many times, I can recognize that maybe it's in my interest to cooperate now, because if we meet next time, we'll cooperate in the future. My colleague Bob Axelrod has a very simple strategy that can induce cooperation, called tit for tat. We both start out cooperating, and as long as the other person keeps cooperating, I cooperate. If that person ever defects, then I defect. This very simple strategy can keep us both cooperating, provided we meet often enough. That's the essence of direct reciprocity. Let's see why. Let p be the probability that we meet again, and normalize the payoff from defecting to zero. Then what's my payoff if I cooperate? Well, if I cooperate, it's going to cost me C. However, there's some probability p that I'll meet you later, and when we meet later, you'll cooperate with me, and I'll get a payoff of B. So my payoff isn't just minus C, as it would be if I were defecting against; it's minus C plus p times B, the benefit if we meet again. If that ends up being positive, I should cooperate. We can rewrite that condition: if the probability of meeting again is bigger than C over B, then cooperation should emerge in the prisoner's dilemma. Let me give an example from my life that sort of

explains how this works. I used to live in Los Angeles, which is a huge city (Pasadena, actually), and then my wife and I moved to Iowa City. One of the first days I was in Iowa, I was in the grocery store, just buying a couple of items. The woman in front of me in the grocery store had a cart full of food, and she said to me, why don't you go ahead? Now, I was shocked, because no one in L.A. ever let me jump ahead of them in line at the grocery store [laugh], no matter how much stuff they had in their carts. But the reason she did this is not that people in Iowa are intrinsically nicer than people in L.A.; it's that she knew she was likely to meet me again, because Iowa City is a small town. So let's see how that works; it's just direct reciprocity. Suppose the benefit to me of jumping in front of her was ten, because she had that cart full of food, and suppose the cost to her was only two, because I only had a few items. So the ratio of cost to benefit is two over ten, or one fifth. Now the question is: what's her likelihood of meeting me again? Well, in a place like Los Angeles, the chance we end up at the same grocery store again could be maybe one in a thousand; it's not very big. But in a town like Iowa City, there might be a fifty percent chance she's going to see me again [laugh]; it's not a very big town. So, given that she's likely to see me again, in Iowa City she's going to cooperate; in L.A., she's not. In Iowa City she has a greater likelihood of direct reciprocity, and direct reciprocity leads to cooperation.
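A minimal sketch of this threshold in Python, using the grocery-store numbers from the story (benefit 10, cost 2, so C over B is 0.2; the two probabilities are the lecture's rough guesses):

```python
def cooperates(p, benefit, cost):
    """Direct reciprocity: cooperate when -cost + p * benefit > 0,
    i.e. when the probability of meeting again exceeds cost / benefit."""
    return p > cost / benefit

# C/B = 2/10 = 0.2, so cooperation needs p above one fifth.
print(cooperates(p=0.001, benefit=10, cost=2))  # Los Angeles: False
print(cooperates(p=0.5, benefit=10, cost=2))    # Iowa City: True
```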

What's another method? Reputation. Nowak calls this indirect reciprocity. The idea behind reputation is as follows. Instead of us directly meeting again, maybe I get to know the woman who was in front of me in the grocery store, and I tell other people how nice she is. So she gets a reputation. Now, instead of p being the probability that we meet again, let q be the probability that her reputation gets out. What happens is this: the cost to her of letting me go ahead is still C, but her benefit is q, the probability that her reputation becomes known, times B, because if she's known to be a nice person, other people will cooperate with her, knowing she's going to cooperate with them or with someone else. So you're creating this sort of virtuous cycle of people cooperating with one another. Again we get the same sort of inequality: as long as q, the probability of the reputation becoming known, is bigger than that ratio C over B, we're going to get cooperation. So notice the subtle difference. In direct reciprocity, I'm cooperating because I'm going to meet you again, and I think I'm going to get a payoff from you. In indirect reciprocity, I'm hoping to get a good reputation. I'm hoping that person will spread far and wide how cooperative I am, and then when somebody else meets me, they'll say, oh, there's Scott, he's cooperative; I'll cooperate with him because he's such a nice person. And through indirect reciprocity we can induce cooperation.
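Here's a small Monte Carlo sketch of that reasoning, under the simplifying assumption that my good reputation reaches the next stranger with probability q (the benefit and cost numbers are reused from the grocery-store example; this is an illustration, not a model from the lecture):

```python
import random

def payoff_from_cooperating(q, benefit, cost, trials=100_000, seed=0):
    """Indirect reciprocity sketch: I pay `cost` now; with probability q my
    good reputation reaches the next stranger, who then cooperates with me."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        total += -cost + (benefit if rng.random() < q else 0)
    return total / trials

# Cooperation pays on average exactly when q > cost / benefit = 0.2.
print(payoff_from_cooperating(q=0.5, benefit=10, cost=2) > 0)  # True
print(payoff_from_cooperating(q=0.1, benefit=10, cost=2) > 0)  # False
```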

Here's a third: network reciprocity. Suppose we've got a set of cells, or people, in a network, and we want to ask: will they cooperate with one another? Is it in their interest to cooperate with one another? What I'm going to do is consider a very regular graph. In a regular graph, everybody has the same number of neighbors; say each person has K neighbors. What we're going to see is that if K is less than B over C, that same ratio again, then we're likely to get cooperation. Let's see why. In this setting, I'm going to make a different assumption about behavior. I'm going to assume that people in networks decide what behavior to follow based on how successful their neighbors are. So let's think of a simple network where the benefit of having someone cooperate with you is five and the cost of cooperating is two, and where each person is connected to two people. Red is going to denote defectors, and green is going to denote cooperators, in this long line of people. If you're a defector surrounded by two defectors, your payoff is going to be zero, because you're neither cooperating, nor is anybody who's playing against you cooperating, so your payoff is just a flat zero. If you're a cooperator playing with cooperators, your payoff is going to be six. Why is that? You're going to get plus five from each of the two people playing with you, but you're paying two for cooperating with each of them, and that gives you a payoff of six. So now think about the person sitting in the center, on the edge between defectors and cooperators. What are they going to do? Well, they look at the defector to their left and see that this person isn't cooperating with anybody, so it's not costing them anything; I'm cooperating with them, so they're getting a payoff of five. The cooperator to my right is getting a payoff of six, as we talked about before. So this person in the center reasons: the defector is getting five, but the cooperator is getting six. Their impression is that cooperating is better than defecting, so they're going to cooperate. Now let's change the payoffs. Suppose the cost of cooperating is three. Well, now the payoffs to the defectors are unchanged, because they're not cooperating with anybody. But the payoffs to the cooperators fall by two: before they were six, now they're going to be four. Let's see why that's true. Remember, they're cooperating with two people, and two people are cooperating with them. That means they're getting two benefits of five, but paying two costs of three, and adding that up gives four. So now, when the person in the center looks to their right and their left, they're going to see that the defectors look like they're doing better, and so this person is going to switch and defect. Now, we can do more elaborate versions. Suppose we now have K equal to four, so each person is playing against four people. Again, if a person is defecting and all their neighbors are defecting, their payoff is going to be zero. And if we look at someone who's cooperating, with all four of their neighbors cooperating, we can figure out their payoff as well: they get four times five, and they pay four times two. That's twenty minus eight, so the payoff is twelve. Now look at a cooperator on the boundary: their payoff is seven. If they look at the cooperators they know, they say, wow, these people are getting twelve; and if they look at the boundary defector they know, they see that person is getting fifteen. So they're going to defect. What's interesting here is that the benefits are still five and the costs are still two, and this person wants to defect, whereas previously, with the same benefit of five and cost of two but only two neighbors, the person wanted to cooperate. You see, as you get more connected, you have stronger incentives to defect.
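The payoff arithmetic in these two examples can be sketched with a small helper (a minimal sketch; the "C"/"D" labels and the function name are just for illustration):

```python
def payoff(my_action, neighbor_actions, benefit, cost):
    """Payoff on a graph: receive `benefit` from each cooperating neighbor;
    pay `cost` per neighbor if I cooperate."""
    received = benefit * sum(1 for a in neighbor_actions if a == "C")
    paid = cost * len(neighbor_actions) if my_action == "C" else 0
    return received - paid

# K = 2, B = 5, C = 2: the first example.
print(payoff("C", ["C", "C"], 5, 2))  # interior cooperator: 6
print(payoff("D", ["C", "D"], 5, 2))  # boundary defector: 5

# K = 4, B = 5, C = 2: the second example.
print(payoff("C", ["C", "C", "C", "C"], 5, 2))  # interior cooperator: 12
print(payoff("C", ["C", "C", "C", "D"], 5, 2))  # boundary cooperator: 7
print(payoff("D", ["C", "C", "C", "D"], 5, 2))  # boundary defector: 15
```

With two neighbors the visible cooperator out-earns the visible defector (6 versus 5), so imitation favors cooperation; with four neighbors the boundary defector's 15 beats the cooperators' 12, so imitation favors defection.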

Let's see why we get this condition, K less than B over C. The way to think about it is to consider one defector who's sort of sticking out, in the midst of all these cooperators. In general, you've got K neighbors. Suppose you're a cooperator surrounded by cooperators. What's your payoff going to be? It's going to be K times the quantity B minus C, because everybody cooperates with you, so you're getting K times B, but you're cooperating with everybody else, so you're losing K times C. Now suppose you're a boundary defector [laugh], somebody who's defecting whose other neighbors are all cooperators. Then your payoff is going to be K minus one, the number of neighbors cooperating with you, times B. Now let's look at when cooperation does better. When is K times (B minus C) going to be bigger than (K minus one) times B? Well, if we just do the math, we get that B over C has got to be bigger than K. So this is the inequality we get: K has got to be less than B over C. So again, very simple mathematics explains what's necessary for cooperation.
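That comparison reduces to a one-line check; a sketch:

```python
def cooperation_stable(k, benefit, cost):
    """Network reciprocity: imitation favors cooperation when an interior
    cooperator, earning k*(benefit - cost), out-earns a boundary defector
    with k-1 cooperating neighbors, earning (k-1)*benefit.
    Algebraically this reduces to k < benefit / cost."""
    return k * (benefit - cost) > (k - 1) * benefit

# With B = 5 and C = 2, B/C = 2.5: stable at K = 2, not at K = 4.
print(cooperation_stable(2, 5, 2))  # True
print(cooperation_stable(4, 5, 2))  # False
```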

Now, when you think about reputation, you'd like a really dense network, because reputations are more likely to spread. When you think about this network reciprocity story, you'd like a less dense network, because then there's less of an incentive to defect. So whether you want a rich network, with lots of connections, high degree, and a high clustering coefficient, or whether you'd like a sparse network, depends on the mechanism you're using to get cooperation. If you're relying on reputation, you want lots of clusters and lots of connections. If you're relying on network reciprocity, you'd prefer the network to be sparser. Next: group selection. Group selection refers to the idea that selection can operate on groups of people as opposed to individuals, and so groups of cooperators can win out. Here's the idea. Suppose you've got two groups of people, a red group and a blue group, and each group has within itself some percentage of cooperators. Let's suppose the red group has eighty percent cooperators, and the blue group has only fifty percent cooperators. Now let's suppose the red group and the blue group go to war. Well, who's likely to win? The red group is likely to win, because they've got more cooperators, and over time they've benefited more: they probably have more food, better technology, all sorts of stuff. So when you think of going to war, groups of cooperators are likely to beat groups of defectors. What's going to happen is that even though defectors do better within a group, when those groups go to war against each other, groups with more cooperators are likely to win. And so, through selection at the group level, if there's competition between these groups, and as long as that competition is frequent enough, you can actually get a force towards

cooperation. Last, you have kin selection. In kin selection, the idea is this: different members of a species have different degrees of relatedness, and so if someone is my brother, my offspring, or my second cousin, I may actually care about their benefit. What we formally do is define some measure of relatedness, r. For a child, that relatedness would be one half, genetically. So suppose I could do something that benefits my child ten and only costs me two. If I were purely selfless, I'd simply note that ten is bigger than two. But even if I just take genetic relatedness into account, I ask: is r times ten, that's a half times ten, or five, bigger than two? It is, so I help. This particular model has been used a lot in ecology, because there are some species, like ants and bees, where r is really, really high, and it's not surprising that within those species you see lots of cooperation.
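This is Hamilton's rule, r times B bigger than C; a quick sketch with the lecture's numbers:

```python
def helps_kin(relatedness, benefit_to_kin, my_cost):
    """Kin selection (Hamilton's rule): help when r * B > C."""
    return relatedness * benefit_to_kin > my_cost

# The lecture's numbers: a child (r = 1/2), benefit 10, cost 2.
print(helps_kin(0.5, 10, 2))  # True: 5 > 2
# A purely unrelated stranger (r = 0) would not be helped on this rule.
print(helps_kin(0.0, 10, 2))  # False
```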

Okay. Those are the five general ways; now let's talk about two ways we can get cooperation in human societies. The first one is laws and prohibitions: you can just make things illegal. So, for example, it might be in my interest to talk on the cell phone while I'm driving, but it's not in society's interest, because it increases the probability that somebody else is going to get injured. So we pass laws saying it's not legal to talk on your cell phone while driving. Another thing we can do is create incentives. When I lived in Madison, Wisconsin, it would often snow a lot, and there was a law saying that twenty-four hours after the snow had stopped, you had to have your sidewalk shoveled. Now, the cost of shoveling my sidewalk was high for me, but other people would benefit, and they'd benefit more than it cost me to shovel. To induce me to do it, the city basically said: if you don't shovel, you're going to pay a huge cost, a hundred-dollar fine. And that fear of paying the hundred dollars made me shovel my walk. So that's a simple incentive: it wasn't illegal to leave my walk unshoveled; I was just going to have to pay a fine if I didn't.
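That incentive logic can be sketched as a simple decision rule. Only the hundred-dollar fine comes from the lecture; the thirty-dollar shoveling cost and the enforcement probability are assumed for illustration:

```python
def shovels(cost_of_shoveling, fine, p_caught=1.0):
    """Incentive sketch: shovel when the expected fine for not shoveling
    exceeds the cost of shoveling. `p_caught` is a hypothetical
    enforcement probability, not a number from the lecture."""
    return p_caught * fine > cost_of_shoveling

# A $100 fine against an assumed $30 cost of shoveling:
print(shovels(cost_of_shoveling=30, fine=100))  # True
```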

Okay. So we've seen a whole bunch of ways in which we can get cooperation in the prisoner's dilemma. It can be repetition, direct reciprocity; it can be reputation, indirect reciprocity. It can be a network effect. It can be group selection, where groups fight against each other, and the groups that cooperate are likely to win. There can be kin selection, where I cooperate with people who are related to me. And finally, we can have laws that just prohibit things that aren't good, and we can have incentive structures, where we pay people to cooperate when they'd naturally be willing to defect. Okay, so that's the prisoner's dilemma, and how we can solve it. But that's a simple two-by-two interaction. Where we want to go next is larger prisoner's dilemmas, where there are lots of players. These are sometimes called collective action problems. Okay? Let's move on... Thank you.
