0:00

Hello folks. And we're back again.

And I want to talk a little bit more about dynamics and network formation, and in particular about adding a little bit of noise to the dynamic process, which actually has some nice properties to it. So, let's have a peek: we're going to talk about stochastic evolution of networks.

And we'll start with a concept called an improving path. This comes out of work I did with Alison Watts, so it's Jackson and Watts 2002. The idea here is that we've got a sequence of adjacent networks, just as in the process we talked about before: you add a link if it benefits both players, at least one strictly, and you delete a link if either person benefits from its deletion.

But what we're going to do now, in terms of looking at these paths, is noise them up a bit. So first let's just make sure that we've got the concept of improving paths down. The idea here is, we can look at a simple example: everybody disconnected has a value of zero.

So now what we're going to do is have a diagram where we've got a bunch of different possible networks, and we're just going to keep track of what the improving deviations would be. So from here you can think of adding links: adding one link between any two of the players. Then you can think of adding another link, so you've got two links, or going to the complete network. And the payoffs here are telling us that players get zero from the empty network, while a single link in isolation is not worthwhile: you only get a value of minus one.

Once you form two links, then it's valuable, and three links is the most valuable. And what the arrows here do is represent the changes that would be value-improving for people. So, if you were in a situation where you had one link, then you could benefit either by severing that link and going to zero, or, if the possibility of adding a new link arose, by adding either of the links that weren't there, which would also be improving. So there'd be three different ways to get out of this, and they would all be improving. So, that's the notion of improvements.

And the arrows here point in the directions of improvements. So the idea of improving paths is that there are paths in this setting where everybody does better along each step. An improving path from this network to the complete network, for example, would be one that passed through adding one link and then adding a second link. So if we follow these arrows, we've got a series of paths where each choice is an improvement.

And there are also improving paths leading in this direction: for many of these networks, you can have an improving path leading down to the empty network. So, in this situation there's actually a set of two Pairwise Stable networks, and in fact they're both Pairwise Nash Stable networks. We've got the end points here, the empty and full networks, being stable, and there are improving paths leading from different networks to one or the other of these, depending on which situation you're in. Okay, so that's the concept of improving paths.
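To make the example concrete, here's a small sketch that enumerates the improving moves. The payoff numbers are hypothetical choices that merely match the ordering described above (0 for everyone in the empty network, minus one for each member of a lone link, 1 for everyone with two links, 2 for everyone in the complete network); they are not values taken from the example itself.

```python
from itertools import combinations

NODES = (1, 2, 3)
ALL_LINKS = [frozenset(p) for p in combinations(NODES, 2)]

# Hypothetical payoffs chosen only to match the ordering in the example:
# empty pays 0; a lone link pays its two members -1 (isolated player 0);
# two links pay everyone 1; the complete network pays everyone 2.
def payoff(player, g):
    if len(g) == 0:
        return 0
    if len(g) == 1:
        return -1 if any(player in link for link in g) else 0
    return 1 if len(g) == 2 else 2

def improving_moves(g):
    """Networks reachable from g in one improving step: add a link if both
    players weakly gain and at least one strictly; delete if either gains."""
    moves = []
    for link in ALL_LINKS:
        i, j = tuple(link)
        if link not in g:
            h = frozenset(g | {link})
            gains = (payoff(i, h) - payoff(i, g), payoff(j, h) - payoff(j, g))
            if min(gains) >= 0 and max(gains) > 0:
                moves.append(h)
        else:
            h = frozenset(g - {link})
            if payoff(i, h) > payoff(i, g) or payoff(j, h) > payoff(j, g):
                moves.append(h)
    return moves

empty = frozenset()
complete = frozenset(ALL_LINKS)
print(improving_moves(empty))     # [] -> no improving moves: pairwise stable
print(improving_moves(complete))  # [] -> no improving moves: pairwise stable
print(len(improving_moves(frozenset([ALL_LINKS[0]]))))  # 3 ways out of a one-link network
```

Both end points have no improving moves, which is exactly the pairwise stability of the empty and complete networks, while a one-link network has the three ways out described above.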

So we've got two Pairwise Nash Stable networks, these end points, right? Both are Pairwise Nash Stable, and there's a question of, well, is one of them more realistic as an outcome than the other? In fact, they're both strict equilibria: here, nobody would want to add a link, because it's strictly costly for them to do so, and here, obviously, deleting a link is strictly costly for anybody. So just looking directly at these concepts, and actually even adding strictness, isn't going to give us predictability.

So what we're going to do is add some trembles and errors to these improving paths. We'll start at some network and, with equal probability, look at any one of its links, just like we did before. We'll add that link if it's not present and both people want to add it; we'll delete it if it is there and somebody benefits from deleting it. But the key thing is, we're going to reverse that decision with some probability epsilon. So there'll be a probability epsilon, greater than zero, that for whatever reason somebody makes a mistake: when they want to add a link, they don't; when they want to delete a link, they don't; a link that's beneficial gets deleted; or a link that's not beneficial gets added. Right?

So we're in a situation where some link is recognized, people want to do one thing with it, either keep it, get rid of it, or add it, and the decision is reversed. Okay. So whatever decision is made, it's reversed with some probability epsilon, and we'll think of this as a small probability. So there's some chance that people make errors, but most of the time they'll make the improving choice. Okay?

So, what this does is add trembles and errors to improving paths. And what this produces is a process where, as long as there are errors, there's always a chance that you're going to keep moving around, right? You're never going to stay anywhere forever, and you can get from any network to any other network through a series of errors and improvements. And so this is a finite-state process: there's a finite set of possible networks you could be in. It's irreducible, in the sense that from any one you can get to any other one, and it's aperiodic: you could move in odd cycles or even cycles. So this is a very nicely behaved Markov chain, of a kind we know a lot about in terms of theory. It's going to have a nice distribution over how much time you'll spend at each state.

So, given epsilon, we'll have an idea of how much time you would spend at each network in this process over time: there's a steady-state distribution which gives us the long-run distribution of how much time you'll spend at each one of these networks.

So, let's have a look at this process in more detail. What we've got, when we've noised up our improvement paths: let's suppose we're here, at the empty network. Well, now with probability one third, any one of these links could be recognized. So what's going to happen is, if we recognize a link, it doesn't want to be added, but with probability epsilon it would be added anyway. So there's a probability of one third epsilon that you would end up recognizing this link and adding it by accident, a probability of one third epsilon that you would move here, and a probability of one third epsilon that you would move in this direction. So, from a network with zero links, there's a probability of epsilon that you go from zero links to one link, and a probability of one minus epsilon that you just stay there. Okay?

And so forth. If we're here with three links, then there's a probability epsilon that you go from three to two, and a probability of one minus epsilon that you go from three back to three. And so forth, okay. And from one of these networks, there's a one third times one minus epsilon chance that you go to the complete network, a one third times epsilon chance that you go and delete a link that you wanted, another one third times epsilon chance here, and so forth.

So, for each one of these, we can figure out the probability that you move from a network with zero links to one with one link, from one with one to one with two, and so forth. So instead of keeping track of all of the networks, let's just keep track of them in terms of how many links they have: zero links, one link, two links, three links. What's the probability that we move between those?

Okay. Well, you can keep track of that: what's the probability that you move from any one to any other one? We can go through the probability that you go from a network with zero links to one with zero, one, two, or three links; in particular, you can't move directly to two links. If you have one link, there's a one third times one minus epsilon chance that you recognize the link you have and go back to zero, there's an epsilon chance you stay where you are, and there's a two thirds times one minus epsilon chance that you recognize one of the links that you want to add and go to a two-link network; and so forth. All right, so this matrix keeps track of the probability that, if you currently have zero, one, two, or three links, you transition to a different state in the next period.
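As a concrete check, the transition probabilities just described can be collected into a matrix indexed by link count; this is a sketch in Python, with epsilon left as a parameter, and each row should sum to one.

```python
# Transition matrix over link counts 0..3 for the three-player example:
# a uniformly chosen link's improving decision is reversed with prob. eps.
def transition_matrix(eps):
    return [
        [1 - eps, eps, 0.0, 0.0],                             # empty: links added only by error
        [(1 - eps) / 3, eps, 2 * (1 - eps) / 3, 0.0],         # one link
        [0.0, 2 * eps / 3, (2 - eps) / 3, (1 - eps) / 3],     # two links
        [0.0, 0.0, eps, 1 - eps],                             # complete: links deleted only by error
    ]

P = transition_matrix(0.1)
for row in P:
    print([round(p, 4) for p in row])  # each row sums to one
```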

Â 9:06

That's a well-defined Markov chain. And given this, you can figure out the probability of being in each state: there's a solution for how much time you'll spend in state zero, in state one, in state two, and in state three. So what's the average frequency of periods you're going to spend in each one of these states? That's a well-defined Markov chain problem: you look for the steady-state distribution, which in this case happens to be the left-hand-side unit eigenvector of this matrix.

If you don't remember much about Markov chains, or haven't seen them before, you can find detailed treatments on the internet; I also have some pages in my book that go through the basics of Markov chains, and there are lots of sources where you can find information about this. The important thing is that once we've got this process, it's a well-defined dynamic process.

And we can figure out how much time you're going to spend in each different state. In this case, when you look at the fraction of time you're going to spend in the zero-link network, in a one-link network, in a two-link network, and in the three-link network, this is the answer: the unique steady-state distribution.
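One way to compute that steady-state distribution concretely is to iterate pi into pi times P, starting from the uniform distribution, until it converges to the left unit eigenvector; a sketch, using the transition matrix for this example and an illustrative epsilon of 0.1.

```python
# Steady state as the left unit eigenvector of P (pi = pi P),
# found by iterated left-multiplication from the uniform distribution.
def transition_matrix(eps):
    return [
        [1 - eps, eps, 0.0, 0.0],
        [(1 - eps) / 3, eps, 2 * (1 - eps) / 3, 0.0],
        [0.0, 2 * eps / 3, (2 - eps) / 3, (1 - eps) / 3],
        [0.0, 0.0, eps, 1 - eps],
    ]

def steady_state(P, iters=5000):
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

pi = steady_state(transition_matrix(0.1))
print([round(p, 3) for p in pi])  # [0.075, 0.025, 0.225, 0.675]
```

For epsilon = 0.1 this comes out to 7.5%, 2.5%, 22.5%, and 67.5% of the time in the zero-, one-, two-, and three-link networks respectively.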

And you can find this by just checking the left-hand-side unit eigenvector of that matrix. So what's the interesting thing about this? Well, the interesting thing is that we had two Pairwise Nash Stable networks: the empty network and the complete network. And if you look at the amount of time you're spending in each one of these as epsilon gets very, very small, right, as epsilon goes to zero, this approaches one.

Â 10:53

This term approaches zero, this term approaches zero, and this term approaches zero. So this process is going to spend more and more of its time in the three-link network. So even though both of these were Pairwise Nash Stable, this dynamic process is going to end up spending, as epsilon becomes very small, most of its time in the three-link network. And you can see that by looking at the limit of this unique steady-state distribution.

This is what's known as stochastic stability in the game theory literature. It's a technique for refining equilibria that has become quite powerful, and Alison and I used it to develop methods for working through dynamics in networks. Here it gives a prediction of which network is going to be visited most often, and stayed at for most of the time, in this process as epsilon becomes quite small.
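One way to see that limit numerically: since this chain only ever moves between adjacent link counts, it is a birth-death chain, and its steady state can be read off from the ratios of up-move to down-move probabilities (a sketch; the probabilities are the ones worked out above).

```python
# Steady state of the link-count chain via detailed balance:
# pi[k+1] / pi[k] = P(k -> k+1) / P(k+1 -> k).
def steady_state(eps):
    up = [eps, 2 * (1 - eps) / 3, (1 - eps) / 3]   # P(k -> k+1) for k = 0, 1, 2
    down = [(1 - eps) / 3, 2 * eps / 3, eps]       # P(k+1 -> k) for k = 0, 1, 2
    pi = [1.0]
    for u, d in zip(up, down):
        pi.append(pi[-1] * u / d)
    total = sum(pi)
    return [p / total for p in pi]

for eps in (0.1, 0.01, 0.001):
    print(eps, [round(p, 4) for p in steady_state(eps)])
# As eps shrinks, the mass on the three-link network heads to one,
# even though the empty network is also Pairwise Nash Stable.
```

Already at epsilon = 0.01, more than 96% of the time is spent at the complete network.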

And one thing to notice is that, while this places probability one here in the limit, that's not the same as the steady states of the limit process. If you actually look at the process that has zero errors in it, you could have ended up in either the empty network or the full network. So the steady states of the limit process, where you have no noise, are different from the limit of the steady states of the processes with little bits of noise. So the little bits of noise here are very important in making predictions.

And let's go through the intuition of why it is that, even though we have two Pairwise Nash Stable networks, we're spending most of our time here and very little of our time here, even though this is Pairwise Nash Stable. When we go to the limit, we're spending most of our time here.

Okay. And the idea is to think about trembling: how do we leave this state? Suppose we're at the network with three links. Well, we can make an error; if we make an error and bounce to one of these, there's probability epsilon of that happening, but most likely it's going to come right back. You would need two errors in order to get to a point where you might reach the other network, the empty network, right? You'd have to make two errors in a row before you could get to a point where you find a network which then naturally has a tendency to lead back to the empty network, the other Pairwise Nash Stable network.

In contrast, one error on this network leads you to something which then leads naturally back to this one. So when you've got errors, it's much harder, in the sense this stochastic stability notion captures, to get away from the full network: the full network is the more stable network. And it's relatively easier, it only takes one error, to get away from the empty network. So what this does is begin to look at what are known as basins of attraction. When you look at these dynamic processes, this one has a larger basin of attraction: if you get to any of these networks, you can get back to this one, whereas to get to this one you really can only get there from a smaller set of states.

And as you begin to look at this, the process more naturally gives you a complete network than a smaller one. So, this is a powerful tool for looking at dynamics, and you can simulate these things. This one we can solve for explicitly, with only three nodes and a relatively small number of networks; as you begin to enrich these kinds of processes, they're going to get more complicated, but you can begin to analyze them by simulation. So the idea here, again to emphasize: you need more errors to leave the basin of attraction of the complete network than to leave the basin of attraction of the empty network. And more generally, to solve these things, you need to keep track of the basins of attraction of many states.
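A minimal sketch of that kind of simulation, tracking just the link count with the one-step transition probabilities from the example (the choices of epsilon = 0.05, the number of steps, and the seed are arbitrary):

```python
import random

def simulate(eps, steps, seed=0):
    """Simulate the noisy improving-path process on link counts 0..3 and
    return the fraction of periods spent at each count."""
    up = {0: eps, 1: 2 * (1 - eps) / 3, 2: (1 - eps) / 3}   # P(k -> k+1)
    down = {1: (1 - eps) / 3, 2: 2 * eps / 3, 3: eps}       # P(k -> k-1)
    rng = random.Random(seed)
    state, counts = 0, [0, 0, 0, 0]
    for _ in range(steps):
        r = rng.random()
        if r < up.get(state, 0.0):
            state += 1
        elif r < up.get(state, 0.0) + down.get(state, 0.0):
            state -= 1
        counts[state] += 1
    return [c / steps for c in counts]

freqs = simulate(eps=0.05, steps=200_000)
print([round(f, 3) for f in freqs])  # most of the time is spent at three links
```

With a small epsilon, the empirical frequencies concentrate on the three-link state, mirroring the stochastic stability prediction.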

So there's a theorem by Freidlin and Wentzell from the 1980s on Markov chains, and that allows one to leverage this and to do calculations of how these things work. But more generally, this kind of process is going to select some of the networks. It's turning things into a Markov chain, and we know a lot about those.

Even though it's difficult to get analytic solutions, you can simulate these things. Okay, so what we've done is look beyond Pairwise Stability at some dynamics, some stochastic stability. There are other things in the literature that we're not going to touch on much: existence questions; forward-looking behavior, another interesting question, but we won't have time; and, while we talked a little bit about transfers, there are other, richer models of transfers and bargaining to form networks. So there's a series of topics which are important in understanding modeling; you can get references for them from the reference list here, but I'm not going to spend a whole lot of time going into them in this course.
