0:25

It could be rate at the application layer, or rate at the physical layer.

Later, in lecture thirteen, we'll be talking about different layers and the different metrics of quality on different layers. It could also be delay. It could be jitter, which is the variation in delay. It could be the energy you spend. It could be the distortion of your video. It could be any of these.

Although, in most of our lectures, we'll be looking at rate or throughput as the argument. In general, there is the question of whether this function is increasing or concave or not; we'll say a few words about that in a minute. And then you're going to sum up the utilities across all the users. This is the sum utility, or the so-called network utility, okay? Maximizing it is network utility maximization, or, in short, NUM. We'll come back to this a few more times later in this course.

That's the objective function. What about the constraints? There can be all kinds of constraints. There are technological constraints: for example, a router cannot support both a large degree and a large bandwidth per degree. And there can be economic constraints: for example, I am not willing to pay a certain cost in order to provide that much bandwidth supply.

And by the way, when we talk about the word "bandwidth," okay, we often actually mean capacity in bits per second, rather than the width of the frequency band being used, even though a wider band, other factors remaining constant, implies a bigger capacity. These are two physically and conceptually different terms. But it so happens that a lot of people use the word bandwidth; I guess it's a cooler name than the word capacity. So sometimes we stick to that incorrect terminology because it's used almost universally.

Now, what about the variables? Well, that depends on what the arguments of your objective function are, okay? If these are the arguments, maybe our variables could be the rates or throughputs themselves, or some other design factors that would impact these different metrics. So we maximize the summation of utility functions of some arguments, subject to some kind of constraint that says the variables, denoted by some vectors x and y, live in certain constraint sets X and Y, and so on.
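To make the formulation concrete, here is a minimal sketch, my own and not from the lecture, of the simplest NUM instance: two users with log utilities sharing one link, where the capacity value and the grid resolution are arbitrary choices.

```python
import math

def network_utility(x):
    """The NUM objective: the sum of per-user utilities, here log utilities."""
    return sum(math.log(xi) for xi in x)

def best_split(capacity, steps=1000):
    """Brute-force search over splits of one link between two users."""
    best_x, best_u = None, float("-inf")
    for k in range(1, steps):
        x = (capacity * k / steps, capacity * (steps - k) / steps)
        u = network_utility(x)
        if u > best_u:
            best_x, best_u = x, u
    return best_x

x_star = best_split(10.0)  # log utilities favor the equal split (5.0, 5.0)
```

With log utilities, the symmetry of the objective makes the equal split optimal; later lectures solve such problems distributively instead of by exhaustive search.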

Â 3:05

Now, the goal is to formulate, and try to solve, hopefully distributively, many of these problems. So let's look at each term of this utility function, okay? Let's say the argument is xi; it could be, say, the data rate for a particular user i. And the shape of the function is denoted by this function u sub i. So what kind of function could this be?

You see three functions, three curves, here in this plot of u(x) against x, a single-variable function. Function A is, let's say, a logarithmic function; we'll see in a minute that there is a special fairness notion associated with log functions. It is quite nice: smooth, increasing, and concave. But you see this curve B here is actually not smooth, because there is a jump at this point. Below this point, you get no utility. Above this point, you get a strictly positive, constant utility, okay? So there is a jump point.

Now, graph C is interesting. It's got a convex part, and then it becomes concave, to the point of bending completely flat, meaning that giving me more x doesn't increase my utility any more. So it goes through a convex part, then through a concave part, including a portion that is completely flat. So not only can the arguments of these utility functions be of various kinds; the shapes can also be quite different, okay. So, in particular: is it smooth? We often deal with smooth functions; in fact, in all the examples you will see, it will be smooth, but it doesn't have to be. Is it increasing? Pretty much always, because we're talking about something good; it's a utility function. If you were talking about something that measures how bad things are against some argument, maybe the answer would be different.

And third, is it concave or not? Now, increasing or not is about the first derivative of the utility function: we say it is non-negative. What about concavity or convexity? That is about the second derivative, okay. Concave means the second derivative is less than or equal to zero, whereas convex means, as you may recall from lecture four, that the second derivative is bigger than or equal to zero. Now, of course, this is for a single-variable function. For a multivariable function, the first derivative is a gradient vector, and the second derivative is not a scalar anymore, but the Hessian matrix; again, back in lecture number four.
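As a quick numerical illustration, my own sketch rather than the lecture's, these derivative tests can be checked with finite differences; the step sizes are arbitrary choices.

```python
import math

def first_derivative(f, x, h=1e-6):
    """Central-difference estimate of f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

def second_derivative(f, x, h=1e-4):
    """Central-difference estimate of f''(x)."""
    return (f(x + h) - 2 * f(x) + f(x - h)) / h ** 2

# log(x) is increasing (f' > 0) and concave (f'' < 0) for x > 0.
d1 = first_derivative(math.log, 2.0)   # exact value is 1/2
d2 = second_derivative(math.log, 2.0)  # exact value is -1/4
```

The positive first derivative and negative second derivative are exactly the "smooth, increasing, concave" conditions described above.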

Â 6:27

Even if you get happier and happier, the rate at which you become happier will be going down. So, for example, this kind of function is still going up, up, up; it never drops. But how fast it goes up, per unit increase in x, slows down, okay? In other words, the second derivative is negative, okay? This is an increasing function, but its rate of increase is itself decreasing. This is the usual case for a concave function.

But it doesn't have to be concave at the beginning. Eventually it will bend over, but before that it can actually go convex and then concave, with an inflection point, like what we observed in some of the curves of lectures seven and eight on diffusion models, okay, and some of the other influence models in crowds. So you can also have a sigmoidal utility function with both a convex and a concave part. In general, if it is smooth, increasing, and concave, then it's a much nicer function to deal with mathematically. Where do these utility functions come from? Usually there are three ways. One is that you can run an experiment with real human beings in a focus group study. Put them on a sofa, give them pizza, and then say, well, please watch these videos or listen to these voice calls. And then they rate them, one to five stars, as you vary the different kinds of voice calls and media you present to them. This often leads to what's called the MOS, the Mean Opinion Score, okay? You run a psychological experiment. The second reason why certain utility functions are picked is because of what they represent as demand functions.

Â 8:30

Okay? Utility function and demand function are, in some sense, two sides of the same coin, because we're going to model individual decision making as net utility maximization, very similar to what we saw in lecture two with the auction model as a game. Each individual here says: I will get this much utility, ui of xi, as user i. Good, but this is not a free lunch; there is no free lunch. I have to pay something, for example pi per unit of whatever I'm getting, see? Bandwidth, or rate, is in bits per second, so for each bit per second I pay a unit price pi, and I'm getting this much, xi, in volume, so the total cost to me is pi times xi. This is how much I'm getting, this is how much I'm paying, and the difference is the net utility. And that is what I would like to maximize over my individual choice, xi. Okay, so in the market, you give me the price; I react to the price by picking xi.

Well, if this u is a nice function, smooth, increasing, and concave, then this is a nice concave maximization over xi: a concave part minus a linear part. So we can just take the derivative, u prime of x (suppressing the subscript i for simplicity of notation) minus p, and let it be zero. That means u prime of x = p. If a particular x satisfies this equation, call that x star. Then that is the induced demand, induced by this price p given to me. And it turns out that the utility's first derivative is an invertible and decreasing function, because of our model of the utility function. So x star is u prime inverse of p, a decreasing function of the price p. We call this u prime inverse the demand function d. So d is u prime inverse.

Now, give me a utility function; take the derivative and invert it, and you've got the demand function. Give me a demand function, and I can also reverse engineer the utility function. Going this way, from utility to demand function, you can do a simple exercise with the alpha-fair utility functions coming up. A particular special case is the utility being log of x, okay? For that reason, the demand function is just one over p, okay? You can easily convince yourself of that. And then, in lecture fourteen, we will talk about the other direction: give me a demand function, and I'll try to reverse engineer the underlying utility function. And the key concept in modelling demand is the so-called demand elasticity. Okay.
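A small sketch of my own (the price and the search grid are arbitrary choices) of the log-utility special case: the maximizer of the net utility log(x) - p*x, found numerically, should match the closed-form demand d(p) = 1/p.

```python
import math

def log_demand(p):
    """Demand induced by u(x) = log(x): u'(x) = 1/x, so d(p) = u'^(-1)(p) = 1/p."""
    return 1.0 / p

def argmax_net_utility(p, grid_size=5000):
    """Brute-force maximizer of the net utility log(x) - p*x over a grid."""
    grid = (k / 1000.0 for k in range(1, grid_size))
    return max(grid, key=lambda x: math.log(x) - p * x)

p = 2.0
x_star = argmax_net_utility(p)  # matches log_demand(2.0) = 0.5
```

Setting the derivative 1/x - p to zero gives x = 1/p directly, which is what the search recovers.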

Â 13:51

Well, suppose you tell me there is a vector, let's just say x, okay? The vector allocates some good or resource, x1, x2, up to xn, among the n users competing against each other. And I say that this particular vector is an alpha-fair allocation vector if, and here's the definition, okay: for any other feasible vector y, I look at the difference, for each user i, between getting xi from the x allocation versus yi from the y allocation, and then I normalize it by xi to the power alpha. That's where the alpha comes in: some kind of alpha-parameterized normalization of the difference in drifting away from the x allocation to the y allocation. And then I look at the joint impact, the collective impact on all the users in the system, by summing over i. If this summation, which is a single scalar, is non-positive, okay? Less than or equal to zero, then I say this vector x is an alpha-fair vector.
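In symbols: x is alpha-fair if, for every feasible y, the sum over i of (yi - xi)/xi^alpha is at most zero. This can be checked mechanically; below is a sketch of my own, where the sampled two-user feasible set and the tolerance are arbitrary choices.

```python
def is_alpha_fair(x, feasible, alpha, tol=1e-9):
    """x is alpha-fair if, for every feasible y,
    sum_i (y_i - x_i) / x_i**alpha <= 0."""
    return all(
        sum((yi - xi) / xi ** alpha for xi, yi in zip(x, y)) <= tol
        for y in feasible
    )

# Two users sharing one unit of capacity; sample the feasible boundary.
feasible = [(k / 100.0, 1.0 - k / 100.0) for k in range(1, 100)]
# The equal split passes the alpha = 1 test; a skewed split does not.
```

For the equal split with alpha = 1, each term (yi - 0.5)/0.5 cancels against the other user's term, so the sum is exactly zero for every y on the boundary.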

Now, that's a mouthful of a definition; let's process it again. We say a vector x is alpha-fair if, for any other feasible allocation y (you can't give me an infeasible allocation and ask me to compare, okay), the collective, alpha-normalized drift from x to such a vector y is always a bad thing: okay, the sum is non-positive. Then I say this vector indeed is an alpha-fair allocation. All right, now you can disagree with the definition, but it turns out to be a useful definition, and therefore a good definition. So, what about our utility function?

This is a vector, not a function. Well, it turns out, and without proving this we'll just state the result: maximize the following parameterized utility function, parameterized by alpha, so I'm writing alpha in the subscript for this utility function as a function of x. What does it look like? By its branching definition, it is simply x to the power one minus alpha, over one minus alpha, if alpha is not one, okay. If alpha is one, this denominator is not defined, but you can use L'Hopital's rule and see that when alpha is one, this actually becomes log of x, okay? Now you see the familiar and important special case of log utility. So, if this is the definition of your utility function, then maximizing such a utility function leads to an optimizer x star that satisfies the definition of an alpha-fair allocation. That's why we call this the alpha-fair utility function.
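A direct transcription of this branching definition (my own sketch; the sample points are arbitrary):

```python
import math

def alpha_fair_utility(x, alpha):
    """U_alpha(x) = x^(1-alpha) / (1-alpha) for alpha != 1, and log(x) at alpha = 1.
    Shifting by a constant does not change the maximizer, and the shifted form
    (x^(1-alpha) - 1) / (1-alpha) tends to log(x) as alpha -> 1 (L'Hopital)."""
    if alpha == 1:
        return math.log(x)
    return x ** (1 - alpha) / (1 - alpha)

u0 = alpha_fair_utility(2.0, 0)     # alpha = 0: just x itself, 2.0
u1 = alpha_fair_utility(math.e, 1)  # alpha = 1: log utility, 1.0
u2 = alpha_fair_utility(4.0, 2)     # alpha = 2: -1/x, -0.25
```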

Another name for it is the isoelastic utility function. Okay, why isoelastic? It gets back to our notion from the last slide, okay? In a homework problem, I think you can easily verify that if you take this definition of u of x, you will see the resulting demand function, okay? And that implies the elasticity: it is actually one over alpha. In particular, when alpha is one, for log utility, the corresponding demand function is just one over p, and the corresponding normalized demand elasticity, one over alpha, is just one, since in that case alpha is one. But in general, for other, non-one alphas, the elasticity is one over alpha. That means the elasticity of demand is independent of everything else, okay? Except the shape parameter alpha. That's why it's called isoelastic.
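A numeric check of this claim, my own sketch with arbitrary test prices: for the demand d(p) = p^(-1/alpha) induced by the alpha-fair utility (from u'(x) = x^(-alpha) = p), the elasticity -(dD/dp)(p/D) equals 1/alpha at every price.

```python
def elasticity(demand, p, h=1e-6):
    """Numerical demand elasticity: -(dD/dp) * p / D(p)."""
    dD = (demand(p + h) - demand(p - h)) / (2 * h)
    return -dD * p / demand(p)

alpha = 2.0

def demand(p):
    """Demand induced by the alpha-fair utility: u'(x) = x**(-alpha) = p."""
    return p ** (-1.0 / alpha)

e_low = elasticity(demand, 0.7)   # about 1/alpha = 0.5
e_high = elasticity(demand, 3.0)  # also about 0.5: iso-elastic
```

The elasticity comes out the same at both prices, which is exactly the "iso" in isoelastic.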

Later, we'll see many good uses of the isoelastic, or alpha-fair, utility function, including the following special cases. When alpha is zero, what do we have? That's simple: we're just looking at the allocation itself. It's just the sum of the allocation, for example sum-rate maximization. When alpha is one, that's the log utility function, and it turns out that it leads to what's called proportional fairness. If you look at this definition here, when alpha is one, your normalization is simply xi itself; you're not trying to skew the shape by any means. And whether you like the definition or not, you can see why this is called proportional fair, because the drift is normalized by, in proportion to, x. So, when alpha is one, you have this special case called proportional fairness.

Now, when alpha is really big, for example as alpha approaches infinity, it turns out that it approaches what's called max-min fairness. I will talk more about fairness later, so let's leave it just like that for the moment. And we conclude this part of the video module with a simple application of this utility model, okay? Whether it comes from a focus group study or fairness or demand elasticity, we apply this simple utility model to talk about a very important phenomenon called the Tragedy of the Commons.

Now, this was first formally mentioned by Lloyd a long while back, in 1833, and then codified and popularized by Hardin's article in 1968. Now, this article had other kinds of provocative and somewhat controversial arguments; the Tragedy of the Commons is only one small part of it, but we have seen liberal use of this term, Tragedy of the Commons, quite a few times. In lecture one, when we talked about power control, interference: that's the tragedy of the commons in the air. In lecture two, okay, the tragedy of the commons in terms of the second-price auction correctly internalizing the externality imposed on the other auction competitors. In lecture, I think, six, okay, voting theory: the degree of externality imposed on other voters and the need for something like vote counting. And later, in lecture, I think, fourteen for TCP, fifteen for P2P, and eighteen for WiFi. Okay?

In all of these cases, in our networking discussions, we will see a different manifestation of the Tragedy of the Commons. But the reason it was called the Tragedy of the Commons was that it was a thought experiment, okay, just like the binary-number experiment in lecture seven. It says: suppose you've got a piece of common grassland as the commons, okay, and there are different farmers, and each farmer says, okay, should I get another cow or not? Okay, here's a cow. Well, it looks like a mouse, but let's say it's a cow, right. So, should I get a cow now? Well, if I get a cow, the cow is going to need to eat grass, okay. But, you know, the cost of eating the grass, if there are n farmers, is like one over n, shared proportionally. But the benefit to me, of the milk and maybe selling the cow for its meat later down the road, is one, one unit, okay.

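The arithmetic of the farmer's reasoning can be sketched as follows; this is my own illustration, using the lecture's stylized unit benefit and shared cost.

```python
def private_net_benefit(n):
    """One farmer's view of adding a cow: a benefit of 1 unit (milk, meat)
    minus a grazing cost shared among n farmers, i.e. 1/n each."""
    return 1.0 - 1.0 / n

def social_cost_per_cow(n):
    """The community as a whole still bears the full grazing cost of 1."""
    return n * (1.0 / n)

n = 10
gain = private_net_benefit(n)   # 0.9 > 0: every farmer wants another cow
cost = social_cost_per_cow(n)   # 1.0: each cow costs the commons a full unit
```

Because each farmer's private gain stays positive for any n > 1, everyone keeps adding cows even though every cow imposes its full cost on the commons; that is the tragedy.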