0:01

A general theme that will recur in almost every single lecture of this course is that we will see individual behaviors, often driven by self-interest, aggregate into a state across all the users: a global configuration in a social, economic, or technological network. And sometimes it can aggregate into a fair and efficient configuration; we will try to quantify efficiency and fairness in the course. Such a phenomenon, of local behavior aggregating into a desirable global configuration, is going to be helped by some kind of feedback signal, which could be, say, a pricing signal, a congestion signal, or a collision signal. Some of these feedback signals are explicit: some network element tells you a specific action that you need to take. Sometimes the feedback is implicit: you measure something, for example the current SIR at time t, and then use that implicit feedback to adjust your own individual behavior in the next iteration.
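As a concrete sketch of such an implicit-feedback loop, here is the DPC-style update in which each transmitter scales its power by the ratio of its target SIR to the SIR it just measured. All the numbers below (channel gains, noise, targets) are made-up illustrative values, not from the lecture:

```python
# Each user i measures its own SIR at time t and updates
#   p_i(t+1) = (gamma_i / SIR_i(t)) * p_i(t)
# -- no explicit instruction from the network, only implicit feedback.

G = [[1.0, 0.1],
     [0.2, 1.0]]        # G[i][j]: channel gain from transmitter j to receiver i
noise = [0.1, 0.1]      # receiver noise n_i
gamma = [2.0, 2.0]      # target SIRs gamma_i
p = [1.0, 1.0]          # initial transmit powers

def sir(i, p):
    interference = sum(G[i][j] * p[j] for j in range(len(p)) if j != i)
    return G[i][i] * p[i] / (interference + noise[i])

for _ in range(50):
    # every user adjusts simultaneously, using only its own measurement
    p = [gamma[i] / sir(i, p) * p[i] for i in range(len(p))]

print([round(sir(i, p), 3) for i in range(len(p))])  # both SIRs settle at their targets
```

With these numbers the targets happen to be feasible, so the measured SIRs settle at the targets; with overly aggressive targets the powers would instead grow without bound.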

Â 1:22

Now we're going to understand this specific behavior, distributed power control, from two angles. In that sense, we're using DPC as a way to introduce the terminology and the basic notions of two powerful mathematical modeling languages. One is the language for constrained decision making, called optimization theory. The other is the language for strategic thinking by intelligent agents, called game theory. Now let's start with optimization theory. This is a word that we use in our daily language. All right, we try to optimize something: our time, our holiday schedule, our work schedule.

And if you think about it, in that optimization you have some degrees of freedom, which we will mathematically call optimization variables. You have some objective you want to maximize or minimize: for example, maximize your employability, or your happiness, or minimize the cost it takes to finish some task. And then you have some constraints. Without constraints, the problem would be too good to be true. These could be constraints on the time you have, on the money you have, on the energy you have. For example, when you look at your schedule this weekend, you may say, well, I can pick my variable to be: spend the time taking a course, like this one, or spend the time watching a movie. An objective function may be: make me as happy as possible. And the constraint, say, is that you only have 24 hours in each day. Now we will see a mathematically precise, unambiguous language called optimization theory.

In each optimization problem, there are four main data fields. One is, of course, the objective. In the case of transmit power control in cellular networks, the objective is to minimize the sum of the individual transmit powers. So we will write: minimize the sum of the p_i over i. This is a power-minimal configuration, subject to the constraints that you have to achieve the target SIRs for all the users: the SIR for each user, indexed by i, must be no smaller than the target gamma_i, and you need this to hold not just for a single i but for all i. And then you vary the degrees of freedom, which in this case are obviously the set of transmit powers. Everything else is a constant, including the channel conditions G_ij, the noise n_i at each receiver, and the target SIRs gamma_i. Toward the end of this lecture we will also see what happens if these target SIR values become variables as well, but for now they are held constant.
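Collecting the four data fields just listed (objective, constraints, variables, constants), the problem can be written compactly, using the standard received-signal-over-interference-plus-noise form of the SIR from earlier in the lecture:

```latex
\begin{aligned}
\text{minimize}   \quad & \sum_i p_i \\
\text{subject to} \quad & \mathrm{SIR}_i(\mathbf{p}) \;=\; \frac{G_{ii}\,p_i}{\sum_{j \neq i} G_{ij}\,p_j + n_i} \;\geq\; \gamma_i \quad \text{for all } i \\
\text{variables}  \quad & p_i \geq 0 \\
\text{constants}  \quad & G_{ij},\; n_i,\; \gamma_i
\end{aligned}
```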

Now once you have an objective function and a set of constraints, and you know what is a variable and what is a constant, then you have an optimization problem. Pictorially, what we are looking at in this case of transmit power control can be visualized in the following cartoon.

Suppose we have just two users, so we can draw it in the two-dimensional plane, on a piece of paper or a slide, with the first user's SIR on the x-axis and the second's on the y-axis. And we have a region, okay? The shaded region here denotes the set of feasible SIR values. Now, ideally you want the SIR for both users to be high, but as you know, because of interference, that is not possible. So let's just look at those (SIR1, SIR2) values that are possible; we call these the feasible SIR values. Any point inside this region is a feasible point. But some of these are also inferior points, because I could find a way to increase the SIR for both users: any point up and to the right of such a point, while still in the region, is strictly better. At the same time, every point outside the boundary is infeasible. What about the points exactly on the boundary? We call those points Pareto optimal.

Now, clearly there are an infinite number of these points on the boundary, and they are all Pareto optimal. In some sense, they are not directly comparable. Points inside are feasible but inferior. Points outside are infeasible, so don't even worry about those. Our job is to find a point that is at least on the Pareto optimal boundary. That means you cannot increase one dimension without hurting the other dimension. Those points form the boundary of this region. Now, which point to pick, then? That would depend on the objective function: you have to impose some objective function in order to pick exactly the point you like most. Later we will see this theme of tradeoff again, in this case a tradeoff between the two users' SIRs, and in general a tradeoff between any competing users in a social, economic, or technological network. And we will always want to operate on the Pareto optimal boundary, or at least pick a point that is Pareto optimal: you cannot make one user happier without making another user less happy.
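To make the dominance idea concrete, here is a tiny sketch that filters a made-up list of feasible (SIR1, SIR2) points down to the ones no other point beats in both coordinates:

```python
# A feasible point is Pareto optimal if no other feasible point is at least
# as good in both coordinates and strictly better in at least one.
# The candidate points are made-up illustrative values.

feasible = [(1.0, 1.0), (2.0, 1.5), (1.5, 2.0), (2.5, 0.5), (0.5, 2.5)]

def dominates(b, a):
    """b dominates a: b is no worse in both coordinates and not identical."""
    return b[0] >= a[0] and b[1] >= a[1] and b != a

pareto = [a for a in feasible if not any(dominates(b, a) for b in feasible)]
print(sorted(pareto))  # (1.0, 1.0) drops out: (2.0, 1.5) beats it in both coordinates
```

The four surviving points are mutually incomparable, matching the remark that Pareto optimal points cannot be directly ranked against each other.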

Â 9:44

p_i G_ii - gamma_i (sum over j ≠ i of p_j G_ij + n_i) >= 0, for all i.

What kind of function is this in our variables? Remember, G, n, and gamma are constants, and the variables are the p's. Well, this is just some number multiplying p_i, this is just some number multiplying p_j, and this last term doesn't even depend on the p's. So this is actually an affine function, or, as we loosely say, a linear function. So we are talking about minimizing a linear function, p_1 + p_2 + ... + p_n, subject to constraints that are also linear functions in our variable p. When we minimize a linear function subject to linear constraints, we call that linear programming. Now, this "programming" has nothing to do with computer programming code; it refers to a mathematical program. So linear programming, a terminology that dates back to the 1940s, refers to minimizing a linear function subject to linear constraints in your variables. Of course, if gamma is also a variable, then this is not a linear function in both gamma and p. But that's not our worry today: our problem today has fixed gammas, and we only vary the p's. And this is an LP.

Now, LPs are easy to solve, in theory and in practice. But as we will see in lecture four, when we talk about the Netflix Prize and recommender systems, the watershed between easy and hard optimization problems is actually not linearity but so-called convexity.

We will come to that in a later lecture. All right, so now we have defined what the optimization problem is, and we have recognized that it is a linear programming problem. How can we solve it? In fact, if you only need to solve it on a centralized computer, then there are many ways to solve it in a computationally efficient way. But if you want to solve it in the context of a cellular network, with distributed actions and no explicit message passing between the base stations and the mobile stations, then it's not easy. In the advanced material part of the lecture, or in Section 1.4, the advanced material section of Chapter 1 in the textbook, we will see the proof of convergence of the DPC algorithm to an optimizer of this problem. In other words, the DPC algorithm is actually a distributed solution method for this special linear programming problem, in the physical context of a wireless cellular network.
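A quick numerical sanity check of that claim, with made-up two-user parameters: run the DPC iteration from two very different starting points and compare against the power-minimal point, obtained centrally by solving the two SIR constraints as equalities (they are tight at the optimum, since any slack would let a power be lowered further):

```python
# DPC as a distributed solver for the power-minimization LP (2-user sketch,
# made-up parameters). The centralized answer solves the tight constraints
#   G_11 p_1 = gamma_1 (G_12 p_2 + n_1)
#   G_22 p_2 = gamma_2 (G_21 p_1 + n_2)
# as a 2x2 linear system via Cramer's rule.

G = [[1.0, 0.1], [0.2, 1.0]]
noise = [0.1, 0.1]
gamma = [2.0, 2.0]

def sir(i, p):
    interference = sum(G[i][j] * p[j] for j in range(2) if j != i)
    return G[i][i] * p[i] / (interference + noise[i])

def dpc(p, iters=100):
    for _ in range(iters):
        p = [gamma[i] / sir(i, p) * p[i] for i in range(2)]
    return p

# centralized solution of the tight constraints (Cramer's rule)
det = G[0][0] * G[1][1] - gamma[0] * G[0][1] * gamma[1] * G[1][0]
p_star = [(gamma[0] * noise[0] * G[1][1] + gamma[0] * G[0][1] * gamma[1] * noise[1]) / det,
          (G[0][0] * gamma[1] * noise[1] + gamma[1] * G[1][0] * gamma[0] * noise[0]) / det]

low, high = dpc([0.01, 0.01]), dpc([50.0, 50.0])
print(all(abs(x - y) < 1e-6 for x, y in zip(low, p_star)),
      all(abs(x - y) < 1e-6 for x, y in zip(high, p_star)))
```

The sketch assumes a feasible target pair; if the targets were too aggressive, the tight system would have no nonnegative solution and the DPC powers would diverge instead of converging.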

Â 14:47

feasible solutions. Okay. Now, what I mean by "no worse than" depends on whether you're talking about minimization or maximization. For a minimization problem, it means that it is at least as small as what any other feasible solution would give you in the objective function. So if you pick any other feasible x and stick it into the objective function, which is denoted as f in general, then f evaluated at the x* vector will be less than or equal to f evaluated at that x, if you're minimizing the objective function. Of course, if you're trying to maximize a function, where the bigger the better, then we say f evaluated at x* gives you a value that is at least as big as at any feasible x vector. This, then, we call an optimal solution.
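Written out in symbols, with X denoting the feasible set and f the objective function, the two cases just described are:

```latex
\begin{aligned}
\text{(minimization)} \quad & x^\star \in X \text{ is optimal if } f(x^\star) \le f(x) \quad \forall x \in X, \\
\text{(maximization)} \quad & x^\star \in X \text{ is optimal if } f(x^\star) \ge f(x) \quad \forall x \in X.
\end{aligned}
```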

Okay. As you can see, sometimes there is no optimal solution, sometimes there is one, and sometimes there are many, in fact an infinite number of optimal solutions. Later we will also make a distinction between globally optimal versus locally optimal. The definition we just talked about, a comparison with any other feasible solution, is global optimality. If you say that this point is no worse than any other feasible solution in a small neighborhood around it, then we call that a locally optimal solution. So these are the terminologies, feasible solution, globally optimal solution, and locally optimal solution, that we will be using again and again in future lectures.
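As a one-variable illustration of that distinction (the function below is made up for the picture, not from the lecture): f(x) = x^4 - 3x^2 + x has two valleys, so a grid search finds two local minima, and only the deeper one is globally optimal.

```python
# Global vs. local optimality on f(x) = x^4 - 3x^2 + x over [-3, 3].
# An interior grid point lower than both of its neighbors marks a local minimum.

def f(x):
    return x**4 - 3 * x**2 + x

xs = [i / 1000 for i in range(-3000, 3001)]   # grid with step 0.001
ys = [f(x) for x in xs]

local_minima = [xs[i] for i in range(1, len(xs) - 1)
                if ys[i] < ys[i - 1] and ys[i] < ys[i + 1]]

global_minimum = min(local_minima, key=f)
print(local_minima, global_minimum)  # two valleys; the negative one is global
```

The point near x = 1.13 is locally optimal (no nearby feasible point does better) but not globally optimal, since the valley near x = -1.30 is deeper.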

Â