0:00

So, discrete optimization, again, linear programming part two.

Okay, so this is getting really exciting. What we saw last time was the geometric view of linear programming, and what we are going to do now is make the connection between that geometric view and the algebraic view. We are going to link the two, and this is where it gets really nice.

Remember what we saw last time: we can solve this problem from a geometry standpoint by simply looking at all the vertices and taking the one with the best value for the objective function. Of course, you are wondering: how do I get these vertices? Writing an algorithm to find these vertices doesn't seem so simple. We can do it by hand, but how do we do it on a computer? And that's the key point here.

So the simplex algorithm is essentially an intelligent way of exploring these vertices. And as I told you, this is where the connection between the geometric view and the algebraic view is really interesting.

Once again, this algorithm was invented by George Dantzig. And one of the things you have to understand is that, after 60, 65 years, this algorithm is still a complete enigma. It works incredibly well in practice, solving these problems really fast in general. But it has exponential worst-case behavior, so in the worst case it is terrible. Yet that worst case almost never seems to happen: you can construct examples where it does, but they don't seem to arise in practice. From a theoretical standpoint, nobody really understands why this is the case. And there are some very nice theoretical questions, very simply stated, that nobody knows the answer to. So it is a very interesting algorithm.

And so what I'm going to do today is present, in an intuitive way, what the simplex algorithm does. This is the way I would characterize this algorithm. It's very simple, right? What you want to do is to be on top of the world. That's the goal. Now, I'm going to give you five facts that will allow you to get to the top of the world.

The first fact is that the top of the world is the top of a mountain. Basic fact, right? Now, the second fact is the most interesting one: the top of the mountain is a beautiful, fantastic spot. We are going to be talking about beautiful, fantastic spots all the time, and I call them BFSs: beautiful, fantastic spots. So you have to know that the top of the world is at the top of a mountain, and that the top of a mountain is a beautiful, fantastic spot.

Now, the third fact is that you can move from a beautiful, fantastic spot to a neighboring beautiful, fantastic spot. That's easy to do: you are at a beautiful, fantastic spot, you see another one, you can move there. The fourth fact, and this is important, is that when you are at the top of the world, you know you are at the top of the world. And the last fact is that if you are at a beautiful, fantastic spot that is not the top of the world, there will always be an opportunity to move to a higher beautiful, fantastic spot.

So if you understand these five facts, you understand linear programming. It is as simple as that.

You know that the top of the world is the top of a mountain. You know that the top of a mountain is a beautiful, fantastic spot. You know that you can move from a beautiful, fantastic spot to another, neighboring one. When you are on top of the world, you know it. And every time you are at a beautiful, fantastic spot that is not the top, there is a way for you to get to a higher beautiful, fantastic spot.

Now, this is too simple, right? We have to make it complicated, and that's what I'm going to do on the next slide. I'm going to take these facts and make them a little bit more complicated, because otherwise people will say that what you do in science is too easy.

I mean, we're not the only ones to do that, right? A doctor would talk about the distal part of your fibula. What is that, right? Couldn't they just say, this is the part of your fibula next to your ankle? No. They would say, beautifully, that the distal part of your fibula is the problem. Lawyers sound the same, right? When you buy a house in the US, they write things like "expected performance." And you say, wow, what is this expected performance? That just means you have to buy the house, okay? So we are going to do the same: we are going to take this very simple algorithm and make it look complicated, such that people believe we are really clever.

So this is what we're going to do. Instead of saying we want to be on top of the world, we say we want to solve a linear program. Fact number one, that the top of the world is at the top of a mountain, is basically telling you that the optimal solution is located at a vertex. So when you think of a vertex, think of the top of a mountain. Now, a vertex is actually a beautiful, fantastic spot, but we can't talk about beautiful, fantastic spots, so we are going to say it is a basic feasible solution. But you can still say BFS, and think basic feasible solution, or beautiful fantastic spot.

And then the three other facts are going to be essentially the same, because we are talking about BFSs, right? We can move from a BFS to another one: beautiful fantastic spot to beautiful fantastic spot, basic feasible solution to basic feasible solution. You can detect if a beautiful fantastic spot is at the top of the world; that is, you can detect whether a basic feasible solution is optimal. And from any BFS that is not optimal, you can move to a BFS which has a better cost: a smaller cost if you minimize, a higher cost if you maximize.

So this is, once again, a little bit more complicated. We are talking about these basic feasible solutions, and we don't really know yet what they are. But this is the same thing as what I've shown you on the previous slide, and this is the simplex algorithm. So what I'm going to do in this lecture is go through these various facts and explain how they work.

So, we know what a linear program is, right? We minimize a linear function subject to these constraints, and all the variables have to be greater than or equal to 0. We'll come back to that; it is an important part.

So let's back up a little bit. What we are looking at here, first, is a system of linear equations, and we want to solve it. How do we do that? Go back to high school. Essentially, what you do is perform Gaussian elimination to solve a system of linear equations like that, right? You basically express some of the variables in terms of the other ones.

So you isolate a bunch of variables, x1 to xm, and you express them in terms of the other variables. And this is basically what we call a basic solution. The variables on the left side are going to be called the basic variables; we sometimes say they are in the basis. And we are going to love these guys; you'll see why. The other variables I call the non-basic variables; they are the guys we don't really care about. And then, of course, you have the b's, which are the values that we are going to give to the basic variables.

So, this is a basic solution. Why? Because essentially we are going to take the basic variables that you see there and give them, as values, the b's. And then we are going to take the non-basic variables and give them all the value zero. And if we do that, these equations are obviously satisfied, right? I'm basically saying x1 is equal to b1, and so on, xm is equal to bm, and all the other guys are zeros, so these equations are obviously satisfied.

So this is a basic solution: I'm taking all the xi's and giving them the bi's, and I'm taking all the non-basic variables and assigning them zero. And think about this: in this system of equations, there is a bunch of variables that are all zeros, and that happens all the time in linear programming. You will typically have a massive number of variables assigned zero, and then these beautiful basic variables are going to be assigned the bi's that you see there. So that's a basic solution.

Now, remember "beautiful fantastic spot," and the F in there. That F is for feasible, or fantastic. You are going to be fantastic, or feasible, when you satisfy these constraints: all the variables have to be greater than or equal to 0. And how do you check that? Well, for the non-basic variables it's easy, right? All these guys are already zero. But you have to make sure that the basic variables are assigned non-negative values. And that means testing, when you actually isolate x1 to xm, that these b's are actually non-negative. And if they are non-negative, what you have is a beautiful fantastic spot: you have a basic feasible solution. So that's what you have here.

So a basic feasible solution is a solution where you isolate some of the variables, x1 to xm. It can be any m of the variables, right? They are called the basic variables, and they are assigned these b's that you have, while all the non-basic variables are assigned 0. Basic feasible solution, okay. You're going to say, why is this guy so obsessed with this? You'll see why in a moment.

So, how do we find a beautiful, fantastic spot? We select m variables; these will be the basic variables. We re-express them in terms of the other ones. Not in terms of each other, right, only in terms of the non-basic variables. We can do that easily by performing Gaussian elimination. And if all the b's are non-negative, we know that we have a basic feasible solution.
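The recipe just described can be sketched in a few lines of Python. This is an illustrative sketch, not code from the lecture: the function names and the small system below are my own, and NumPy's linear solver stands in for the Gaussian elimination step.

```python
# Illustrative sketch (not from the lecture): build a basic solution of
# A x = b by picking m basic variables, solving for them with Gaussian
# elimination (NumPy's solver here), and setting the non-basic ones to 0.
import numpy as np

def basic_solution(A, b, basis):
    """A is m x n, b has length m, basis lists the m basic column indices."""
    m, n = A.shape
    B = A[:, list(basis)]                   # columns of the basic variables
    if abs(np.linalg.det(B)) < 1e-12:       # the columns must be independent
        return None
    x = np.zeros(n)                         # non-basic variables stay at 0
    x[list(basis)] = np.linalg.solve(B, b)  # basic variables get the b's
    return x

def is_feasible(x, tol=1e-9):
    """Feasible ('fantastic') means every variable is non-negative."""
    return x is not None and bool(np.all(x >= -tol))

# Two equations, three variables; make x1 and x2 basic, x3 non-basic:
A = np.array([[1.0, 1.0, 1.0],
              [0.0, 1.0, 2.0]])
b = np.array([4.0, 2.0])
x = basic_solution(A, b, [0, 1])            # -> [2.0, 2.0, 0.0]
```

With the basis {x2, x3} instead, the solve gives x3 = -2: that basic solution exists, but it is not feasible.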

Okay. So you're going to tell me: yeah, yeah, but you are dealing with equations now, and in my simplex algorithm I have inequalities, right? So what are we going to do? Well, there is a very simple thing that we can do. If you have a set of inequalities like this, you can add some new variables, s1 to sm, and transform these inequalities into equations. You add one of them to the left-hand side of every one of the constraints, and each one is forced to be greater than or equal to zero. We call these guys the slack variables. And the intuition here is very simple:

Look at the expression you have there. It will take some value once you assign the variables, and that value has to be smaller than or equal to the corresponding b. So either the two are equal, in which case the slack variable is going to be zero, or else the slack variable is going to be the difference between the b and the left-hand side. So essentially the slack variables transform these inequalities into equations by picking up the slack, by having a variable capture that slack. Very simple, right? So if you have a linear program with inequalities, you can transform those inequalities into equations, and that's what we want to do when we are actually implementing this method on a computer.
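As a small sketch of this transformation (illustrative only; the matrix and names below are my own, not the lecturer's), appending one slack column per inequality turns A x <= b into the equation system [A | I] applied to (x, s):

```python
# Sketch: rewrite the inequalities A x <= b as equations A x + s = b
# by appending one slack variable per constraint, i.e. the matrix [A | I].
import numpy as np

def add_slacks(A):
    m = A.shape[0]
    return np.hstack([A, np.eye(m)])   # one identity column per inequality

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
A_eq = add_slacks(A)                   # shape (2, 4): [A | I]

# For b = (8, 9) and the point x = (1, 2), the first constraint holds
# strictly: 2*1 + 2 = 4 <= 8, so the slack picks up s1 = 8 - 4 = 4.
```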

So, once again, to summarize: you start with your linear program, you take all the inequalities and add slack variables so that you have equations, and now you select m variables; these are going to be the basic variables. You re-express them in terms of the non-basic variables by performing Gaussian elimination, and then, if all the b's are non-negative, you have a basic feasible solution, a beautiful fantastic spot.

So now, this is the key point. This is the most important fact that you need to know: a vertex, in the geometric sense that I've shown you in the last lecture, is actually a beautiful, fantastic spot. It's a basic feasible solution. So in a sense, we now have an algorithm, one that you can actually code on a computer.

The naive algorithm that we have now is that we just want to look at all the vertices. But the vertices are basic feasible solutions. So you can just enumerate all these basic feasible solutions and take the one which is best. How do you generate all the basic feasible solutions? Well, that's just what I explained to you, right? You isolate m variables, you express them in terms of the non-basic variables by performing Gaussian elimination, and then you test whether the result is feasible. If it is feasible, you have a basic feasible solution. So we have a naive algorithm which you can implement on a computer, which will generate all these basic feasible solutions, and for every one of them you can compute the value of the objective function: you essentially plug in the values of that solution. And then you select the one which has the best cost.

Now, how many of these are there? Well, that's the issue, right? There will be a very large number of them: it's like picking m variables out of n, and you want to explore all these combinations. So there are many of them. And so, in practice, the naive algorithm that I'm showing you here is not going to work nicely from a computational standpoint, from a performance standpoint. But we'll see how to improve it in the next lecture.
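The naive enumeration can be sketched as follows. This is illustrative code under my own naming, not the lecture's: it tries every choice of m basic columns out of n, keeps the feasible basic solutions, and returns the one with the smallest objective value c·x.

```python
# Naive enumeration sketch: try all C(n, m) bases, keep the basic
# *feasible* solutions, and return the one minimizing c . x.
from itertools import combinations
import numpy as np

def naive_lp(c, A, b):
    m, n = A.shape
    best_x, best_val = None, np.inf
    for basis in combinations(range(n), m):
        B = A[:, list(basis)]
        if abs(np.linalg.det(B)) < 1e-12:
            continue                          # columns not independent
        x = np.zeros(n)
        x[list(basis)] = np.linalg.solve(B, b)
        if np.all(x >= -1e-9):                # basic feasible solution
            val = float(c @ x)
            if val < best_val:
                best_x, best_val = x, val
    return best_x, best_val

# Minimize -x1 - x2 subject to x1 + 2 x2 <= 4 and 3 x1 + x2 <= 6,
# already rewritten with slack columns s1 and s2:
c = np.array([-1.0, -1.0, 0.0, 0.0])
A = np.array([[1.0, 2.0, 1.0, 0.0],
              [3.0, 1.0, 0.0, 1.0]])
b = np.array([4.0, 6.0])
x, val = naive_lp(c, A, b)   # optimum at x1 = 1.6, x2 = 1.2, value -2.8
```

The C(n, m) loop is exactly the performance problem mentioned above: already for n = 50 variables and m = 25 constraints there are over 10^14 candidate bases.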

So, basically, summarizing what we have seen: what you want to do is to be on top of the world, and the top of the world is basically a vertex. A vertex is a basic feasible solution. And getting a basic feasible solution is actually pretty easy: it basically consists in isolating m variables, expressing them in terms of the other ones, and testing that the values you assign to them are non-negative. We'll see a more clever way of exploring all these vertices, these basic feasible solutions, next time. Thank you very much.
