0:00

25 slides. In 25 slides, you will have seen everything about the Simplex algorithm: enough to actually code it and to know exactly how it works. Okay? So this is what we're going to do today: we're going to talk about the last steps of the Simplex algorithm, and some of the most complicated stuff. Okay? So what do we know so far?

So we know that the optimal solution is located on a vertex, and a vertex is a basic feasible solution. What is a basic feasible solution? That's a solution in which you isolate as many variables as there are constraints. These are the basic variables, and you express them in terms of the non-basic variables. And if the b's, the constants that you get on the right-hand side, are all nonnegative, you have a basic feasible solution.
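That test on the right-hand sides is tiny when written as code. A hedged sketch, assuming the dictionary representation just described (the sample numbers are illustrative, not taken from the slides):

```python
# In dictionary form, each basic variable is written as
#   x_basic[i] = b[i] + (terms in the non-basic variables),
# so setting every non-basic variable to 0 gives x_basic[i] = b[i].
# The basic solution is feasible exactly when every b[i] is nonnegative.

def is_basic_feasible(b):
    """True iff all right-hand-side constants are nonnegative."""
    return all(bi >= 0 for bi in b)

print(is_basic_feasible([1, 2, 3]))   # True: a basic feasible solution
print(is_basic_feasible([1, -8, 3]))  # False: a negative right-hand side
```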

Okay? So we know that a very, very naive algorithm would basically enumerate every basic feasible solution, look at the one which has the best cost, and that would be your solution. The issue with this is that it will take a long, long, long time, because there are many combinations of basic variables that you can select.

Okay, so the simplex algorithm is a better way of actually doing this. And so the key point, the third thing, is basically telling you that instead of enumerating all these basic feasible solutions, we are going to move from basic feasible solution to basic feasible solution, trying to improve the value of the objective function all the time. So, in a sense, the simplex algorithm is a local search algorithm, and the basic local move that is applied is moving from one beautiful fantastic spot to another one: moving from a basic feasible solution to an adjacent basic feasible solution. And I will tell you exactly how we move from one to another; that's the key point of this lecture. Okay.

And so, the thing which is really nice about the simplex algorithm is that, because of convexity, it's guaranteed to find the global optimum. So it's a local search algorithm, but it's guaranteed to find the global optimum. Okay, so the key idea is basically going to be telling you how we can move from BFS to BFS.

Okay, so this is essentially what we're going to be doing. You have this system of equations that you see at the top, right? And then we find a basic feasible solution, and that's what you see here: we isolated x3, x4 and x5, and we re-expressed them in terms of the non-basic variables, in this particular case x1 and x2. And now we have a basic feasible solution: we give x3, x4 and x5 the values 1 through 3, and that's the basic feasible solution. All the non-basic variables are assigned to 0.

Okay? So, now, basically the local move that the simplex algorithm is going to do is taking one of the non-basic variables and moving it inside the basis, making it basic. And of course, you have to kick one out, okay? So you're going to take one of the basic variables and kick it out, okay? So this is the basic idea; it's a very simple local move: you swap two variables, one which is basic, one which is non-basic, and the status of these two variables changes: the basic becomes non-basic, the non-basic becomes basic, okay? So this is the local move, right? You select a non-basic variable, and that non-basic variable has to have a negative coefficient. We want to take it from the right-hand side and move it to the left-hand side, okay? And to do that, it has to have a negative sign. So you take the variable that appears with a negative sign and move it to the other side, and of course the other variable is going to move to the non-basic side and also change sign. Okay? So, in a sense, we find the entering variable: that's the variable which is non-basic and has a negative coefficient. Okay?

And then we're going to find one constraint, and in it one basic variable that we're going to kick out of the basis, make non-basic. And then what we're going to do is re-express all the equations in terms of the new basic variable: we're going to basically eliminate the new basic variable from the other equations. And so that's essentially the local move that we are doing: the local move here is swapping a basic variable and a non-basic variable. And the way you do it: you choose an entering variable, you choose a leaving variable, and then you do Gaussian elimination to eliminate the new basic variable that you selected.
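The three steps (entering variable, leaving variable, Gaussian elimination) all hinge on the elimination itself. Here is a hedged sketch of one elimination step on a dense tableau, where rows are constraints and the last column is the right-hand side; the layout and numbers are illustrative, not the slides':

```python
def pivot(tableau, row, col):
    """Make column `col` a unit column with a 1 in `row`:
    divide the pivot row by the pivot element, then subtract
    multiples of it from every other row (Gaussian elimination)."""
    p = tableau[row][col]
    tableau[row] = [v / p for v in tableau[row]]
    for r in range(len(tableau)):
        if r != row:
            f = tableau[r][col]
            tableau[r] = [a - f * b for a, b in zip(tableau[r], tableau[row])]
    return tableau

T = [[2.0, 1.0, 4.0],
     [1.0, 1.0, 3.0]]
print(pivot(T, 0, 0))  # [[1.0, 0.5, 2.0], [0.0, 0.5, 1.0]]
```

After the call, column `col` is a unit column, which is exactly what makes the chosen variable basic in that row.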

Okay, so I'm going to show you an example. So once again, this is a system of equations, okay. You see the beautiful basic variables there, and the non-basic variables over there, okay. Now, I'm looking at a non-basic variable, in this particular case x2, okay. It has a negative coefficient, minus 2, and therefore I decide that I want this variable to go into the basis. Okay. I select the first constraint here; that's the only one it appears in, so it's easy. Okay. And essentially what I'm going to do is swap x2 with the variable which is in the basis inside the first constraint, and that's x3.

Okay, so I'm basically swapping these two guys: now x2 is going to move to that side, and x3 is going to move to the other side. Now, the coefficient is minus two, so I will have to divide the equation by two such that the coefficient of x2 is going to be one over there, and so I get this constraint, no? Where you see that this is one half, and then the coefficient of x1 is minus three halves and the coefficient of x3 is minus one half. Okay, and that's the constraint that you see there. Now, in this particular case it was very convenient, because x2 didn't appear in any of the other constraints, and therefore I don't even have to eliminate it from the other constraints. In general, you would have to actually take the right-hand side of this equation and substitute it into the other equations. Okay? But this is basically telling you what we do. Okay?

So we select the variable that we want to enter the basis, x2 in this particular case. We select a constraint, and by selecting the constraint, you select the variable that you want to kick out of the basis, x3 in this particular case, and you swap these two variables. You make sure that you get a basic feasible solution by making sure the coefficient of x2 is 1, and then by removing x2, in this particular case, from all the other equations. Okay? So that's the basic move that the simplex algorithm does. Okay?

Now, I selected x2 to enter the basis, but I could have selected x1, right? Because x1 also appears with a negative coefficient there, and therefore I could have selected x1 as the entering variable for the local move. Now, if I select x1 here, I have basically three negative coefficients, and therefore I can choose from which constraint I want to take the existing basic variable that I kick out of the basis, that I make non-basic. Right? So can I take any of these basic variables and make them non-basic? Well, look at this, okay?

If I take the third one, okay? So I'm basically saying: I want to kick the basic variable of the third constraint out of the basis. Okay, so what is going to happen? Well, I perform Gaussian elimination for all three constraints, and then what do I get? I get that x3 is now assigned the value minus 8 and x4 is assigned the value minus 4, and that's terrible, right? Because this is not a basic feasible solution: I told you that a basic feasible solution assigns only nonnegative values to the variables, because all the variables have to take a nonnegative value. Okay, so what happened?

So what happened is that I selected the entering variable; that's all fine, I can do that. But then I selected one particular constraint, and one particular basic variable to remove from the basis, to make non-basic. And when doing that, I performed Gaussian elimination, and then suddenly the values of the existing basic variables became negative. And that's terrible, okay. So what does that mean? Does that mean that at that point I'm stuck on this beautiful, fantastic spot and I can't move? No, it just means that I have to move a little bit more carefully, okay? And so the key point is that you have to choose, essentially, the leaving variable using the formula that I'm showing you here.

And I'm going to give you an intuition here. So, when you look at this column, you essentially see various coefficients for this non-basic variable. This coefficient is large; this coefficient here is much smaller. Now, I also look at the b values there. And so, if I take the first constraint, and I make x1 enter the basis using that particular constraint, therefore removing x3, the value of x1 is going to be what? It's going to be basically the b divided by this coefficient three over there. So it's going to be one third.

Okay? And that's a small number compared to what would happen if I made x1 enter the basis using the third constraint, in which case x1 is going to take the value 3. Now, why is this important? Because if x1 takes the value 3 here, it's going to be multiplied by this 3 there, and you're going to get a minus 9 there. And therefore 1 minus 9 is going to be the minus 8 that you've seen before. So if I take a very large value here and I substitute it in the other constraints, you're going to get these negative values. So what I need to do is to find the smallest ratio between the b's, the b's that you see there, and the coefficients of the variable that has to enter the basis.

Okay, and so these are the ratios that I have shown you: this is basically bi divided by minus the coefficient of the entering variable. And you see that the smallest one is at the top, and if you choose the first constraint as the constraint from which you select the basic variable that you will remove from the basis, then you are guaranteed, in a sense, to move to a basic feasible solution, because you are making sure that the other constraints, when you do the Gaussian elimination, are going to stay nonnegative. So essentially, what we do is use this ratio to select the leaving variable, using this formula here, and we are guaranteed to maintain feasibility. So, in a sense, for the local move you have to be a little bit careful: you can take any kind of entering variable, but then you use this rule to find the variable that you have to kick out of the basis. You compute this ratio; it's very easy to do, but you have to be careful to do it.
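The minimum-ratio rule can be sketched as follows. This assumes the lecture's dictionary form, where each row reads x_basic[i] = b[i] + a[i] * x_entering + ..., and only rows with a negative a[i] qualify; the numbers mimic the 1/3-versus-3 situation but are illustrative:

```python
def leaving_row(b, a):
    """Among rows where the entering variable's coefficient a[i] is
    negative, return the index minimizing b[i] / -a[i]; None means no
    row qualifies (the unbounded case discussed later)."""
    candidates = [(bi / -ai, i) for i, (bi, ai) in enumerate(zip(b, a)) if ai < 0]
    return min(candidates)[1] if candidates else None

print(leaving_row([1.0, 4.0, 3.0], [-3.0, 2.0, -1.0]))  # 0 (ratio 1/3 wins)
```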

Okay? So, in this particular case, you get this beautiful basis where, as I alluded to, x1 is going to be one third, and then you see that x4 and x5 also get fractional values, four thirds and eight thirds. But this is a basic feasible solution: these guys now are nonnegative, okay? And so, essentially, when you move from basis to basis, you have to make sure that you get a basic feasible solution there, and so you have to make sure that, when you perform the Gaussian elimination, the right values are going to emerge. And that's what you do by selecting the leaving variable using the rule that I've shown you. Okay?

So, to summarize, we move from basic feasible solution to basic feasible solution by selecting an entering variable: that has to be a non-basic variable which has a negative coefficient in some of these constraints. Then you choose a leaving variable, given the formula that I've shown you: you compute these ratios, you select the constraint which has the smallest ratio, and that gives you the basic variable that will leave the basis; and then you perform Gaussian elimination. Now, this entire set of operations is called pivoting. Pivoting is basically the name of the local move in linear programming: it is choosing this entering variable, choosing this leaving variable, and performing Gaussian elimination. So the local move is called pivoting, and it basically swaps one basic variable and one non-basic variable, okay? So at this point, we have seen a lot of things already: if you want to solve this linear program, we know that the solution is at a vertex, we know that a vertex is a basic feasible solution, and now we know how to move from a basic feasible solution to another basic feasible solution. So we can look at the fourth point: when do we need to stop? And we need to stop when the basic feasible solution is optimal. And I'm going to give you a very, very simple criterion to actually detect that.

And this is the criterion, right? So we have this basic feasible solution, and we have these equations, which express the basic variables in terms of the non-basic variables. Okay, take the objective, remove all the basic variables, which basically means replacing them with the non-basic variables, and you get an expression of the form c0, which is going to be a constant, plus c1 x1, where c1 is the coefficient of the variable x1, plus c2 x2, and so on, where I have basically replaced all the basic variables with their right-hand sides, the non-basic variables. The expression is now only in terms of the non-basic variables, okay.

Now, if I have an expression like this, and all the ci's are greater than or equal to 0, then I know that I'm at the optimum. So I have a very, very, very simple way of actually detecting that I'm at the optimum: I rewrite the objective function, remove all the basic variables, and look at the expression, which is only in terms of the non-basic variables; if all the coefficients there are greater than or equal to zero, I'm done. Okay? I'm at the top of the world, okay?
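The optimality test itself is essentially one line of code. A hedged sketch for a minimization problem; the coefficient values below are illustrative:

```python
# Once the objective is rewritten purely in terms of the non-basic
# variables, z = c0 + c1*x1 + c2*x2 + ..., the current BFS is optimal
# (for minimization) when every ci is nonnegative.

def is_optimal(c):
    """c: coefficients of the non-basic variables in the rewritten objective."""
    return all(ci >= 0 for ci in c)

print(is_optimal([-3, -3]))    # False: a non-basic variable can still lower z
print(is_optimal([1.5, 0.5]))  # True: no improving direction remains
```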

Now, I'm going to give you a little intuition for why this is true, or actually for the converse. We're going to go through some examples so that I can convey that intuition, right? So we had this beautiful problem that we were trying to solve for a long time: we have variables x1 to x5, we have these equalities here, and so the first thing to do is to find a basic feasible solution. That's what we find here. Okay.

Now look: when I take the basic variables here, x3 to x5, and remove them from the objective function there, this is the objective function that I get. Okay? As you see, the constant is six. Why? Because x3, x4 and x5 are assigned one through three, and they have coefficients of one at the top. So I know that I will have the value six. But then I get a minus 3x1 and then a minus 3x2, okay? Now, this is not an optimal solution, right? Because these guys are negative; the coefficients of x1 and x2 are not nonnegative. So what is the intuition here?

Well, you know that these guys, x1 and x2, have to take nonnegative values, okay? Currently they are not in the basis, right? So their value is what? Zero, right. Okay, and since they are zero, you may wonder: if I put them in the basis, they are going to get a positive value, and therefore I can drive this objective function down, because a negative coefficient times a positive value is going to decrease this objective function. And that's the intuition here. If the coefficients were positive, and the variables had to take a positive value, this objective function would go up. Okay, so essentially the coefficients are negative; that means that I can drive this objective function down, okay? So this is what you have there.

We have this objective function, and we select x2, which has a negative coefficient, and we want this x2 to enter the basis. So how do we do that? We have to select a variable to leave the basis; once again, we look at the ratios: 1 divided by 2, and then 3 divided by 3, so that's one. So we're going to basically use the first constraint to make x2 enter the basis, removing x3 from the basis, and this is what you obtain. Okay?

At this point, this is the new basic feasible solution. And this basic feasible solution has the value nine halves, which is four point five, so it has gone down from six: a decreased value of the objective function. But now the two coefficients that you see are nonnegative, which basically means that at this point this is the optimal solution. There is no way I can drive the objective function down; I have an optimal solution to my problem. Okay, and I can detect that: I just look at this line, and all the coefficients are greater than or equal to zero. Great, I'm at the optimum, okay? So this is basically the idea: you move from basis to basis, select an entering variable, select a leaving variable, perform the pivoting operation, and look at your objective function. Are all its coefficients nonnegative? If they are, you're done, okay.

So that's the basic idea. Okay, so in a sense, what we have seen at this point are basically the first four points: we have seen the fact that an optimal solution is located at a vertex; a vertex is a basic feasible solution; I can move from BFS to BFS; and I can detect that a BFS is optimal. I haven't shown you the proof yet; I will show you in the next lecture, because it's much easier to actually prove this thing if I can use matrix notation. It's a beautiful, simple proof, like five lines, if I can use matrix notation, which I'm not using here because the presentation is simpler without it. Okay?

So now, the only other thing that I have to tell you is that, from any kind of BFS, I can move to another BFS which has a better cost, okay? And that's the case, provided that I put some conditions on it, okay? Once again, I won't prove this here; there is a simple algebraic proof, but the basic idea is the following. Assuming that all the bi's are strictly greater than 0, if I can find an entering variable, that is, there is an entering variable somewhere in the constraints with a negative coefficient, then I can find a leaving variable, and the pivoting operation is going to give me a new BFS which is improving.

Great, I am improving the solution. And so once I have that, I'm ready, in a sense, to tell you what the simplex algorithm is, and it's basically these four lines that you see there, okay. So what you do is: as long as there is a coefficient in the objective function which is negative, I need to perform a pivot operation. So I select an entering variable; that's a variable which has to have a negative coefficient in the objective function, because that's how I can drive this objective function down. Then I select a leaving variable such that I preserve feasibility, so I have to use the rule that I've shown you before, and then I basically pivot. Okay?

So once again, in choosing the entering variable you have to be a little bit careful, right? I want to take one which has a negative coefficient in the objective function. Why? Because I want this objective function to end up with only nonnegative coefficients. Right? I select the entering variable, I select the leaving variable, I pivot, and I keep doing that until the objective function has all nonnegative coefficients. Okay? So that's the Simplex Algorithm. And if I can guarantee that, during the execution, these bi's stay strictly positive, okay? I don't want them to fall to zero, and we'll discuss that in a moment. And if the objective function is bounded from below, so you cannot drive it down infinitely low, then you are guaranteed that the simplex algorithm is going to terminate, and it will give you an optimal solution: it's going to converge to a BFS where all these ci's are going to be greater than or equal to zero. Okay?

So that's the simplex algorithm. Okay?
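Those few lines can be turned into a compact program. This is a hedged sketch on the standard tableau (min c·x subject to Ax = b, x >= 0, starting from a feasible basis with b >= 0) rather than the slides' dictionary layout, so the sign convention flips: here the pivot column entries must be positive. The entering rule is the restrictive "first negative coefficient" choice, and the example problem at the bottom is made up for illustration:

```python
def simplex(c, A, b, basis):
    m, n = len(A), len(c)
    T = [row[:] + [b[i]] for i, row in enumerate(A)]   # [A | b]
    z = c[:] + [0.0]                                   # reduced-cost row
    for i, bi in enumerate(basis):                     # eliminate basic columns
        f = z[bi]
        z = [zj - f * tij for zj, tij in zip(z, T[i])]
    while True:
        e = next((j for j in range(n) if z[j] < -1e-9), None)
        if e is None:
            break                                      # all ci >= 0: optimal
        rows = [(T[i][n] / T[i][e], i) for i in range(m) if T[i][e] > 1e-9]
        if not rows:
            raise ValueError("unbounded below")        # the nasty case below
        _, r = min(rows)                               # minimum-ratio test
        p = T[r][e]                                    # pivot (Gaussian elim.)
        T[r] = [v / p for v in T[r]]
        for i in range(m):
            if i != r:
                f = T[i][e]
                T[i] = [a - f * pv for a, pv in zip(T[i], T[r])]
        f = z[e]
        z = [a - f * pv for a, pv in zip(z, T[r])]
        basis[r] = e
    x = [0.0] * n
    for i, bi in enumerate(basis):
        x[bi] = T[i][n]
    return x, -z[n]

# min -x1 - x2  s.t.  x1 + x2 + s1 = 4,  x1 + s2 = 2  (s1, s2 slack variables)
x, obj = simplex([-1.0, -1.0, 0.0, 0.0],
                 [[1.0, 1.0, 1.0, 0.0],
                  [1.0, 0.0, 0.0, 1.0]],
                 [4.0, 2.0], [2, 3])
print(x, obj)  # [2.0, 2.0, 0.0, 0.0] -4.0
```

Ties in the ratio test are broken here by the smallest row index; a production implementation would add a genuine anti-cycling rule for degenerate pivots, which the lecture turns to shortly.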

Now, there are a couple of nasty cases that I need to discuss with you, and that's what I'm going to do in the next couple of slides. Okay? Now, the first one is this one, right? So I told you that we are choosing the entering variable using a very interesting criterion: the fact that the objective function has a coefficient which is negative for that variable. Okay, so that's what I told you; we want to do that to converge to an optimal solution. But at the beginning of the lecture, what I was telling you is that we select the entering variable because it has a negative coefficient inside some of these constraints. So, it may happen that the variable that I'm choosing there has a beautiful column where essentially all the coefficients are greater than or equal to 0. What does that mean? None of them is negative, so really the two things that I've told you in this lecture are contradictory: I want to introduce the variable into the basis to drive the objective function down, but at the same time I also need a negative coefficient in the constraints, and I don't have one.

What is happening? What is happening? Okay? So think about it.

So what is the basic feasible solution here? It takes the basic variables and assigns them values, in this particular case one, two, three, okay? This is the objective function that I have. What is the value of x1 in the basic feasible solution? It's zero. Alright? But you see these coefficients, okay? They are all greater than zero. What does that mean? Can you think about what that means? So, in the basic feasible solution, x1 is basically assigned to zero. What if I assign it 10,000? What am I going to get? Well, this equation is going to be fine, right? So x2 is equal to 1; if I give 10,000 to this guy, multiplying by 3, that's going to be 30,000, and I'm going to get a value for x3 which is going to be 30,000 and change, okay? That's fine, that's still feasible.

What about the second one? Same thing, right? This coefficient is positive; I'm going to add a positive number to x4, and that's fine. And same thing for x5. So if I take 10,000, I have another solution, okay. It's not a basic feasible solution, but it's a solution, right? What if I give a million to x1? That's fine as well. If I give 10 million, it's going to be fine. What do these values do to the objective function? They are driving this objective function down.

Why? Because this guy has a negative coefficient. So what this really means is that I can take x1 and give it arbitrarily large positive values, and that's going to make the objective function arbitrarily low. Which basically means that my problem here is not bounded from below: I can drive the objective function as low as I want, okay? And that violates one of the hypotheses that I told you I needed for actually terminating. So in a sense, what I'm telling you is that the problem is not bounded from below. You will come to a situation where you see this variable that you want to put inside the basis, but you cannot. And so that's the case where you can detect that your problem is actually unbounded, and you can stop executing at this point: you can report to the user that you can drive this objective function arbitrarily low. And typically, that means that there is a mistake in the modeling. Right?

So, if you think of a factory which is trying to decrease its costs, and the solution of this simplex algorithm is basically telling the manager that the costs can be driven arbitrarily low, something must be fundamentally wrong. Now, if you maximize, you can drive the objective arbitrarily high; it's like telling the CEO of Apple that the profit can be made arbitrarily high. That's probably not right, okay? So essentially, you can detect that the problem is unbounded and make sure that the user actually knows it, so they can look at their model and try to do better.
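The unboundedness test is the mirror image of the ratio rule: in the dictionary form, the entering variable needs at least one negative constraint coefficient to be swapped in, and if it has none, no leaving variable exists and the objective can be pushed down forever. A hedged sketch with illustrative numbers:

```python
def unbounded_direction(a):
    """a: the entering variable's coefficients in the constraint rows
    (dictionary form). If none is negative, the variable can grow without
    ever driving a basic variable negative: the problem is unbounded."""
    return all(ai >= 0 for ai in a)

print(unbounded_direction([3.0, 1.0, 1.0]))    # True: report "unbounded", stop
print(unbounded_direction([-3.0, 2.0, -1.0]))  # False: a normal pivot exists
```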

Now, the other thing that I mentioned is that one of the hypotheses I told you about is that I don't want the bi's to become zero. What happens if a bi happens to be zero, right? So look at these constraints here: I'm basically letting x3 be equal to 0. That's fine; this is still a basic feasible solution. It's just that there is a basic variable that now has the value zero, the same value, zero, that the non-basic variables have.

But now, what happens if I select a variable to enter the basis? Remember, the pivoting rule computes a ratio, and a ratio of zero divided by something is always going to be zero. That value is always going to be the smallest one, so you always select that constraint for the pivot, which basically means that when you do the pivot, you're going to stay at the same value of the objective function: x3 is going to be kicked out of the basis, so nothing happens there, and x2 is going to enter the basis but get the value zero, obviously. And therefore the objective function is basically not changing; the objective value is not changing, okay?

So what does that mean? That means that the hypothesis that I told you before is not valid: it's not satisfied, and I have to find essentially another way to guarantee termination. So one of the five facts that I told you about is wrong: it's not always possible to move from a basic feasible solution to another basic feasible solution with a better cost. I can actually move to another top of the mountain which is exactly the same height.

OK, so what do we do? We have to be a little bit careful in how we implement the algorithm, and there are very useful ways to do that. I will mention two of them in a little bit of detail. First, you can select the variable that you want to enter the basis in a very specific way, and that will guarantee termination. The key point is that you look at the objective function, and if you always select the first variable which has a negative coefficient, then you are guaranteed to terminate. But sometimes you really don't want to do that. For instance, a lot of simplex implementations will take a variable whose coefficient is highly negative, because if you give a positive value to that particular variable, it's going to drive the objective down very quickly. So this first rule is somewhat restrictive; it's putting some limitations on what you can do. The other thing that you can do is use a pivoting rule which is a little bit more complex: instead of choosing just the ratio that I've shown you, you can generalize it to a lexicographic order. If there are ties on this ratio, you move to the next column, compare the coefficients there, and so on and so forth. If you select the leaving variable in that particular way, you will also guarantee termination. So this is interesting, right? There is somewhat of a duality here, and it's interesting to talk about duality for linear programming: you can put a rule which selects the entering variable, or a rule which selects the leaving variable, and both of them will guarantee termination. And then typical implementations also use methods for perturbing the basis: you change the problem a little bit, and then you go back to the original problem later on. But you basically make sure that you don't stay in the same place when you are doing these pivoting operations. Okay.

So I won't go into the details, but there are ways to actually address this, and actual implementations will combine these with better pivoting rules to actually drive the search quickly. But we can make this algorithm terminate even if the bi's are actually getting to zero.
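The first of the two anti-cycling rules above (always take the first variable with a negative objective coefficient, a choice often credited to Bland) is easy to sketch; note that Bland's full rule also breaks ties in the leaving variable by index, which this fragment does not show:

```python
def first_negative_entering(c):
    """c: objective coefficients of the non-basic variables.
    Return the smallest index j with c[j] < 0, or None if optimal.
    Restricting the choice this way is what rules out cycling."""
    return next((j for j in range(len(c)) if c[j] < 0), None)

print(first_negative_entering([2.0, -1.0, -5.0]))  # 1, though -5 looks better
print(first_negative_entering([2.0, 0.0, 1.0]))    # None: the BFS is optimal
```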

Okay, so the two things that I've shown you are that we can detect that the problem is unbounded, which was one of the hypotheses that I made, and that we can also ensure termination if the bi's are actually getting to zero, by using these more clever pivoting selections.

So there is one more thing I need to do before we know exactly how to implement the algorithm, and this is actually really beautiful; this is really clever. So, essentially, for the entire lecture I've always assumed that we had a basic feasible solution to start with, right? But in practice, you won't necessarily have that. The user is basically giving you this set of equations and this objective function, and you actually have no clue whether these equations are feasible, whether there is a feasible solution to these equations. So the problem is: how do I find my first BFS? I may not have one, and it may not be obvious how to actually get one, okay?

So what I'm going to do is a very simple trick, right? I'm going to add a new set of variables; I've spread them out here so that you can see them, okay? I'm adding what we call artificial variables: y1, ..., yi, ..., ym are artificial variables. They are new variables that are just created out of nowhere, and I put them into the equations of the simplex algorithm in a very simple way: y1 for the first constraint, y2 for the second one, yi for constraint i, and ym for the last one. So now, essentially, when you see this thing, you are smiling, right? Because now I have an easy BFS: I can put these variables inside a basis, and I have a BFS, okay?

But obviously it's a BFS for another problem, a problem where the y's are variables too. So now I have an obvious solution, which assigns each y to the corresponding right-hand side, and which basically means that all of the other variables are zero. But that's not a feasible solution to my initial problem, right? Because I wanted the original constraints to be equal to the b_i's. So what am I going to do? And this is where the beauty is, right? The beauty is that we are going to use the simplex algorithm itself to find the real BFS from this fake BFS. Okay?
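The construction of this "fake" BFS can be sketched in a few lines; this is my own illustration, not code from the course:

```python
def add_artificials(A, b):
    """Given A x = b with every b_i >= 0, append one artificial variable
    per constraint.  The system becomes [A | I] (x, y) = b, so setting
    y_i = b_i and x = 0 is an obvious basic feasible solution."""
    m, n = len(A), len(A[0])
    rows = [A[i] + [1 if k == i else 0 for k in range(m)] for i in range(m)]
    basis = list(range(n, n + m))  # the artificial columns form the basis
    return rows, basis

rows, basis = add_artificials([[1, 1], [2, 1]], [3, 4])
# rows == [[1, 1, 1, 0], [2, 1, 0, 1]], basis == [2, 3]
```

The identity columns of the artificial variables are exactly what a basis looks like in tableau form, which is why the starting point comes for free.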

So this is what we are going to do; it is called the two-phase method. Okay. The first phase: find a BFS, if there is one. The second phase: once I have such a BFS, I'm great, right? I can apply the simplex algorithm for optimizing. But in the first phase, I will essentially use the simplex algorithm itself to find a basic feasible solution. How do I do that?

Well, it's very simple, right? If I have a feasible solution to this set of equations, that means the y variables should all be assigned 0, okay? So what I'm going to do is minimize the sum of these y's. And remember, these y's are greater than or equal to 0, so if this objective function goes to 0, it means that every one of them is equal to 0, and therefore I have a feasible solution to this particular set of constraints.
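Concretely, the phase-1 objective row has to be expressed in terms of the non-basic variables, since the y's start out basic; a sketch of that bookkeeping, in my own notation:

```python
from fractions import Fraction as F

def phase1_cost_row(A, b):
    """Build the phase-1 objective row for minimizing y_1 + ... + y_m.
    The raw row is [0 ... 0, 1 ... 1 | 0], but each y_i is basic, so its
    unit coefficient is eliminated by subtracting constraint row i.
    The last entry ends up as -(b_1 + ... + b_m): minus the objective
    value of the starting BFS y = b."""
    m, n = len(A), len(A[0])
    cost = [F(0)] * n + [F(1)] * m + [F(0)]
    for i in range(m):
        row = ([F(v) for v in A[i]]
               + [F(1 if k == i else 0) for k in range(m)]
               + [F(b[i])])
        cost = [a - r for a, r in zip(cost, row)]
    return cost

# For A = [[1,1],[2,1]], b = [3,4] this gives [-3, -2, 0, 0, -7]
```

With this row in place, phase 1 is just an ordinary simplex run on the augmented problem.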

Okay, so what do I do? This is beautiful, right? Why is it beautiful? Because I have this basis here; I love this basis, right? I have all these artificial variables that I can put inside the basis as basic variables, and then I have an obvious BFS. Then I can optimize this thing, and then I look at the objective function: if it's greater than zero, I know that I don't have a feasible solution to these constraints. So the initial set of constraints is wrong; it's basically infeasible. You go back to the user and say, you know, please give me a problem whose constraints can be satisfied, because otherwise there is little purpose in optimizing, okay? But if the objective function is 0, that means this set of constraints is feasible, and now I'm ready to do the optimization.

Right? Because at this point I have a basic feasible solution, okay? And this basic feasible solution is expressed in terms of the other variables, right? So, in a sense, you can take that basic feasible solution, bring back the original objective function, eliminate the basic variables from that objective function, and start the simplex algorithm again using this new objective function, which is the one you wanted to optimize in the first place.

So, all the things that I told you are almost right, okay? I told you that when we complete this phase-1 optimization, we have a basic feasible solution in which the basic variables are all original variables of the problem, not artificial ones. That's not entirely true: it can be the case that one of these artificial variables is still in the basis. But you just need to think about it: if one of these variables is inside the basis, what does it mean? What is the value of that artificial variable? Well, we know that the objective function is zero, so that variable has to be zero, okay? So at that point, what you can do is remove it from the basis and introduce another variable into the basis instead. And now you really have a basic feasible solution expressed only in terms of the original variables, and then you can run the optimization phase, okay?

So this is it: you have now seen the entire simplex algorithm, okay? You have a two-phase method: first find a BFS, and then optimize. How do you optimize? Basically, you move from basic feasible solution to basic feasible solution, okay? You can detect that you are at the optimum when all the coefficients of the variables in the objective function are greater than or equal to zero, and as long as the b_i's are strictly positive, every pivot decreases the value of the objective function. If you use one of the pivoting rules that I told you about, you are guaranteed to terminate. Beautiful algorithm: local search, guaranteed to find the optimal solution. Thank you.
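Putting the whole lecture together, here is a compact two-phase simplex with Bland's rule. This is a sketch of my own under the lecture's assumptions (equality form A x = b with b >= 0, minimization), not the course's reference code; exact rational arithmetic is used to sidestep floating-point tolerances:

```python
from fractions import Fraction

def pivot(T, basis, r, c):
    """Gauss-Jordan step: make column c a unit column with a 1 in row r."""
    p = T[r][c]
    T[r] = [v / p for v in T[r]]
    for i in range(len(T)):
        if i != r and T[i][c]:
            f = T[i][c]
            T[i] = [a - f * v for a, v in zip(T[i], T[r])]
    basis[r] = c

def optimize(T, basis, ncols):
    """Simplex iterations with Bland's anti-cycling rule over the first
    `ncols` columns; the last tableau row holds the reduced costs."""
    m = len(T) - 1
    while True:
        col = next((j for j in range(ncols) if T[m][j] < 0), None)
        if col is None:
            return                        # all reduced costs >= 0: optimal
        rows = [i for i in range(m) if T[i][col] > 0]
        if not rows:
            raise ValueError("unbounded problem")
        # ratio test, ties broken by smallest basic index (Bland)
        row = min(rows, key=lambda i: (T[i][-1] / T[i][col], basis[i]))
        pivot(T, basis, row, col)

def two_phase_simplex(A, b, c):
    """minimize c.x  subject to  A x = b, x >= 0  (every b_i >= 0).
    Returns (x, value), or None if the constraints are infeasible."""
    m, n = len(A), len(A[0])
    F = Fraction
    # Phase 1: tableau [A | I | b] with the artificials y as the basis.
    T = [[F(v) for v in A[i]]
         + [F(1 if k == i else 0) for k in range(m)]
         + [F(b[i])] for i in range(m)]
    basis = list(range(n, n + m))
    # Cost row for min sum(y): eliminate the basic artificial columns.
    cost = [F(0)] * n + [F(1)] * m + [F(0)]
    for row in T:
        cost = [a - v for a, v in zip(cost, row)]
    T.append(cost)
    optimize(T, basis, n + m)
    if T[m][-1] != 0:                     # phase-1 optimum is positive
        return None                       # original constraints infeasible
    # Drive any leftover artificial (necessarily at value 0) out of the basis.
    for i in range(m):
        if basis[i] >= n:
            col = next((j for j in range(n) if T[i][j] != 0), None)
            if col is not None:
                pivot(T, basis, i, col)
    # Phase 2: install the original objective, reduced over the basis.
    cost = [F(v) for v in c] + [F(0)] * m + [F(0)]
    for i in range(m):
        f = cost[basis[i]]
        if f:
            cost = [a - f * v for a, v in zip(cost, T[i])]
    T[m] = cost
    optimize(T, basis, n)                 # artificials may not re-enter
    x = [F(0)] * n
    for i in range(m):
        if basis[i] < n:
            x[basis[i]] = T[i][-1]
    return x, -T[m][-1]

# max x1 + 2*x2  s.t.  x1 + x2 <= 4, x2 <= 2  (slacks x3, x4 added by hand)
A = [[1, 1, 1, 0], [0, 1, 0, 1]]
b = [4, 2]
c = [-1, -2, 0, 0]                        # minimize the negated objective
x, value = two_phase_simplex(A, b, c)
# x == [2, 2, 0, 0], value == -6 (i.e., the maximum is 6)
```

The usage example at the bottom converts the inequalities to equalities with hand-added slack variables, exactly the standard form the lecture works in; an infeasible system (say, x1 + x2 = 2 together with x1 + x2 = 3) makes phase 1 finish with a positive objective, so the function returns None.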