So, discrete optimization, again, linear programming part two.

Okay, so this is getting really exciting. Okay, so what we saw last time was the geometric view of linear programming, and what we're going to do now is make the connection between this geometric view and the algebraic view.

Okay, so we're going to link the two, and this is where this thing gets really nice. Okay, so remember, what we saw last time is that we could solve this problem from a geometric standpoint by simply looking at all the vertices and taking the one with the best value for the objective function, okay? Of course, you know, it's like: oh, how do I get these vertices? Writing an algorithm to get these vertices doesn't seem so simple. Okay.

We can do that by hand. But how do we do that, you know, on a computer? And that's the key point here. Okay? So the simplex algorithm is essentially an intelligent way of actually exploring these vertices. And as I told you, this is where the connection between the geometric and the algebraic view is really interesting.

Okay. So once again, I told you that, you know, this algorithm was invented by George Dantzig. And one of the things you have to understand is that this algorithm is actually still, after, you know, 60, 65 years, wow, this is 65 years, after 65 years it's still a complete enigma, right? It works incredibly well in practice, solving these problems really fast in general. But it has an exponential worst case, so in the worst case it's terrible, right? But this worst case almost never seems to happen, okay? You can construct examples where it happens, but it doesn't seem to happen in practice. So from a theoretical standpoint, nobody really understands why this is the case. And there are really some very nice theoretical questions, very simply stated, that nobody knows the answer to. Okay? So, very interesting algorithm.

And so what I'm going to do today is actually try to present, in an intuitive way, what the simplex algorithm does. And so this is the way I would characterize this algorithm. Okay.

It's very simple, right? So what you want to do is to be on top of the world. Okay. So that's the goal, okay? The goal is to be on top of the world. Now, I'm going to give you five facts, okay, that will allow you to get to the top of the world, okay? The first fact that I'm going to give you is that the top of the world is the top of a mountain, okay? Basic fact, right?

Now, the second fact is the most interesting one, okay? The top of the mountain is a beautiful, fantastic spot, okay? We are going to be talking about beautiful, fantastic spots all the time. And I call this guy BFS, okay? So, beautiful, fantastic spot, okay? So you have to know, okay, that the top of the world is at the top of a mountain, and the top of a mountain is a beautiful fantastic spot.

Okay? Now, the third thing that I'm going to tell you, okay, the third fact, is that you can move from a beautiful fantastic spot to a neighboring beautiful fantastic spot. That's easy to do. Okay, you are at a beautiful fantastic spot. You see another one. You can move there. Okay.

So, and then the fourth fact is that, and this is important, when you are at the top of the world, you know you are at the top of the world. Okay, fact four, okay. And then the last fact is that if you are at a Beautiful Fantastic Spot that is not the top of the world, there will always be an opportunity to move to a higher Beautiful Fantastic Spot. Okay?

So if you understand these five points, okay, you understand linear programming. Okay? It is as simple as that. Okay? You know that the top of the world is the top of a mountain. You know that the top of a mountain is a Beautiful Fantastic Spot. You know that you can move from a Beautiful Fantastic Spot to another, neighboring Beautiful Fantastic Spot. When you are on top of the world, you know it, okay. And then, every time you are at a beautiful, fantastic spot, there is a way for you to go to a higher beautiful, fantastic spot.

Now, this is too simple, right, and we have to make it complicated. And that's what I'm going to do, okay, on the next slide, okay.

I'm going to take these facts and make them a little bit more complicated, okay.

Because otherwise, people will say, what you do in science is too easy.

I mean, we're not the only ones to do that, right? So you know, a doctor would talk about the distal part of your fibula. What is that, right? Couldn't they say, this is the part of your fibula next to your ankle? No. They would say, beautifully, you know, the distal part of your fibula is the problem. Right? Anyway. So, lawyers sound the same, right? So when you buy a house in the US, they would write things like 'expected performance.' And you say, wow, what is this expected performance? That just means you have to buy the house,

okay? But, you know, we're going to do the same. We're going to take this very simple algorithm and make it look complicated, such that people believe we are really clever. Okay? So this is what we're going to do, okay? So, instead of saying we want to be on top of the world, we say we want to solve a linear program.

Okay? Fact number one, you know, what I told you is that the top of the world is at the top of the mountain. That's basically telling you that the optimal solution is located at a vertex. So when you think of a vertex, think of the top of a mountain. Right? Now, a vertex, okay, is actually a beautiful, fantastic spot, but we can't talk about beautiful, fantastic spots, so we're going to say it's a basic feasible solution. Okay, but let's say BFS, you know; you can think BFS: basic feasible solution, or beautiful fantastic spot.

Okay, and then the three other facts are going to be essentially the same, because we are talking about BFSs, right? So we can move from a BFS to another one: beautiful fantastic spot to beautiful fantastic spot, basic feasible solution to basic feasible solution. You can detect if a beautiful fantastic spot is at the top of the world, okay; you can detect that a basic feasible solution is optimal. And then you can move from any BFS to another BFS which has a better cost. Okay? Well, smaller cost if you minimize, you know, higher cost if you maximize. Okay.

So this is, once again, a little bit more complicated now. We are talking about these basic feasible solutions, which we don't really know what they are yet. But this is the same thing as what I've shown you on the previous slide, okay? And this is the simplex algorithm. So, what I'm going to do in this lecture is go through these various facts and explain how they work.

Okay? So, we know what a linear program is, right? So, minimizing this linear function subject to these constraints. Okay? And all the variables have to be greater than or equal to 0. We'll come back to that; this is an important part, okay?

So, so, let's back up a little bit. Okay, I'm backing up, okay, right? And so, what we are looking at here, first: we have a system of linear equations and we want to solve it. How do we do that? How do we do that, okay? Go back to high school. That's what we did, right? And so essentially what you do is you perform, you know, Gaussian elimination to actually solve a system of linear equations like that, right? So you basically express some of the variables in terms of the other ones, okay. And so you basically isolate a bunch of variables, x1 to xm, and you express them in terms of the other variables. And so this is basically what we call a basic solution. The variables on the left side are going to be called the basic variables. Sometimes we say they form the basis, right? So, they are the basic variables.

Okay? And we're going to love these guys. Okay? You'll see why. And the other variables I call the non-basic variables. Okay? They are the guys that we don't really care about. Okay? And then of course you have the b's, which are the values that we're going to give to the basic variables. Okay?
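To make this concrete, here is a small sketch of that re-expression step in code. The system, the numbers, and the choice of basic variables are all invented for illustration; they are not from the lecture's slides.

```python
import numpy as np

# A toy system A x = b with m = 2 equations and n = 4 variables
# (invented data, just for illustration).
A = np.array([[1.0, 1.0, 1.0, 0.0],
              [2.0, 1.0, 0.0, 1.0]])
b = np.array([4.0, 6.0])

basic, nonbasic = [0, 1], [2, 3]
B, N = A[:, basic], A[:, nonbasic]

# Gaussian elimination amounts to rewriting the system as
#   x_basic = B^{-1} b - B^{-1} N x_nonbasic.
Binv_b = np.linalg.solve(B, b)
Binv_N = np.linalg.solve(B, N)

# Whatever values the non-basic variables take, the equations still hold.
x = np.zeros(4)
x[nonbasic] = [1.0, 2.0]
x[basic] = Binv_b - Binv_N @ x[nonbasic]
assert np.allclose(A @ x, b)
```

The point is that once the basic variables are isolated, the non-basic variables are free knobs: any values for them determine the basic variables so that every equation is satisfied.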

So, this is a basic solution. Why? Because essentially we're going to take the basic variables that you see there, and we're going to give them, as values, the b's. Okay? And then we're going to take the non-basic variables, and we're going to give them all zeros, okay? And if we do that, obviously these equations are going to be satisfied, right? I'm basically saying x1 is equal to b1, you know, and xm is equal to bm, and all these other guys are zeros, so these equations are obviously satisfied. Okay? So this is a basic solution. Okay? I'm basically taking all the xi's and giving them the bi's, and I'm taking all the non-basic variables and assigning them to zero.

So in a sense, think about this: in this system of equations, there is a bunch of variables that are all zero, and that happens all the time in linear programming, right? So you will typically have a massive number of variables assigned to zero, and then these beautiful basic variables are going to be assigned the bi's that you see there, okay? So that's a basic solution, okay? We'll talk about basic solutions.
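As a quick sketch of a basic solution in code (again with invented numbers, not the lecture's example): set the non-basic variables to zero and solve for the basic ones.

```python
import numpy as np

# Toy system A x = b: m = 2 equations, n = 4 variables (invented data).
A = np.array([[1.0, 1.0, 1.0, 0.0],
              [2.0, 1.0, 0.0, 1.0]])
b = np.array([4.0, 6.0])

basic = [2, 3]                      # chosen basic variables
B = A[:, basic]                     # their columns in A

x = np.zeros(4)                     # all non-basic variables get 0
x[basic] = np.linalg.solve(B, b)    # basic variables get the b values

print(x)                            # -> [0. 0. 4. 6.], and A @ x == b
```

Here the basic variables happen to come out non-negative, so this particular basic solution is also feasible: a BFS, in the sense defined next.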

Now remember, a beautiful fantastic spot has this F in there. Right. So this is feasible, or fantastic, okay. So you're going to be fantastic, or feasible, when you satisfy these constraints, okay. So you have to make sure that all the variables, all the variables here, are greater than or equal to 0, okay. And so how do you do that? Well, you know, for the non-basic variables it's easy, right? All these guys are already zero. But you want to make sure that the basic variables are assigned non-negative values, okay? And so that means testing, when you actually isolate x1 to xm, that these guys are actually non-negative, okay? And if they are non-negative, what you have is a beautiful fantastic spot. You have a basic feasible solution.

Okay? So that's what you have here.

Okay? So a basic feasible solution is a solution where you isolate some of the variables, x1 to xm. It can be any m of these variables, right? So they are called the basic variables. They are assigned these b's that you have, and all of the non-basic variables which are there are assigned 0. Okay, basic feasible solution, okay.

You're going to say, why is this guy actually so obsessed with this? And you'll see why in a moment, okay. So, how do we find a beautiful, fantastic spot? We select m variables, okay. These will be the basic variables. We re-express them in terms of the other ones, okay. Not in terms of each other, right; only in terms of the non-basic variables, okay. We can do that easily by performing Gaussian elimination, okay. And if all the b's are non-negative, we know that we have a basic feasible solution.

Okay. So, you're going to tell me: yeah, yeah, yeah. But you are dealing with equations now, while in my linear program I have inequalities, right? So what are we going to do? Well, you know, there is a very simple thing that we can do. Okay?

If you have a set of inequalities like this, okay, what you can do is add some new variables, okay, s1 to sm, okay, and transform these inequalities into equations, okay? So you add one to the left-hand side, it has to be greater than or equal to zero, and there is one for every one of the constraints. They are all forced to be greater than or equal to zero, and we call these guys the slack variables. And the intuition here is very simple: look at the expression you have there. It will have some value once you assign the variables, and typically it has to be smaller than or equal to the b. So either they are equal, in which case the slack variable is going to be zero, or otherwise the slack variable is going to be the difference between the b and the left-hand side. Okay.

So essentially, they are transforming these inequalities into equations by picking up the slack, and really having a variable capturing that slack, okay? So, very simple, right? If you have a linear program with inequalities, you can transform these inequalities into equations, and that's what we want to do when we are actually implementing this method on a computer. Right?
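A small sketch of this transformation in code (the inequality data here is invented): append one slack column per constraint, so that A x &lt;= b becomes [A | I] (x, s) = b with s &gt;= 0.

```python
import numpy as np

# Toy inequalities A x <= b with x >= 0 (invented data).
A = np.array([[1.0, 2.0],
              [3.0, 1.0]])
b = np.array([4.0, 5.0])
m, _ = A.shape

# One slack variable per constraint: [A | I] (x, s) = b, s >= 0.
A_eq = np.hstack([A, np.eye(m)])

# Any feasible point determines its slacks: s = b - A x.
x = np.array([1.0, 1.0])
s = b - A @ x
print(s)                      # -> [1. 1.]: both constraints have slack 1
assert np.allclose(A_eq @ np.concatenate([x, s]), b)
```

When a constraint is tight, its slack is zero; otherwise the slack is exactly the leftover gap, which is the intuition described above.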

Okay? So, once again, to summarize: you start with your linear program, you take all these inequalities, okay, you add slack variables, and you have equations. Now you select m variables; these are going to be the basic variables. You re-express them in terms of the non-basic variables, performing Gaussian elimination, and then, if all the b's are non-negative, you have a basic feasible solution; you have a beautiful fantastic spot, okay?

So, now, this is the key point. Okay. This is the most important fact that you need to know. A vertex, in the geometric sense that I've shown you in the last lecture, is actually a beautiful, fantastic spot. It's a basic feasible solution. Okay. So in a sense, now we have a beautiful algorithm. I'm giving you an algorithm, right, that you can actually code on a computer. Right?

So the naive algorithm that we have now is that we just want to look at all the vertices. Okay. But the vertices are basically the basic feasible solutions. So you can just enumerate all these basic feasible solutions, and then take the one which is best. So you generate all the basic feasible solutions. How do you do that? Well, that's just what I explained to you, right? You isolate m variables. You express them in terms of the non-basic variables, okay? Then you perform Gaussian elimination, and then you can test if this is feasible. If it is feasible, you have a basic feasible solution, okay? So we have a naive algorithm which you can implement on a computer, which will generate all these basic feasible solutions, and for every one of them you can compute the value of the objective function. You essentially plug in the values that are in that solution. Okay? And then you select the one which has the best cost.

Now, how many of these are there? Well, that's the issue, right? There will be a very large number of them, right? It's like choosing m variables out of the n, and you want to explore all these combinations. Okay? So there are many of them. Okay? And so, in practice, the naive algorithm that I'm showing you here is not going to work nicely from a computational standpoint, from a performance standpoint. But we'll see how to improve it, you know, in the next lecture. Okay?
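The naive algorithm can be sketched like this. This is an illustration in Python with invented data; the function name and tolerances are mine, not the lecture's. For every choice of m basic variables, solve for them, keep the feasible candidates, and return the cheapest.

```python
import numpy as np
from itertools import combinations

def naive_enumeration(A, b, c):
    """Try every basic solution of A x = b (x >= 0) and return the
    feasible one minimizing c @ x.  There are C(n, m) candidates,
    which is exactly why this naive method blows up."""
    m, n = A.shape
    best_x, best_cost = None, np.inf
    for basic in combinations(range(n), m):
        cols = list(basic)
        B = A[:, cols]
        if abs(np.linalg.det(B)) < 1e-12:      # columns not independent
            continue
        xB = np.linalg.solve(B, b)             # values of the basic variables
        if np.any(xB < -1e-9):                 # basic, but not feasible
            continue
        x = np.zeros(n)
        x[cols] = xB                           # non-basic variables stay 0
        if c @ x < best_cost:
            best_x, best_cost = x, c @ x
    return best_x, best_cost

# Invented toy LP in equation form (slacks already added):
#   min -x1 - x2   s.t.   x1 + 2 x2 + s1 = 4,   3 x1 + x2 + s2 = 5
A = np.array([[1.0, 2.0, 1.0, 0.0],
              [3.0, 1.0, 0.0, 1.0]])
b = np.array([4.0, 5.0])
c = np.array([-1.0, -1.0, 0.0, 0.0])

x, cost = naive_enumeration(A, b, c)
print(x, cost)    # optimum at x1 = 1.2, x2 = 1.4, cost -2.6
```

On this tiny example only six bases exist, but the binomial count C(n, m) grows explosively, which is the performance problem mentioned above.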

So, basically, summarizing what we have seen. Okay? What you want to do, okay, is to be on top of the world, which is basically a vertex. Okay? And a vertex is a basic feasible solution. And getting a basic feasible solution is actually pretty easy. It basically consists in isolating m variables, okay, expressing them in terms of the other ones, and testing that the values you assign to them are actually non-negative. Okay? We'll see a more clever way of actually exploring all these vertices, these basic feasible solutions, you know, next time.

Thank you very much.