1:56

immediately, they tell us the value of the objective function. And not only that, they tell us how many barrels of beer and ale to make, because we're sitting at the point where we have A and B in the basis, and therefore the slack variables are zero. So it tells us immediately that we'd better make 28 barrels of beer and 12 barrels of ale. So that's the idea: we just work with the elements in this tableau. And then all we're going to do is write the code to

build the tableau and then to do the pivots. And this is very straightforward Java code. It's going to be a two-dimensional array, clearly. We have our number of constraints and our number of variables. So we'll write a constructor that takes the constraint coefficients, the array of right-hand sides for the constraints, and the coefficients of the objective function as arguments. It picks out m and n. So how big an array do we need? (m + 1) by (m + n + 1). So we just make that array and just fill

it in from our arguments. Fill in first the coefficients, and then put the initial values of the slack variables along the diagonal. Java arrays are initialized to zero, so we don't have to initialize all of the others. And then you put the objective function and the right-hand sides of the constraint equations into the tableau. Simple as that. Every one of those lines of code is completely transparent. So that's building the initial values of

the tableau. So now we have to find the entry we're going to pivot on: some entry in the array, in row p and column q, chosen by the rule we've given, which is called Bland's rule. Remember, Bland is the one who wrote the classic Scientific American article about beer. So we use Bland's rule: just go through the last row, which is the objective function, and look for a positive entry. That's all it does; it just goes through and looks for a positive entry. If it finds one, it returns that column, and that's what we're going to use. So it's the first one that's positive; if several are positive, it always returns the first one. And if it can't find one, then we're at the optimum and we can stop. So that's the entering column. We

also have to find the pivot row, and to find the pivot row we use the minimum ratio rule. We just go through all of the possible rows and do a calculation involving the ratio between the right-hand-side entry and the entry that we would pivot on. We only consider the positive ones, so it's just a little check. And again, if there's a tie, you choose the first one. It's a simple calculation, going through all the entries to figure out which row gives you the minimum ratio, and then returning p. So now we have p and q, where we want to pivot.

Â 5:54

So we have the values p and q, and we just go through and do the pivot. This is the same calculation that you do for Gaussian elimination: zero out column q, and then scale row p. I'm not going to do the details of that calculation; it's what you get when you solve for one variable and substitute into the other equations, the same calculation as when you're solving simultaneous equations. And that's it. The simplex algorithm is just: until you get a -1 return from Bland's rule, you compute q, you compute p, and you pivot, and you keep going until you're done. And then, to read off your maximized value, just look in the corner. Not too much code to implement a bare-bones version of simplex. It's a fairly straightforward

algorithm. Now, what's remarkable about it is, remember, in multiple dimensions the number of extreme points of the simplex could be high, could be exponential. But what's remarkable about the simplex algorithm, still today somewhat mysterious, and certainly very mysterious for the first several decades that it was used, is that it seems to find the answer after just a linear number of pivots, despite all those points out there on the simplex. When Delta Airlines is scheduling its pilots, or another company is figuring out where to drill, or Walmart is trying to move its trucks around, these huge problems with millions of variables, it's linear time. It could be exponential time, but it's linear time.
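Putting the pieces together, the pivot step and the main loop described above might look like this sketch; again, the names are illustrative, and it assumes a maximization problem in standard form with b ≥ 0 so the all-slack basis is a valid starting point:

```java
// Bare-bones simplex: build the tableau, then repeatedly choose q by Bland's
// rule and p by the minimum ratio rule, and pivot, until no positive entry
// remains in the objective row.
public class Simplex {
    private final double[][] a;   // the (m+1)-by-(m+n+1) tableau
    private final int m, n;

    public Simplex(double[][] A, double[] b, double[] c) {
        m = b.length;
        n = c.length;
        a = new double[m + 1][m + n + 1];
        for (int i = 0; i < m; i++)
            for (int j = 0; j < n; j++)
                a[i][j] = A[i][j];
        for (int i = 0; i < m; i++) a[i][n + i] = 1.0;   // slacks on the diagonal
        for (int j = 0; j < n; j++) a[m][j] = c[j];      // objective in the last row
        for (int i = 0; i < m; i++) a[i][m + n] = b[i];  // right-hand sides
    }

    private int bland() {
        for (int q = 0; q < m + n; q++)
            if (a[m][q] > 0) return q;   // first positive entry in the objective row
        return -1;                       // none: we are at the optimum
    }

    private int minRatioRule(int q) {
        int p = -1;
        for (int i = 0; i < m; i++) {
            if (a[i][q] <= 0) continue;  // only rows with a positive pivot-column entry
            if (p == -1 || a[i][m + n] / a[i][q] < a[p][m + n] / a[p][q]) p = i;
        }
        return p;                        // -1 means the LP is unbounded
    }

    // Same elimination step as in Gaussian elimination:
    // zero out column q everywhere except row p, then scale row p.
    private void pivot(int p, int q) {
        for (int i = 0; i <= m; i++)
            for (int j = 0; j <= m + n; j++)
                if (i != p && j != q)
                    a[i][j] -= a[p][j] * a[i][q] / a[p][q];
        for (int i = 0; i <= m; i++) if (i != p) a[i][q] = 0.0;
        for (int j = 0; j <= m + n; j++) if (j != q) a[p][j] /= a[p][q];
        a[p][q] = 1.0;
    }

    public void solve() {
        while (true) {
            int q = bland();
            if (q == -1) break;                        // optimal
            int p = minRatioRule(q);
            if (p == -1) throw new ArithmeticException("unbounded LP");
            pivot(p, q);
        }
    }

    public double value() {
        return -a[m][m + n];   // the maximized value sits in the corner
    }
}
```

On the brewer's-problem numbers from earlier in the lecture, this reaches the optimal value 800, matching the 12 barrels of ale and 28 barrels of beer read off from the final tableau.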

That's why it's been so effective. Now, you could use other pivoting rules, and there has certainly been a huge amount of research on trying to figure out exactly what to do. Nobody knows a pivot rule that guarantees the algorithm is fast, that it runs in polynomial time. Most of them are known to be exponential in the worst case, but again, the real-world problems maybe aren't worst case. And certainly people have looked at it in all different ways to try to understand why the simplex algorithm usually takes polynomial time; this is a recent paper on the topic. Now, there are things that can happen during the

execution of the algorithm that we've glossed over. One thing is that you could get stuck: you could get a new basis, making the substitution but still winding up at the same extreme point, as when a bunch of lines intersect at the same extreme point. That's called stalling, and it's a situation that you have to watch out for. The other thing is that when you have this kind of degeneracy, you could get into a cycle, where you go through different bases all corresponding to the same extreme point. That doesn't seem to occur in practice with the way that we implement it, or the way that anyone would implement it. Just choosing the first one, which is Bland's rule, is going to guarantee that at least you don't get caught in an infinite loop, which would be bad. That doesn't seem to happen anyway. But it's a

problem to think about. There are a number of other things that have to be thought about in order to actually use this to schedule airlines or whatever. You certainly have to be careful to avoid stalling. One thing is that most of the equations tend to involve relatively few variables, and people who have experience with these systems of equations know that in some situations you can find the tableau filling up with lots of non-zero values. So you want an implementation that uses a sparse matrix representation, so that the amount of space you take is proportional to the number of non-zero entries. We looked a couple of times at data structures of that sort, and that's certainly a consideration in simplex implementations.
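For instance, a row of the tableau might be stored sparsely as parallel arrays of column indices and values. This is an illustrative sketch, not the lecture's code:

```java
import java.util.Arrays;

// Illustrative sparse-row storage: only the non-zero entries of a tableau row
// are kept, so space is proportional to the number of non-zeros.
class SparseRow {
    final int[] cols;      // column indices of the non-zeros, in ascending order
    final double[] vals;   // the corresponding values

    // Build from a dense row, keeping only the non-zero entries.
    SparseRow(double[] dense) {
        int nnz = 0;
        for (double x : dense) if (x != 0.0) nnz++;
        cols = new int[nnz];
        vals = new double[nnz];
        int k = 0;
        for (int j = 0; j < dense.length; j++)
            if (dense[j] != 0.0) { cols[k] = j; vals[k++] = dense[j]; }
    }

    // Look up entry j by binary search over the stored column indices.
    double get(int j) {
        int k = Arrays.binarySearch(cols, j);
        return k >= 0 ? vals[k] : 0.0;
    }
}
```

A production solver would also need sparse updates during pivoting, which is where fill-in becomes the real concern; this sketch only shows the storage idea.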

The other thing, which we haven't talked about much in this course, is that whenever you're working with floating-point approximations to real numbers and making a lot of calculations, you have to make sure that you have control over the errors in the calculations. Scientific computation is certainly concerned with keeping control of that situation, again something that we haven't talked about

much in this course. The other thing is that it could actually be the case that your constraints are infeasible: there's no way to satisfy them. And it's a bad idea to run simplex on an infeasible problem, because it won't ever get to an optimum. So typically, what you have to do in practice is run a different simplex algorithm on a transformed version of the problem. There's really interesting theory behind all that, and you use the same code to detect infeasibility, but something has to be done. And the other possibility is that there

aren't enough constraints, so that you missed with one of those half-planes somewhere, and the solution is unbounded. That's also a situation that you want to avoid. With the code that we gave, that would be the case where you use Bland's rule to find a column, then look for a row, and there's no row, so you have to check for that as well. Each one of these things is complicated enough that in this case we'd probably state the best practice as: don't go in and try to implement simplex yourself unless you're prepared to devote a really substantial amount of time to it. And there's really no need to

do that nowadays, because basic implementations of simplex are readily available in many different programming environments. Routinely, nowadays, people solve LPs with millions of variables, and it's so easy to reduce practical problems to an LP: just throw in another constraint. Any constraint that you might have, say, that a certain pilot doesn't want to be scheduled early in the day, or late in the day, or union rules, or whatever, you just put in as a constraint without worrying that much about it. Then add more variables, whatever you need to do, and send it to an industrial-strength solver. And not only are there solvers; nowadays there are modeling languages that make it even easier to write down the LP. So there are plenty of resources available to make use of the simplex algorithm nowadays. And it's important.

Â This is a really, you know, important quote from just a few, few years ago

Â that's worth reviewing. so, they have this is so widely used. They have benchmarks on

Â how well linear programming implementations do. So, even in 1988, it's

Â not all that far back, they have a benchmark that would have taken 82 years

Â to solve and even that's just a guess. and using the best linear programming

Â algorithms of the day would take 82 years. Just fifteen years later, you could solve

Â the same problem in one minute, that's a factor of 43 million. of this, factor of

Â 1000 was due to increased processor speed, so fifteen years we have computers that

Â are 1000 times faster but 43,000 is due to algorithm improvements. And there has been

Â more even since then. So, the importance of knowing good algorithms and working

Â with good algorithms is unquestioned, in this case. and, of course, this algorithm

is still being heavily studied. It started with Dantzig's simplex algorithm and other basic knowledge about the algorithm and about linear programming that was applied in practice in 1948. The basic idea of linear programming actually precedes Dantzig, and it eventually led to a Nobel Prize. What's interesting is that Dantzig's algorithm was so good, there was actually not much development on it for about three decades. But then, in an amazing, surprising result, Khachiyan proved that you could solve linear programming problems in polynomial time. That was quite a shock at the time, and we'll talk more about it in the last lecture of the course. And since then, knowing that there are algorithms that potentially could beat simplex, people have actually developed algorithms that do beat simplex. So there's still a great deal of research going on on simplex and on implementations of linear programming solvers. That's a quick outline of what it means to implement and solve linear programs.
