0:00

Okay, so welcome back. This is Discrete Optimization: Mixed Integer Programming, part 4, and this is going to be quite exciting. What we are going to see today is some really beautiful mathematics with very interesting computational results. We are going to talk about polyhedral cuts, which are very different from the cuts that you have seen before, and we will talk about warehouse location and node packing.

Â So this is the motivation: we are going to talk about the convex hull. This is the linear relaxation, which we saw last time, of a very simple mixed integer program. Now, in practice, what we are really interested in are the integer points that you see. And one of the things we are going to look at is the convex hull of these particular points that you see on the screen.

Â So this is 2D; this is a larger 2D example. Once again, this is the linear relaxation of a MIP, and these are all the integer solutions that you see there. That's the real thing we are interested in, right? And this is the convex hull, the convex envelope. That means you take all these points, and this is essentially the smallest convex set which includes all of them. And this is 3D, a 3D linear relaxation. You see all the integer points inside. Once again, you can take the convex hull; that's the polyhedron that you see there, and it is the convex hull of all the integer points.

Â Now, why am I talking about this? Assume that you have this beautiful convex hull; you have this rigid polyhedron there. What could you do? Think about it. Well, one of the things you can do is say: I'm going to use an LP here and optimize my objective function. And by now you know that if you optimize over this thing, you reach the optimum at a vertex, right? What is that vertex? It's going to be an integer solution, and it's going to be the optimal solution to your MIP. So if I had this convex hull, I could use linear programming and I would get the optimal solution immediately.

Â Isn't that beautiful? So, in a sense, this is what we are going to talk about: these polyhedral cuts, which are essentially the facets of the convex hull. If we had all of them, we would just use linear programming and we would be very happy. Now, these cuts are valid: they don't remove any solution. I took the convex envelope of all the integer points; I'm not removing any of them, they are all included. And they are also as strong as possible. So if I have all of them, I can just use linear programming and it will find the optimal solution.
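To make this concrete, here is a small sketch in Python. The integer points below are hypothetical stand-ins for the 2D example on the slides (the actual coordinates are not given in the lecture); `scipy.spatial.ConvexHull` recovers the facets, and optimizing a linear objective over the hull lands on one of the integer points.

```python
import numpy as np
from scipy.spatial import ConvexHull

# Hypothetical integer-feasible points of a small 2D MIP
# (stand-ins for the slide's example).
pts = np.array([[0, 0], [1, 0], [2, 1], [2, 2], [0, 2], [1, 2]])

hull = ConvexHull(pts)
print(len(hull.vertices))  # 5 corners; (1, 2) lies on a facet, not at a corner

# Maximizing a linear objective over the hull is attained at a vertex,
# which is by construction one of the original integer points.
c = np.array([1.0, 1.0])
best = max(hull.vertices, key=lambda v: pts[v] @ c)
print(pts[best])  # [2 2]: an integer optimum, no branching needed
```

This is exactly why having all the facets would let plain LP solve the MIP; the catch, of course, is that the number of facets can be enormous.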

Â So why are these polyhedral cuts very interesting? Because they exploit the problem structure. In that sense, they are completely orthogonal to the syntactic cuts that we saw in the last lecture, which were based on the tableau. Basically, what we're going to do here is look at specific constraints, or specific properties of the problem, and then generate these polyhedral cuts. Of course, they share the same spirit as the syntactic cuts.

Â They are valid: they don't remove any solutions; they only strengthen the formulation of the MIP. They are also cutting: we want to generate those that cut off the optimal solution of the linear relaxation, so that we actually get closer and closer to the integer solution. And of course we don't need to generate all of them; we can generate enough of them to solve the problem effectively. Now, in a particular context, an application may use many of them. It's like global constraints in constraint programming: you may look at very different classes of cuts and generate them. They will exploit different substructures of the application.

Â Okay, so when is an inequality a facet of this convex hull? That is the critical thing that we still have to define, and what you're going to see is that the mathematics behind it is easy. Of course, the whole creativity is in finding these facets and then carrying out this simple proof. So, to show that an inequality defines a facet, what you need to do is essentially find n affinely independent solutions, that is, points which satisfy the inequality at equality; these points have to be on that facet. And affine independence is a simple thing to show. If you have a set of n points, they are affinely independent if, when you extend every one of them with one more coordinate and put a 1 in that coordinate, you can show that the extended points are linearly independent. And linear independence you know: it simply says that if a combination of these n points with coefficients alpha 1 through alpha n is equal to 0, then all the alphas have to be 0. So, in a sense, showing that an inequality is a facet basically amounts to finding these points and showing that they are affinely independent.
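As a quick sanity check, affine independence is easy to test numerically. This is a small sketch (not from the lecture): append a 1-coordinate to each point and check that the extended vectors are linearly independent via the matrix rank.

```python
import numpy as np

def affinely_independent(points):
    # Points are affinely independent iff the vectors obtained by
    # appending a 1-coordinate to each of them are linearly independent,
    # i.e. the stacked matrix has full row rank.
    extended = np.hstack([points, np.ones((points.shape[0], 1))])
    return np.linalg.matrix_rank(extended) == points.shape[0]

# Three non-collinear points in the plane: affinely independent.
print(affinely_independent(np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])))  # True

# Collinear points are not.
print(affinely_independent(np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]])))  # False
```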

Â So let's look at a particular example that we saw before: warehouse location. Remember, we had to open or close warehouses, and once a warehouse is open, it can serve customers. The goal was to minimize the cost of opening the warehouses and then serving the customers. So we had this beautiful MIP model. We had a very strong relaxation here, which was basically using these very simple constraints: simple inequalities on two variables, stating that you can only serve a customer c with warehouse w if warehouse w is open. Warehouse w being open means that xw is equal to 1, and so obviously what you need is ywc smaller or equal to the value of xw. That's what you see over here.

Â So what you can wonder is: are these inequalities facets of the convex hull? These simple inequalities are indeed the facets. To show it, you need to look at these constraints at equality, instead of as inequalities, and find n points which are affinely independent.

Â I'm going to cheat a little bit here and assume that we have only one warehouse. Then you can generate these points, which satisfy the constraint at equality. What we are looking at here is warehouse w, customer 1, and this very simple constraint. For the first point, we assume that xw is zero. If xw is zero, then yw1 has to be zero, and that's the first point that you see there. All the other customers we put to zero as well, so we satisfy the constraint at equality.

Â Now, when xw is 1, then to satisfy the constraint at equality, we have to serve customer 1 with that particular warehouse, and that's the second point that you see. The other points simply say: I can also take each of the other customers and state that it uses that warehouse as well. That doesn't affect this constraint, but it gives us different points, and they are going to be affinely independent. Of course, you can generalize this to many warehouses by finding the right values for the other warehouses, putting a one for every one of them in every one of these points, so as to get essentially n points. So, in a sense, this is the way you prove that something is a facet: you look for these points, and you show that they satisfy the constraint at equality and that they are affinely independent of each other.
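The construction just described can be checked mechanically. Here is a sketch with one warehouse and three customers (the counts are my choice, not the lecture's), building points that satisfy yw1 = xw at equality and verifying their affine independence with the extend-with-a-1 trick:

```python
import numpy as np

# Variables ordered as (xw, yw1, yw2, yw3): one warehouse, three customers.
# Candidate facet: yw1 = xw. We exhibit 4 feasible points satisfying it
# at equality, one per dimension of the variable space.
points = np.array([
    [0, 0, 0, 0],  # warehouse closed, nobody served
    [1, 1, 0, 0],  # warehouse open, customer 1 served
    [1, 1, 1, 0],  # additionally serve customer 2
    [1, 1, 0, 1],  # additionally serve customer 3
])

# Affine independence: append a 1-coordinate and check full row rank.
extended = np.hstack([points, np.ones((len(points), 1))])
print(np.linalg.matrix_rank(extended))  # 4: the points are affinely independent
```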

Â So let me look at another example now, because this one was kind of too easy: the formulation we had already contained some of the facets. We're going to look at another problem where it's not so obvious. This problem is called node packing. The key idea is that, once again, you have a graph with a set of vertices and edges between these vertices, and what you want to do is find the largest set of nodes such that no two of them are connected by an edge. That's why it's called node packing: you want to pack as many vertices as you can, but they can't be connected by an edge.

Â So this is a packing: a beautiful packing with one vertex. And of course, as soon as I take vertex 1, I can't take anything else, because all the others are connected to it. Now, this is a packing which has two nodes: it has node four and node six. These two are not connected to each other, and therefore this is a packing. Now, a question you may ask is: is there a better packing? Is there a packing where I can actually pack three nodes? So look at this graph and try to see whether you can find a packing with three nodes.

Â We're going to come back to that. But first, let's look at node packing and see if we can express it as a MIP. Once again, it's always the same story: what are going to be the decision variables, and what are going to be the constraints? In this particular case, the decision variables are easy: we decide whether a particular vertex is inside the packing or not. Then we need constraints linking pairs of nodes. What we know is that if we take a node, any node which is adjacent to it, meaning there is an edge between them, cannot be inside the packing. So we put in a linear inequality to tell the MIP that this node and that node, because they are linked by an edge, cannot be inside the packing at the same time.

Â And this is the formulation that you see here. The variables x1 to x6 are associated with the vertices; they basically tell you whether a particular vertex is in the packing. Then you see these very simple inequalities, which tell you, for instance, that node x1 and node x2 cannot be packed at the same time because they are linked by an edge. Essentially, for every one of the edges you have one of these simple inequalities. Obviously, every one of these variables has to be zero or one, and the linear relaxation basically relaxes this by saying that they are between zero and one. So this is the linear relaxation of the problem.
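For graphs as tiny as the lecture's, you can get the true integer optimum by brute force and use it later to judge the relaxation. This is a sketch; the edge list below is a hypothetical stand-in, since the transcript does not reproduce the slide's graph (it only tells us that vertex 1 is connected to everything and that the best packing has two nodes).

```python
from itertools import combinations

def max_node_packing(n, edges):
    # Brute force over all subsets, largest first; fine for tiny graphs.
    for k in range(n, 0, -1):
        for subset in combinations(range(1, n + 1), k):
            chosen = set(subset)
            if not any(u in chosen and v in chosen for u, v in edges):
                return list(subset)  # no edge inside the subset: a packing
    return []

# Hypothetical 6-node graph: vertex 1 touches everything, 2..5 form a clique.
edges = [(1, 2), (1, 3), (1, 4), (1, 5), (1, 6),
         (2, 3), (2, 4), (2, 5), (3, 4), (3, 5), (4, 5), (5, 6)]
print(max_node_packing(6, edges))  # [2, 6]: the best packing has two nodes
```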

Â So when you look at this, you can wonder: what is the linear relaxation going to do? And what I told you last time is that the linear relaxation is going to be evil; it's never going to do what you want. It's always going to try to do better: in this particular case, to maximize the size of the packing. And here, the linear relaxation is basically going to assign one half to every one of the decision variables. The value of the objective function is then six times one half, which is three. So it's basically telling you that the maximum you will ever be able to do is to pack three nodes on this simple example. But of course this is terrible, right? This solution is fractional, in the worst fractional way.

Â So can we do better than that? Now we have to think. You look at this problem and you ask yourself: is there any property of the solutions that would strengthen my relaxation? We want to find a property of the solutions, essentially a constraint which is going to be satisfied by all of the integer solutions. In this particular case, there is a very useful concept that we find everywhere in optimization: the concept of cliques.

Â So when you look at this graph, you see basically two cliques. And when you see a clique like that with four vertices, where all the vertices in the clique are connected to all the other ones, how many vertices can you actually select in a packing? Well, at most one, because if you select one, it is connected to everything else, so you can't select any of the other vertices. So as soon as you find a clique, you know that you can select at most one of the vertices in that clique. So what we are going to do is take the formulation and add all the clique constraints that are present in the instance we looked at.

Â Actually, we can do better than that: we're going to look at maximal cliques. What is a maximal clique? It's a clique that cannot be extended further. So when you see this clique here, you can say: oh, there is a clique with three vertices. But that clique can be extended, so we won't state that particular clique constraint; we take the clique with four vertices instead, because it's larger and therefore more constraining. So we take a maximal clique for every one of these collections of vertices. As soon as you have a maximal clique, say over nodes one to five, you can state the constraint which says that at most one of these nodes can be selected in the solution. In this particular case, we state that the sum of x1 to x5 has to be smaller or equal to one.

Â Now, we can actually show that a maximal-clique constraint is a facet. If you want to know what the proof is doing, the kind of reasoning for finding all these affinely independent points is this: if I have a clique like this and I take any vertex outside it, then, since the clique is maximal, that vertex is missing at least one edge to the clique; there is at least one vertex of the clique it is not connected to. And that's how you will be able to find the affinely independent points.

Â So this is essentially the node packing MIP with all these clique constraints. We removed all the other inequalities, because they were subsumed by the clique constraints. They were weaker than the clique constraints: the fact that x1 and x2 cannot both be selected is subsumed by the fact that we know there is a clique between x1, x2, and x3, and therefore the clique constraint subsumes the edge constraint. So we basically have all the clique constraints here.
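If you want to generate these clique cuts yourself on a small graph, the classic Bron-Kerbosch recursion enumerates the maximal cliques. This is a sketch on a hypothetical graph of two triangles sharing vertex 3; the lecture's graph is not fully specified in the transcript.

```python
def maximal_cliques(nodes, edges):
    # Basic Bron-Kerbosch enumeration (no pivoting; fine for tiny graphs).
    adj = {v: set() for v in nodes}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    found = []

    def expand(r, p, x):
        if not p and not x:
            found.append(sorted(r))  # r is maximal: nothing left to add
            return
        for v in list(p):
            expand(r | {v}, p & adj[v], x & adj[v])
            p.remove(v)
            x.add(v)

    expand(set(), set(nodes), set())
    return found

edges = [(1, 2), (1, 3), (2, 3), (3, 4), (3, 5), (4, 5)]
for clique in maximal_cliques(range(1, 6), edges):
    # One cut per maximal clique: at most one vertex can be packed.
    print(" + ".join(f"x{v}" for v in clique), "<= 1")
```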

Â And that's the new linear relaxation that we are going to use. Now, once again, this linear relaxation is really clever. What is it going to do? One thing it notices is that x1 is all over the place: if we give a value to x1, it takes up room in every one of these constraints. So what it does is basically ignore x1, put x1 to 0, and then once again you have a completely fractional value for all of the other variables. So what is the linear relaxation giving you when you do this? It's giving you five times one half, which is 2.5. So we know already that the best packing we will ever be able to do is two. So this is actually a strong relaxation in this particular case.
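Since the transcript does not reproduce the 6-node graph, here is the same phenomenon on the smallest possible example, a triangle, using `scipy.optimize.linprog`: the edge-by-edge relaxation goes fractional at one half everywhere, while the single clique cut, a facet, already gives the integer answer.

```python
from scipy.optimize import linprog

# Node packing on a triangle (a 3-clique): maximize x1 + x2 + x3.
# linprog minimizes, so we negate the objective.
c = [-1.0, -1.0, -1.0]

# Edge-by-edge formulation: x_i + x_j <= 1 for each of the three edges.
edge_lp = linprog(c, A_ub=[[1, 1, 0], [1, 0, 1], [0, 1, 1]],
                  b_ub=[1, 1, 1], bounds=[(0, 1)] * 3)
print(edge_lp.x, -edge_lp.fun)  # all variables at 1/2, fractional bound 1.5

# Clique formulation: x1 + x2 + x3 <= 1 (a facet of the convex hull).
clique_lp = linprog(c, A_ub=[[1, 1, 1]], b_ub=[1], bounds=[(0, 1)] * 3)
print(-clique_lp.fun)  # 1.0: matches the best integer packing
```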

Â But one thing you have to remember here is that we don't have the convex hull of the node packing solution points, because this solution is still fractional. What that means is that these clique constraints do not completely characterize the problem; you would need other kinds of cuts, and there are other beautiful cuts that you can generate for doing this. This was just illustrating some of the cuts: they improve the linear relaxation nicely, but they are not necessarily capturing all the facets.

Â So, let me talk a bit about another example that we saw before: graph coloring. We were looking at how many colors we actually need to color this map of Europe; well, a small subset of Europe. One of the things we had was this MIP representation, where we would assign a zero-one decision variable to decide whether a particular vertex v was assigned color c. When you look at this formulation, what we had was that the objective function had to be greater than or equal to every color used, that every country had to be assigned exactly one color, and that adjacent countries could not be assigned the same color. That's what we had.

Â Now, this linear relaxation was actually pretty terrible: the only thing it gave us was that we need at least two colors, which is kind of obvious; if two countries are adjacent, you need two colors. So can we do better? Can we actually find some new constraints that strengthen this linear relaxation? What we just saw is the concept of a clique. So can we use that concept in this particular case as well?

Â So let's look at this graph coloring problem in terms of the graph. You basically have a vertex for every one of the countries, and an edge as soon as two countries are adjacent. And what you see now is that there are a bunch of cliques: this is a clique of size three, this is a clique of size four; there are basically plenty of cliques there. And once again, once you have that, you can do some reasoning about the number of colors that you need in a clique.

Â So if you have a clique of size three, how many colors do you need? Well, all these vertices are adjacent to each other, so you need at least three colors. If you have a clique of size four, how many colors do you need? At least four. So, in a sense, what you can do is add these clique constraints, and these are some of the constraints that you can use. Here we are adding a three-clique constraint. What it is basically telling you is that if you look at the color values of the vertices that are in the clique and you sum them, the sum has to be greater than or equal to three. Why three? Because we have color zero, color one, and color two: that's the smallest sum we can get with three distinct colors, and it is three.

Â If we have a four-clique, we can do exactly the same thing, but now it's zero, one, two, and three, which is six. So we basically take all these four vertices and say that the sum of the colors that they have has to be at least six.
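The right-hand sides of these cuts follow from simple arithmetic: a k-clique needs k distinct colors, and with colors numbered from zero the smallest achievable sum is 0 + 1 + ... + (k - 1). A one-line sketch:

```python
def clique_color_sum_bound(k):
    # A k-clique needs k distinct colors; numbering colors 0, 1, 2, ...
    # the smallest possible sum of the colors on the clique is
    # 0 + 1 + ... + (k - 1) = k * (k - 1) / 2.
    return k * (k - 1) // 2

print(clique_color_sum_bound(3))  # 3: the three-clique cut
print(clique_color_sum_bound(4))  # 6: the four-clique cut
```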

Â So we can put these constraints in there, once again clique constraints, and see what the relaxation is going to do. In this particular case, what you can see is that when you add the three-clique constraints, finding the optimal solution takes nine nodes, and the proof of optimality requires 41 nodes. When you also use the four-cliques, you get down to four nodes for finding the optimal solution and nine nodes for proving optimality. So what is nice is that you can add these constraints to strengthen the relaxation, and they shorten both the search for the optimal solution and the proof of optimality.

Â So this basically gives you the concept of a polyhedral cut. The way to think about it is that we are looking at constraints that tighten the relaxation by using the structure of your problem. When you can show that they are facets of the convex hull, that's really the best you can do, because such cuts are as strong as possible in that sense: if you had all of them, you would just use linear programming. What we'll do next time is look at more interesting ways of generating these polyhedral cuts, and we'll look at them, among other things, for the traveling salesman problem. Thank you.
