0:00

Okay, discrete optimization, mixed integer programming, branch and cut: second lecture. So this is the real stuff now.

One of the things that we saw in the last lecture is that we can generate these polyhedral cuts. But what we did was basically add them all inside the formulation at the beginning. That's not exactly how branch and cut works, so what I want to do now is take this concept and tell you what a branch and cut algorithm is really doing.

We're going to see two things: cover cuts, and you'll see why we talk about them, they're very frequent in mixed integer programs; and then we move to the TSP, which is probably the biggest success story of optimization. We can solve really, really large TSPs optimally at this point. Okay, so cover cuts first.

For cover cuts, we are looking at constraints which are linear inequalities: a sum of coefficients times variables which has to be smaller than b. The simplest kind of constraint, the kind you find all the time inside your models. So a sum of coefficients aj multiplied by binary variables xj in {0,1} has to be smaller than or equal to b. Can we find facets for these constraints? Can we find facets of the polytope which arises from these constraints?

The key concept here, the title says cover cuts, is the concept of a cover. Essentially, a cover is a set of variables such that, if you set them all to 1, you violate the constraint. It is basically telling you that these variables cannot all be 1. So what you do is look at a subset of these variables, and if the sum of their coefficients is greater than b, then you know that if you select all these variables, set them all to 1, you are going to violate the constraint. That's the concept of a cover: these variables cannot all be selected at the same time.

And a cover is going to be minimal, that's the opposite of maximal, if, when you remove any one of its variables, it is not a cover anymore. It's like the strongest possible cover.

So when you look at these covers now, we have a particular constraint, we take a cover, and I'm going to show you an inequality which is valid.

And what we do is basically what I told you before: if we select all these variables at the same time, we violate the constraint. So the constraint is basically saying: at least one of them has to be zero, or, equivalently, the sum of all these variables has to be smaller than or equal to the size of the cover minus one. You can't select all the variables. That is what this is telling you. So in a sense, their sum has to be smaller than or equal to the size of the set minus one, or at least one of them has to be zero. These are the cover cuts.

So if you look at the constraint here, this is a big constraint, right? Seven variables, with coefficients 11, 6, 6, 5, 5, 4, 1, and the weighted sum of these variables has to be smaller than or equal to 19. Can we find a cover cut? That's going to be easy, right? In this particular case the coefficients are in decreasing order; I made it easy for you. Take 11, 6, 6: that's already 23, which is greater than 19, so {x1, x2, x3} is a cover. And because of that, you know that x1 + x2 + x3 has to be smaller than or equal to 2: you can only select two of these variables and still have a feasible solution.

Now let's look at x3, x4, x5, and x6. If you sum their coefficients, you get 6 + 5 + 5 + 4 = 20, which is greater than 19, so this is also a cover, and the cover inequality that results from it is that the sum of these four variables has to be smaller than or equal to 3.

So this is the essence: you start from an inequality like this, with big coefficients, and you derive a simple cover inequality which limits the sum of a subset of the variables. These are cover cuts.
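To make the definition concrete, here is a small sketch, not from the lecture, that checks the cover and minimal-cover conditions on this example by brute force. The helper names `is_cover` and `is_minimal` are my own:

```python
# Constraint from the lecture: 11x1 + 6x2 + 6x3 + 5x4 + 5x5 + 4x6 + 1x7 <= 19
# (variables 0-indexed below, so index 0 is x1).
a = [11, 6, 6, 5, 5, 4, 1]
b = 19

def is_cover(idx):
    # A cover: setting all these variables to 1 violates the constraint.
    return sum(a[j] for j in idx) > b

def is_minimal(idx):
    # Minimal: dropping any single variable destroys the cover property.
    return is_cover(idx) and all(not is_cover(idx - {j}) for j in idx)

print(is_minimal({0, 1, 2}))     # {x1,x2,x3}: 11+6+6 = 23 > 19
print(is_minimal({2, 3, 4, 5}))  # {x3,x4,x5,x6}: 6+5+5+4 = 20 > 19
```

Each minimal cover C found this way yields the valid inequality sum of xj over C <= |C| - 1, exactly the two cuts derived above.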

Now, you can extend them, you can make them stronger, and the key idea is that if you have a cover cut, you can extend it by taking the other variables whose coefficients are at least as large as all the coefficients of the cover. That's what this formula is expressing: instead of summing over the cover C, you sum over this extended set, which contains C but also all the other variables whose coefficient is at least as large as every coefficient of C. By definition, adding these variables makes the cut stronger.

Let me show you an example; it's easier on an example. Take the cover cut that we have seen: x3 + x4 + x5 + x6 has to be smaller than or equal to 3. You see that the coefficients of x1 and x2 are at least as large as all of these coefficients, so you make the cut stronger by adding x1 and x2 to the left-hand side. If their coefficients were smaller, you could not do this; but here the coefficients are always at least as large as the coefficients of the cover.
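The extension rule can be sketched in the same style; `extend` is a hypothetical helper name, and the threshold test follows the rule just described:

```python
# Extended cover: start from C = {x3,x4,x5,x6} (0-indexed: {2,3,4,5}) and add
# every variable whose coefficient is at least the largest coefficient in C.
a = [11, 6, 6, 5, 5, 4, 1]

def extend(cover):
    threshold = max(a[j] for j in cover)
    return cover | {j for j in range(len(a)) if a[j] >= threshold}

E = extend({2, 3, 4, 5})
print(sorted(E))  # adds x1 (coefficient 11) and x2 (coefficient 6)
```

The extended inequality keeps the same right-hand side: the sum of xj over E must still be smaller than or equal to |C| - 1 = 3.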

So that's the stronger cover. Okay, so this is the idea of branch and cut. Branch and cut consists of, first, modeling the problem as a MIP. Then you solve the linear relaxation; if the linear relaxation gives you an integral solution, you are finished, you have an optimal solution. Otherwise, what you do is find a polyhedral cut which cuts off the optimal solution of the linear relaxation. That's the key step, and obviously you would like it to be a facet if possible. And if you find such a beautiful thing, you go back to step 2 and solve the linear relaxation again: it's stronger now. Why is it stronger? Because you cut off the optimal solution of the previous linear relaxation. And you iterate this.

At some point you can't find any of these cuts anymore, either because you have found all of them or because it's too hard to actually find them. Then what you do is branch, and you can repeat these steps, or you can at some point decide that it's not even worth generating more cuts. But this is the basic framework for branch and cut.

Now, the key point here is that we have to generate these polyhedral cuts that cut off the optimal solution of the linear relaxation. This is what we need to do. We are not going to do what I did in the previous lecture, which was just listing all the possible cuts we could find. What we want to do is, at some point, look at the solution of this linear relaxation and say: wow, okay, I want to cut it.

To give you an analogy, you have to think about feeding babies, right?

So there are two types of parents. When you have a new kid, there are the parents that read the books and do everything by the rules. It's written in the book that this kid has to eat every three hours. So this poor kid is sleeping, but three hours have gone by, so they wake up the kid and feed the kid, and then try to put the kid back to sleep, and that never works, of course. These parents are, essentially, generating all the cuts up front.

And then there are the other parents, and I'm not saying this is better or worse. The other parents are saying: this kid is sleeping beautifully, I'm going to let this kid sleep, and when the kid wakes up and yells, you don't know why, you basically feed the kid. That's demand feeding. And that's what we are doing: we generate these cuts on demand.

Okay, so this is what we are going to do, and it's called a separation problem. What you have is the optimal solution of the linear relaxation, and what you want to do is look for a particular cut, in a particular class of cuts, that is going to cut off this optimal solution of the linear relaxation. So this is what we do for cover cuts: we have this optimal solution of the linear relaxation, and we are wondering, can I find a cover cut that removes this optimal solution of the linear relaxation?

So we know the cover inequality. It is saying that this subset of variables can't all be one, so their sum should be smaller than or equal to the cardinality of the cover minus 1. That can be rewritten as: at least one of them must be 0. So you take 1 minus each variable, which is 1 if the variable is 0 and 0 if the variable is 1, and you are basically saying that the number of variables which are 0 has to be greater than or equal to 1.

And so essentially, finding a cover cut will require two conditions. First, C has to be a cover, that's the second condition you see there: when you take the sum of the coefficients of the elements in C, that sum has to be greater than the right-hand side of the constraint. And second, we want the property that, in the linear relaxation, the sum of 1 minus these values is smaller than one, so that the cover cut is violated. So essentially, what we want to find is a cover cut that is violated by the solution of the linear relaxation. If we find that, we are generating these cuts on demand.

So you say: ooh, I have this linear relaxation, I want to find a cover cut, and that cover cut has to be violated. That's what you are trying to do. Essentially, we have two conditions here, so we could view this as a feasibility problem, but we can also view it as an optimization problem, because that's what linear programming, or mixed integer programming, does. So what we do is minimize this expression, the expression that we want to be smaller than one, subject to the definition of a cover. Essentially, you look at all the indices j of the variables inside the constraint, you want to select a subset of them which is a cover, and minimize this function. If the optimal value of this function is smaller than 1, you have a violated cover cut, and the elements whose variables are equal to 1 form the cover. So you solve this mixed integer program, and you find a cover cut.

Now you tell me: wow, wow, wow, you guys are trying to solve a mixed integer program, and you want to solve another mixed integer program to solve that mixed integer program, and you want to do that repeatedly. What are you doing? Let me address that question in a moment. But first, let me give you an example.

So what you see here is a big constraint, bigger than the one I showed you before: the right-hand side is 178, and you see the coefficients ranging from 45 to 125. Now assume that the linear relaxation is giving you this optimal solution: the first variable is 0, the second one is 0, the third one is 3/4, the fourth one is 1, the fifth one is 1/2, and the next one is 0.

So what is the program we have to solve to find the cover cut? This is the MIP that is actually solving the separation problem. You minimize this objective. Why is the first coefficient 1? Because you take 1 minus the value in the linear relaxation, which is 0. You have that second 1 for the same reason. Then you have a coefficient 1/4 on the third variable. Why? Because the LP value is 3/4, and 1 minus 3/4 is 1/4, and that's how you get this objective function. It is essentially what we saw in the previous slide. And then, of course, you have to have a cover: you want the choice of the 0/1 values for these variables to be such that, multiplied by the coefficients, their sum is greater than 178.

So this is the MIP that we have to solve for finding this cut. This is the separation problem. If we find a solution to it with value smaller than one, we have found a cut which separates the solution of the linear relaxation from the convex hull.
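The slide's full coefficient list isn't recoverable from the transcript, so this sketch assumes hypothetical coefficients in the stated 45 to 125 range with b = 178, and uses the LP values given above; the structure of the separation problem is what matters, and it is solved here by brute force rather than as a MIP:

```python
from itertools import combinations

# Assumed data: the coefficients are hypothetical (only their 45..125 range
# and b = 178 are stated in the lecture); x_lp is the LP solution described.
a = [45, 46, 79, 54, 53, 125]
b = 178
x_lp = [0.0, 0.0, 0.75, 1.0, 0.5, 0.0]

# Separation: minimize sum over C of (1 - x*_j), over all covers C.
# If the optimum is < 1, the cover cut sum of x_j over C <= |C| - 1
# is violated by x*.
best_val, best_cover = float("inf"), None
for r in range(1, len(a) + 1):
    for C in combinations(range(len(a)), r):
        if sum(a[j] for j in C) > b:              # C is a cover
            val = sum(1 - x_lp[j] for j in C)
            if val < best_val:
                best_val, best_cover = val, set(C)

print(best_cover, best_val)  # {2, 3, 4} with value 0.75 < 1: a violated cut
```

With these assumed numbers, the cut x3 + x4 + x5 <= 2 separates the LP point, since 0.75 + 1 + 0.5 = 2.25 > 2.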

So this is, I told you, this beautiful mixed integer program, and the question that I'm asking you is: does it remind you of something? It's minimizing a linear sum of 0/1 variables, subject to a weighted sum of these variables being greater than something. Well, look, if I do a variable transformation, if I write zj as 1 minus yj, what do I get? The minimization becomes a maximization of a sum of variables. And this inequality, which is a greater-than, turns into an inequality which is a smaller-than: a weighted sum of variables that has to be smaller than something. So what we have there is a knapsack problem. It's like we have closed the loop, in a sense: we started with the knapsack, and now we are generating the cuts in branch and cut using a knapsack algorithm.
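Since the coefficients are integral, the cover condition "weighted sum greater than b" is the same as "greater than or equal to b + 1", and the complementation can be checked numerically. A minimal sketch, with hypothetical objective coefficients c standing in for the 1 minus LP values (these are assumptions, not slide data):

```python
from itertools import product

# Substitution z_j = 1 - y_j turns the separation problem
#   min sum c_j z_j   s.t.  sum a_j z_j >= b + 1   (z binary)
# into the knapsack
#   max sum c_j y_j   s.t.  sum a_j y_j <= sum(a) - b - 1,
# and the two optima differ exactly by the constant sum(c).
a = [11, 6, 6, 5, 5, 4, 1]
b = 19
c = [0.4, 0.7, 0.25, 0.0, 0.5, 1.0, 1.0]   # hypothetical 1 - x*_j values

def separation_min():
    return min(sum(cj * zj for cj, zj in zip(c, z))
               for z in product((0, 1), repeat=len(a))
               if sum(aj * zj for aj, zj in zip(a, z)) >= b + 1)

def knapsack_max():
    capacity = sum(a) - b - 1
    return max(sum(cj * yj for cj, yj in zip(c, y))
               for y in product((0, 1), repeat=len(a))
               if sum(aj * yj for aj, yj in zip(a, y)) <= capacity)

print(separation_min(), sum(c) - knapsack_max())  # the two values coincide
```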

So this is essentially the way you can do branch and cut. You solve the linear relaxation, you have various classes of cuts, and you say: ooh, can I generate a cut that separates the optimal solution of the linear relaxation? You solve the separation problem and find these cuts on demand.

So, before we move to the TSP, which is going to involve a lot of notation, I want to go into a little bit of history here.

We're going to talk about the Seven Bridges of Königsberg. This is the city where the great mathematician Euler was living, and it is a very interesting city because, as the title indicates, it has seven bridges. Let's zoom in: you see the river in blue, and you see all these bridges in red. And what Euler asked, I think this is how he was spending his free time, is: can I find a tour which walks over every one of these seven bridges exactly once? This, for instance, is a tour that goes over five of the bridges. We can cross five of them exactly once; can we cross all seven exactly once?

And so what Euler did, in one of the founding uses of graph theory, is represent this with a beautiful picture.

He looked at that picture and said: this is really a graph. I can associate a node with every one of these land masses, and the edges between these vertices are going to be the bridges. So you see here, for instance, that I have two bridges between these two land masses. And the question was: can I have a tour that visits every one of the edges exactly once?

Euler looked at this problem and said: no, I can prove that this is impossible, you don't have to keep trying forever. And the way you actually prove it is by looking at every one of these vertices. When you walk from one vertex to another, as soon as you enter a vertex, you have to leave it, right? So that basically means that if you go over all the bridges, the degree of every vertex has to be even; otherwise, at some point you will arrive at a vertex and there will be no bridge left to get out of it. And so he could prove that in this particular case you could never walk exactly once over these seven bridges, because all of these vertices have an odd degree.
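Euler's degree argument is easy to check mechanically. The bridge list below is the usual encoding of the four land masses and seven bridges; the letter names are my own labels:

```python
from collections import Counter

# The Königsberg multigraph: A = north bank, B = south bank,
# C = Kneiphof island, D = eastern land mass; seven bridges as edges.
bridges = [("A", "C"), ("A", "C"), ("B", "C"), ("B", "C"),
           ("A", "D"), ("B", "D"), ("C", "D")]

degree = Counter()
for u, v in bridges:
    degree[u] += 1
    degree[v] += 1

# A closed walk using every edge exactly once exists (in a connected graph)
# only if every vertex has even degree.
odd = [v for v, d in degree.items() if d % 2 == 1]
print(dict(degree), odd)  # every one of the four vertices has odd degree
```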

So, in a sense, he invented these degree constraints, and they are going to be very useful for our next topic, which is the traveling salesman problem.

Now, the traveling salesman problem can be seen as, well, it's not really the dual in a mathematical sense, but it's basically the opposite of Euler's problem: what we want to do here is visit all of the vertices exactly once, not the edges. And there is a big difference between the Euler tour and the traveling salesman problem: one is easy to solve, the other one is very hard. So in the TSP, we want to find the tour which visits every one of these vertices exactly once.

And the first thing that we have to do now is try to model this as a MIP. Right? When we want to do branch and cut, we model the problem as a MIP: we have to choose the decision variables, the constraints, the objective function. Now, there are many models to do that, and we're going to use one which is very, very good from a computational standpoint. But you'll see, we have to build it step by step. It's going to be kind of nice.

The decision variables: I'm going to decide whether an edge is part of the tour or not. So we look at every edge and associate with it a binary decision variable, depending on whether that particular edge is part of an optimal tour or not. And the constraints are going to be degree constraints. We know that if we go into a vertex, we have to get out of it, and since we visit each vertex exactly once, it's not just that the degree has to be even, it has to be exactly two: you go into the vertex and you get out. So if you look at a vertex, exactly two of its adjacent edges have to be set to one. This is the basis of the first MIP model.

So, I told you I'm going to kill you with notation now, so I'll try to go slowly, so that you can memorize everything. The decision variables are easy: for every edge e, you have a variable xe, which is 0 if you don't take the edge in the solution and 1 if you take it. V is always going to be the set of vertices, and E is always going to be the set of edges.

Then the next three notations are important. I'm going to use delta(v) for the set of edges which are adjacent to vertex v: I look at the vertex, and I take all the edges which have v as one of their endpoints; that's delta(v). So when you see delta(v), it's the edges adjacent to v. So far so good?

Now, delta(S), for a set of vertices S, is a little bit more complicated: it is the set of edges which have exactly one of their endpoints inside S. Essentially, these edges are adjacent to the set S: each of them connects a vertex of S to a vertex which is not in S. So that's delta(S). Delta(v) is the set of edges adjacent to v; delta(S) is the set of edges adjacent to the set S, meeting it in exactly one endpoint.

And then we have gamma(S), which is the set of edges whose endpoints are both inside S: you look at this set of vertices, and you take the edges between them; you don't go outside. These are the only notations that we're going to use.
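These three notations can be sketched directly; the toy graph below is an assumption for illustration only:

```python
# A toy graph with edges stored as frozensets of their two endpoints.
edges = [frozenset(e) for e in [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]]

def delta_v(v):
    # delta(v): edges with v as one of their endpoints.
    return [e for e in edges if v in e]

def delta_S(S):
    # delta(S): edges with exactly one endpoint inside S.
    return [e for e in edges if len(e & S) == 1]

def gamma_S(S):
    # gamma(S): edges with both endpoints inside S.
    return [e for e in edges if len(e & S) == 2]

S = {0, 1, 2}
print(len(delta_v(0)), len(delta_S(S)), len(gamma_S(S)))  # 3 2 3
```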

Then we have one more shorthand: we write x of a set of edges to denote the sum of the variables associated with every one of these edges. So when I write x({e1, e2, e3, ...}), that means x for the first edge, plus x for the second edge, plus x for the third edge, and so on.

Okay, so once we have this notation, here is a first MIP model for the TSP. What we do is minimize the length of the tour: we take the cost of every edge and multiply it by the binary variable telling you whether you take the edge or not; that's minimizing the cost. What you see below is the flow conservation, or the degree constraints, which are basically telling you that every time you go into a vertex, you have to get out: if you take the set of edges adjacent to vertex v, the sum of their variables has to be equal to 2. Take the adjacent edges, take the sum of the binary variables associated with these edges: it has to be equal to two.

And then, obviously, the variables have to be binary. Okay, so you take a traveling salesman problem, use that MIP formulation, and this is one of the things that can happen to you: you get a solution like this. This is usually not a tour; what you get is a bunch of subtours, tiny little tours.
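A minimal sketch of why this happens, assuming two triangles on six cities (my own toy instance): the degree constraints are all satisfied, yet the selected edges do not form a single tour.

```python
# Two disjoint triangles: every vertex meets exactly two selected edges,
# so every degree constraint holds...
selected = [(0, 1), (1, 2), (2, 0), (3, 4), (4, 5), (5, 3)]

degree = {v: 0 for v in range(6)}
for u, v in selected:
    degree[u] += 1
    degree[v] += 1
assert all(d == 2 for d in degree.values())

# ...yet a depth-first search from city 0 never reaches cities 3, 4, 5:
# the solution is two subtours, not one tour.
adj = {v: [] for v in range(6)}
for u, v in selected:
    adj[u].append(v)
    adj[v].append(u)
seen, stack = {0}, [0]
while stack:
    for w in adj[stack.pop()]:
        if w not in seen:
            seen.add(w)
            stack.append(w)
print(seen)  # {0, 1, 2}: a subtour
```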

So you have this one here: this vertex going into that vertex, going into that vertex, and then going back to the first vertex of the subtour, instead of going on to something else. And then you have another subtour, a third subtour, and so on. And obviously, this is very clever, right? Because you don't have to travel the long distances between these clusters. But that's terrible; we want to outlaw it.

So what we want to do is look at a particular subtour like this one and say: ooh, can I forbid this from being a subtour? If you look at it, it's three vertices, and you ask yourself the question: how many edges can actually be selected between these vertices? Obviously you can't have three, otherwise you have a subtour. So the maximum number of edges inside a set of n vertices is going to be n minus 1; in this particular case, it's going to be two. And these are called the subtour constraints.

So what I'm showing you here is a better formulation for the TSP, where we add these subtour constraints. What we are basically saying is: take a particular set of vertices S, and then take all the edges which are inside that set; that's gamma(S). Remember, gamma(S) is all the edges between the vertices of S. The sum of the Boolean variables associated with these edges has to be smaller than or equal to the cardinality of S minus one. You can have at most |S| minus one edges inside S, because you have to be able to get out of there. That's what this is saying. So these are the subtour constraints: you look at every subset of vertices, and you post this particular constraint.

But what is the issue with these subtour constraints? Well, once again, there are exponentially many of them. So you don't want to write them all down, right? Because that's going to take kind of long. What you want to do is exactly what we did with the cover cuts: you want to generate them on demand.
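For an integer solution, separating subtour constraints is just a connectivity check. A sketch, using a component-based approach (my own helper, not the lecture's slide): each connected component S of the selected edges that is not the whole vertex set yields a violated constraint x(gamma(S)) <= |S| - 1.

```python
def separate_subtours(n, selected):
    """Return vertex sets whose subtour constraint is violated."""
    adj = {v: [] for v in range(n)}
    for u, v in selected:
        adj[u].append(v)
        adj[v].append(u)
    unseen, cuts = set(range(n)), []
    while unseen:
        # Grow one connected component of the selected edges.
        comp, stack = set(), [next(iter(unseen))]
        while stack:
            v = stack.pop()
            if v in comp:
                continue
            comp.add(v)
            stack.extend(adj[v])
        unseen -= comp
        if len(comp) < n:
            inside = [(u, v) for u, v in selected
                      if u in comp and v in comp]     # gamma(comp)
            if len(inside) > len(comp) - 1:           # violated subtour constraint
                cuts.append(comp)
    return cuts

# Two triangles again: each one triggers a subtour cut.
print(separate_subtours(6, [(0, 1), (1, 2), (2, 0), (3, 4), (4, 5), (5, 3)]))
```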

Many of them you will never generate. Why? Because, you know, if you do a tour of the US, you never connect Orlando and Seattle and then San Francisco into a little loop. So generating a subtour constraint for that set is completely useless; the relaxation is never going to do that. What you want to do is look at these subtour constraints and see which ones are violated by the linear relaxation: you want to generate them on demand. And to do that, you need a separation problem. So how do you do the separation? I'm going to give you the intuition, I won't go into the details, but it is beautiful.

We can re-express the subtour constraint by saying that the set of edges adjacent to S, the edges with exactly one endpoint inside S, has to carry weight at least two. And this is what you see: I take a set S, I look at delta(S), and I'm basically saying that the sum of the variables of the edges with one endpoint in S and the other endpoint outside has to be greater than or equal to 2. Which means: I have to get in, and then I have to get out. You can re-express it that way, and once you do, the separation problem becomes a really nice combinatorial problem that can be solved in polynomial time. The key idea here is that you build a graph.

It has the same set of vertices and the same set of edges, and for every one of these edges, you assign a weight: the weight of the edge is going to be the value of its decision variable in the linear relaxation. So you have the same graph, but now the edges get weights, which are essentially the LP values. And now, to find a violated subtour constraint, what you are going to do is find a minimum cut in that graph. That means cutting the graph into two parts so as to minimize the weight of the edges between the two subsets. And if the weight of that cut is smaller than two, you have a problem, right? You are violating the subtour constraint: you have isolated a set of vertices whose subtour constraint is violated, and you can add that constraint to the linear relaxation. And finding these cuts is polynomial time.
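On a tiny instance, the min-cut separation can be brute-forced; this is a sketch (real solvers use max-flow/min-cut or faster algorithms), with the integer two-subtour solution from before standing in for the LP values:

```python
from itertools import combinations

# Edges weighted by their LP values; here the "LP solution" is the integer
# two-triangle solution, so the cut around either triangle has weight 0 < 2.
nodes = range(6)
weight = {frozenset(e): 1.0
          for e in [(0, 1), (1, 2), (2, 0), (3, 4), (4, 5), (5, 3)]}

def cut_weight(S):
    # Total weight of edges with exactly one endpoint in S, i.e. x(delta(S)).
    return sum(w for e, w in weight.items() if len(e & S) == 1)

best_w, best_S = float("inf"), None
for r in range(1, len(nodes)):
    for S in combinations(nodes, r):
        w = cut_weight(set(S))
        if w < best_w:
            best_w, best_S = w, set(S)

if best_w < 2:
    # x(delta(S)) >= 2 is violated: emit the subtour cut for best_S.
    print("violated subtour constraint for S =", best_S)
```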

The simplest algorithm solves a bunch of max-flow/min-cut problems and generates the cuts from them, but there are better algorithms in practice. Okay, so now you know how to separate these subtour constraints, and you can do exactly this: add them until you can't find any subtour constraint which is violated.

Does that mean that at that point you have a solution to the TSP? No. Why? Here is a solution which satisfies all the subtour constraints and all the degree constraints. You can't imagine how long I took to generate this. It does not violate any subtour constraint, but it is a fractional solution, and there are actually cycles in it. Look at this part: you get a cycle here, where all the values are one half. When you sum them, the sum is 1.5, which is smaller than or equal to 2. So it's fine, you are not violating any of the subtour constraints, but you still have a cycle, and you have fractional values there.
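The slide's picture isn't available in the transcript, so as an assumed stand-in, here is a standard fractional point of the subtour relaxation with the same character: two triangles of weight-1/2 edges joined by three weight-1 edges. Every vertex has degree weight 2, every cut has weight at least 2, yet the solution is fractional.

```python
from itertools import combinations

# Two triangles of half-weight edges, joined by a weight-1 matching.
weight = {}
for tri in [(0, 1, 2), (3, 4, 5)]:
    for e in combinations(tri, 2):
        weight[frozenset(e)] = 0.5
for e in [(0, 3), (1, 4), (2, 5)]:
    weight[frozenset(e)] = 1.0

def cut_weight(S):
    # x(delta(S)): weight of edges with exactly one endpoint in S.
    return sum(w for e, w in weight.items() if len(e & S) == 1)

min_cut = min(cut_weight(set(S))
              for r in range(1, 6)
              for S in combinations(range(6), r))
print(min_cut)  # 2.0: no subtour constraint is violated, despite the halves
```

So min-cut separation finds nothing to cut, which is exactly why further cut families are needed.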

So what does that mean? It means that the convex hull of the TSP is not characterized by the subtour formulation that I've shown you. We need more cuts. And, you know, it's a whole industry to actually generate very good cuts for the TSP, and there are some beautiful results in that space. So I'm going to give you one class which is very useful in practice, called comb cuts, comb, you know, like combing.

The basic intuition here is that if you look at a tour and draw a kind of line through it, and wonder how many of the tour's edges intersect it, you are always going to intersect an even number of them. Whatever you do, it's always going to be an even number. So now you can start reasoning about these crossings, and that's what comb cuts are doing. People in optimization are really creative in finding good cuts.

Okay, so what is a comb? It is something that has a handle and a bunch of teeth. And then you have nodes, where you have edges which are purely between elements of the handle, edges which are purely inside the teeth, and edges between the handle and the teeth. And the key idea is that what the comb is going to do is bound the number of edges that you can have purely inside the handle, because that