
Okay, so this is Discrete Optimization: Mixed Integer Programming. We'll talk about MIP. This is one of these magical tools that we have in optimization, so it's going to be very different from what you have seen so far. It's a tool that, when it works, is really amazing. Sometimes it works, sometimes it doesn't, but you will see it's really a magical tool.

So what I'm going to do in this lecture is give you a basic introduction to Mixed Integer Programming, okay? We call them MIPs; I'll talk about MIPs all the time today. And then we'll talk about branch and bound, which is the basic technique to actually solve them. This is a basic introduction; it's a big field. What I want to give you is a feeling for what this is about, okay?

So, this is an integer program. What you have to think is that it is like a linear program, okay? The only difference is that the variables here all have to be integers. So, we still have an objective function that we are minimizing, and we have n variables and m constraints. You see the n variables, you see the m constraints. All these constraints are inequalities, and the variables are constrained to be non-negative, okay? And then we have these additional constraints that the variables have to take integer values. And this difference makes a huge difference in practice, of course.

We talk about a mixed integer program when not all the variables have to be integers: some are and some aren't, okay? That's essentially a mixed integer program. So we have some variables that are like in linear programming, they can take rational values, real numbers, and then some of them are going to be integers. That's why we call it mixed: a mixture of the two kinds of variables.

Once again, a big thing to mention here is that the constraints are linear, they are inequalities, and of course we're minimizing. All the techniques that I've shown you for linear programming still work, right? Maximizing is easy to do: you minimize the negation. Rewriting inequalities as equations, and equations as inequalities, is done exactly like we did for linear programs. And if you want a variable which can take negative values, you can always write it as a difference of two non-negative variables, as we have done before. So, mostly the same techniques, but what is key is that the constraints and the objective function are linear, and now some of the variables can be integers. Okay?

So, the difference between having integer variables or not having them is big, right? That's the gap between being polynomial and NP-complete, okay? This is a huge difference: as soon as you introduce these integer variables, the entire complexity of the problem changes. And for most people in industry, this is sometimes surprising. They say, oh, but it should be easier, there are fewer integers than there are real numbers. But no: having integers is what makes this problem really complex.

Okay? So, this is an example that we have seen before, right? The knapsack example. This is a mixed integer program. We have a linear objective function, and the variables xi are constrained to be 0 or 1: they have to take integer values, you take the item or you don't, it's not a fractional value. And of course we have this linear capacity inequality.
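To make this concrete, here is a minimal sketch in Python (not from the lecture, and the item data is just an illustration) of the knapsack as a MIP: 0/1 decision variables, a linear objective, and one linear capacity constraint. Because the variables are binary, a tiny instance can even be solved by enumerating every assignment:

```python
from itertools import product

# Hypothetical knapsack instance: values v_i, weights w_i, capacity K.
values = [45, 48, 35]
weights = [5, 8, 3]
capacity = 10

best_value, best_x = 0, None
# Enumerate every 0/1 assignment of the decision variables x_i.
for x in product([0, 1], repeat=len(values)):
    weight = sum(wi * xi for wi, xi in zip(weights, x))
    if weight <= capacity:  # the linear capacity constraint
        value = sum(vi * xi for vi, xi in zip(values, x))
        if value > best_value:
            best_value, best_x = value, x

print(best_value, best_x)  # 80 (1, 0, 1)
```

Brute force works here, of course, only because there are 3 items; with n items there are 2^n assignments, which is exactly why we need the techniques in this lecture.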

So in a sense, that's the simplest mixed integer program we have, okay? What we're going to do now is look at warehouse location; we saw this problem before when we did local search. And we're going to develop the MIP model for it. So I'm going to go over what the problem is again, and then we'll see how we design the MIP model. We'll actually come back to this problem several times, okay?

So, what we have is a set of locations where we can put a warehouse. When we put a warehouse in a particular location, we say that warehouse is open; if we don't put one there, it's closed, okay? And then we have a bunch of customers all over the place. The goal is to choose which warehouses to open so that you can serve all the customers, and every one of the customers will be allocated to exactly one warehouse. Now, what you want to minimize is the cost of opening these warehouses and maintaining them, that's the fixed cost for every warehouse, plus the transportation costs from the customers to the warehouses. You want to minimize the sum of these two things, okay? So that's the problem.

So, this is one solution, where you open one warehouse and serve all the customers with that warehouse, okay? This is another solution, where we open two warehouses and the customers are served in the way depicted here. Once again, I want to stress that every customer is assigned to exactly one warehouse. And this is yet another solution with two warehouses and a different customer allocation. Okay?

So, now the question is: can we find a MIP model for warehouse location? And once again, like with constraint programming, there are only three things we have to do: find the decision variables; express the constraints, which have to be linear because we are in a MIP context, in terms of these decision variables; and then express the objective function, which also has to be linear. Okay? So, can we find that for warehouse location? Obviously the answer is yes, otherwise I wouldn't be talking about this problem, right?

So, what are the decision variables? In this particular case we are going to use two types of decision variables. The first type is associated with every warehouse, because this is what we want at the end: we want to know which warehouses to open, how many and where. So, we'll have a decision variable xw for every warehouse w, and that variable is going to be 1 if the warehouse is open, meaning we build the warehouse at that location, and 0 if we don't build it there, okay? That's the first type of variable: it tells us where to build these warehouses, and how many.

Now, we need a second thing, right? For every customer, we need to know which warehouse it's going to be allocated to. So we're going to use a second set of variables, which we call the y variables here. And ywc is going to be equal to 1 if customer c is served by warehouse w, and 0 otherwise; 0 means warehouse w doesn't serve customer c. Okay? So, once again, two types of variables. Whether we open a warehouse or not, that's the x variables. And the y variables tell me, for a pair of warehouse and customer, are these two connected? Is this warehouse serving that customer? Okay?

So, now that we have the decision variables, the only thing we have to do is express the constraints. What are the constraints in this particular setting? Think about it. There is one constraint which is really important, that we don't want to miss: we can serve a customer with a warehouse only if that warehouse is open. Right? We don't want to connect a customer to a warehouse that doesn't exist. It's a very simple constraint, but it's very important; otherwise we wouldn't have a valid solution. And the second one is that we want to make sure that each customer is served by exactly one warehouse. We have to make sure that the customer is served, but it can be served by only one warehouse. That's the second type of constraint we have to express. Okay?

Now, we have to express these constraints as linear constraints, and that's what this slide is showing you. The first constraint, telling you that a warehouse has to be open to serve a customer, is a simple inequality relating the two types of variables: we are saying that ywc has to be smaller than or equal to xw. What does that mean? If we close warehouse w, that is, we don't build the warehouse, then xw is 0, right? And if xw is 0, then ywc is forced to be 0: we can't serve that customer with that warehouse. That's part of what we want. And if xw is 1, which means the warehouse is open, we can use it. That doesn't mean we have to allocate customer c to that warehouse, but we can: the variable ywc is free, it can take value 0 or value 1. That's the meaning of that constraint, okay?

So, in a sense, we are encoding the relationship between these decisions using an inequality. And this works well because these two variables take 0/1 values; we'll come back to the fact that in a MIP, a lot of the decision variables are going to be 0/1.

The second constraint that you see there is telling us that a customer should be served by exactly one warehouse. What we're going to do is look at one customer, look at all the possible warehouses, and make sure that the sum of the variables denoting whether each warehouse serves that customer is equal to 1. That's what you see there: a sum over all the warehouses of the variables ywc, each of which says, is w actually serving customer c? The sum of all these has to be 1. Once again, these variables are 0/1, so that means exactly one of these warehouses is going to serve that customer. Okay?

So, beautiful, we have a model, if we can express the objective function. The objective function is the sum of the fixed costs and the transportation costs, and it's reasonably easy to state. This is the fixed part: the sum, over all the warehouses, of the fixed cost of the warehouse times whether the warehouse is open or not. If the warehouse is closed, there is no cost; if the warehouse is open, we incur a cost of cw. Okay?

Â 9:34

And then the second aspect here is the transportation cost. We look at the pair (w, c): if w is serving c, the variable ywc is going to be 1, so we incur the transportation cost twc. And of course, if the variable ywc is 0, then no cost is incurred: these two are not connected. So, these are the transportation costs. Fixed costs, transportation costs: we have the objective function, and we have the entire model. This is the entire MIP model for warehouse location. Okay?

Objective function: fixed cost plus variable cost. Then we have the constraints: you can serve a customer with a warehouse only if the warehouse is open, and each customer has to be served exactly once, so when you look at all the warehouses, exactly one of them should serve that customer. And then these two types of variables have to be 0 or 1: we open a warehouse or we don't, warehouse w serves customer c or it doesn't. And that's the entire model. Okay?
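As a sanity check on the model, here is a small Python sketch (the costs and the candidate solution are invented for illustration) that evaluates an assignment against exactly the pieces above: it checks ywc &lt;= xw for every pair, checks that each customer is served exactly once, and computes the fixed-plus-transportation objective:

```python
# Hypothetical instance: 2 warehouses, 3 customers.
c = [100, 80]        # fixed cost c_w of opening warehouse w
t = [[2, 7, 5],      # transportation cost t_wc, rows = warehouses
     [6, 3, 4]]

# A candidate solution: open warehouse 0 only, serve everyone from it.
x = [1, 0]           # x_w = 1 iff warehouse w is open
y = [[1, 1, 1],      # y_wc = 1 iff warehouse w serves customer c
     [0, 0, 0]]

def feasible(x, y):
    W, C = len(x), len(y[0])
    # y_wc <= x_w: only open warehouses may serve customers.
    if any(y[w][cu] > x[w] for w in range(W) for cu in range(C)):
        return False
    # sum over w of y_wc == 1: every customer is served exactly once.
    return all(sum(y[w][cu] for w in range(W)) == 1 for cu in range(C))

def objective(x, y):
    W, C = len(x), len(y[0])
    fixed = sum(c[w] * x[w] for w in range(W))
    transport = sum(t[w][cu] * y[w][cu] for w in range(W) for cu in range(C))
    return fixed + transport

print(feasible(x, y), objective(x, y))  # True 114
```

A MIP solver would search over all 0/1 choices of x and y; this sketch only verifies one candidate, which is a useful way to convince yourself the linear constraints say what you intend.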

So, it's interesting, when you look at this model, that the variables here are all 0/1. I'm going to come back to that, but we could have another natural model. Instead of having all these 0/1 variables ywc, one per warehouse, why don't I simply have a variable yc which denotes the warehouse that serves customer c? Now, this is a question for you, okay? Try to build a MIP model using yc as the decision variable, and see what the challenges are. It's a very interesting exercise to see what is going to be tough if you use those decision variables instead of the 0/1 variables that I showed you, okay?

The thing to remember is that, in general, MIP models love 0/1 variables. You'll see a huge number of 0/1 variables in MIP models, and typically very few general integer variables: the integer variables are typically going to be 0/1. It's like MIP models like decisions which are yes or no, okay? Very different from constraint programming models in that sense. But the good point is that once you have the 0/1 variables, expressing linear constraints is typically very simple. In a sense, you use these very simple 0/1 variables, and then all the constraints are simple to state in general. Sometimes they have many indices, but they are easy to state, okay?

Now, that doesn't mean that modeling in MIP is easy, and we'll see some of these issues later on. There are still many, many possible models you can design: what are the basic decision variables you are going to use, what are the constraints, what is the objective? Everything that we said for constraint programming is still valid: finding a good model is not necessarily easy, okay? And we'll see some of these things in a moment, actually, or in the next lecture.

Okay, so now we have a beautiful MIP model; how do we solve it? This has been an active research area for many, many decades, and it's still a very active, fascinating research area. One of the basic techniques to solve MIP problems is branch and bound. Remember branch and bound: we saw it in the first or second lecture, with bounding and branching. Bounding is finding an optimistic evaluation of the objective function, an optimistic relaxation: we love relaxing, right? And branching is basically splitting the problem into subproblems, and then applying the same algorithm on the subproblems.

Okay? So, those are the two basic steps. Now, what is nice about MIP models is that they have a very, very natural relaxation: linear programming. That's why we talked about linear programming so much. We have this linear relaxation that we can use as the bounding part of branch and bound for MIPs. The key idea is that you remove the integrality constraints, and you have a relaxation. That's the only thing you do, right? You look at this big MIP model; if you remove the integrality constraints, what do you have? You have a linear program. You can solve that very fast, and that's what we're going to do. So, the bounding part, the optimistic relaxation, is basically relaxing the integrality constraints: you enlarge the set of values the variables can take, so it's a relaxation. Okay?

So, basically, branch and bound for MIP models is going to solve the linear relaxation. At a particular node, you're going to say, I want to solve the linear relaxation. And if the linear relaxation is worse than the best solution you have found so far, you can prune that node away, okay? It means the subproblem associated with that node is provably suboptimal, so you can get rid of it. If not, you look at this linear relaxation more closely, to see whether it gives integer values to all the variables, which can happen, right? If that happens, it means that over this larger space of solutions, the linear programming relaxation happens to produce an integer value for all of these variables, and that is obviously a solution to the MIP problem at that node. So, essentially, you have found a feasible solution, and you can update the best solution found so far, okay? Now, if neither of these two things applies, then you need to branch. And the interesting thing about MIP models is that, in general, they use the linear relaxation to drive the branching. Okay?

There are many ways to do this, so I'm just going to mention one here. But this is also a very active area: how you branch, what information you use, do you use the objective function, do you use the constraints, and so on. This is the simplest scheme using the linear relaxation. You look at the relaxation, and you know that some of the integer variables have a fractional value when they should not. Okay?

So, what you're going to do is take one of those, say a variable x which has a fractional value f, and create two subproblems. One of them says that x has to be smaller than or equal to the floor of f, the largest integer which is smaller than f. The other subproblem says that x has to be greater than or equal to the ceiling of f, the smallest integer which is larger than f. These are two subproblems: you add the corresponding constraint to the set of constraints you have, and then you repeat the branch and bound.
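That branching step can be written down directly. Here is a tiny sketch (the constraint representation as tuples is just an illustration, not a solver API): given a variable whose relaxation value f is fractional, produce the extra constraint for each of the two subproblems:

```python
import math

def branch(var_name, f):
    """Split on a variable whose LP-relaxation value f is fractional:
    the left child adds x <= floor(f), the right child adds x >= ceil(f)."""
    assert f != int(f), "only branch on fractional values"
    left = (var_name, "<=", math.floor(f))   # e.g. x <= 0
    right = (var_name, ">=", math.ceil(f))   # e.g. x >= 1
    return left, right

print(branch("x2", 0.25))  # (('x2', '<=', 0), ('x2', '>=', 1))
```

For a 0/1 variable, any fractional f strictly between 0 and 1 gives exactly the two branches "don't take it" and "take it", which is what we'll see in the knapsack example.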

Okay? So, the key point here: if you remember constraint programming, constraint programming was obsessed with constraints, and was basically branching using feasibility information. Here we are obsessed with the objective. We compute this, hopefully good, relaxation of the objective, an optimistic evaluation of the objective. And then we use this linear relaxation, which is like, ooh, a very good solution, except it's not feasible, and we try to use it for branching. So, in a sense, we are branching using optimality information, okay?

So, summarizing what I just said: we are basically focusing on the objective here. Once again, I'm exaggerating; of course a MIP system in practice will also use the constraints, but it's good to exaggerate to drive the point home. We focus on the objective, and the relaxation gives you an optimistic bound on the best value that the MIP can ever get. The pruning is based on the objective and suboptimality: when you detect that the linear relaxation is not better than the best solution you've found so far, you just prune that node away, okay? And the relaxation is obtained by relaxing the integrality constraints, while keeping a global view of all the other constraints. In constraint programming, the constraints were acting independently. Here we relax the integrality constraints, but we look at the remaining constraints globally: we have this linear program, which looks at all the constraints together, having only removed the integrality constraints.

Okay? So, for the knapsack, in a sense, when you look at the relaxation: this is basically the MIP model for the knapsack, and the relaxation only touches the 0/1 constraints that you see there, replacing them by the statement that each variable doesn't have to take the value 0 or 1, it can take any value between 0 and 1. That's the linear relaxation, and we can do that for any MIP model, right? We just remove the integrality constraints. Now, in the case of the knapsack, the linear relaxation is the same as the greedy approach that I've shown you: order the items by value per unit of weight, and select them in that order until you get to an item that only fits fractionally; take that fraction, and that's the relaxation. That's what the linear programming relaxation is going to do. I'm going to talk a lot about this linear programming relaxation in the next couple of classes. It's a beast, and you'll see it has very interesting behavior.
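That greedy computation of the knapsack relaxation can be sketched in a few lines of Python (this is the standard fractional-knapsack argument, not code from the lecture):

```python
def knapsack_relaxation(values, weights, capacity):
    """LP bound for the 0/1 knapsack: greedily take items by value/weight
    ratio, then a fraction of the first item that doesn't fit."""
    order = sorted(range(len(values)),
                   key=lambda i: values[i] / weights[i], reverse=True)
    bound, room = 0.0, capacity
    for i in order:
        if weights[i] <= room:      # item fits entirely: take it whole
            bound += values[i]
            room -= weights[i]
        else:                       # take only the fraction that fits
            bound += values[i] * room / weights[i]
            break
    return bound

# The lecture's instance: the relaxation takes items 3 and 1 whole,
# plus a quarter of item 2, for a bound of 45 + 35 + 48/4.
print(knapsack_relaxation([45, 48, 35], [5, 8, 3], 10))  # 92.0
```

Note that at most one item ends up fractional, which is exactly why the branching in the next slides always has a single fractional variable to split on.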

Okay? Now, when we branch, we branch on a fractional value, and in this particular case of the knapsack, the fractional variable is going to be the item with the largest remaining ratio that we can't actually fit. We fit all of the items which are more valuable per unit of weight, the better ratios, and then we look at the last item that we would want to put into the knapsack, of which we could only take a fraction. That's the item we're going to branch on, because it is the only item with a fractional value, okay?

Now, here is a question for you to think about at this point. These are 0/1 variables, right? So when we round the fractional value downwards or upwards, what we get is essentially 0 on one side and 1 on the other: we don't take the item, or we take the item. So, when we don't take the item, can you figure out what the linear relaxation is going to do? A lot of the game in MIP is figuring out what the linear relaxation is going to do. And it's very capricious, you'll see.

Okay? So, in a sense, what I'm asking you is: if we don't take the item, what's going to happen? Or if we take the item, what's going to happen to this linear relaxation? Can you figure that out? The question is: which value is going to become fractional if you don't take the item? Think about it. I decide not to take the item. All the items that were more valuable per unit of weight: what happens to them? Well, you're still going to take them, and then you're going to get a fractional value for one (or more) of the items later in the ratio order. That's what happens if you don't take the item.

If you take the item, what happens? Ooh, before, you couldn't put this item in the knapsack; now you've decided, I want to take that item. That means some of the items that came before, you won't be able to put inside the knapsack, and so an item that was more valuable per unit of weight is now going to become fractional, okay? This is the kind of reasoning you sometimes need: thinking about what the linear relaxation is doing, because that will give you insight into how to actually improve your models, okay?

So, this is a very simple illustration, because this is a really, really simple example, but it's going to drive two points home, okay? And you'll see them. So, once again, we have this knapsack with three items. The value of item i is vi, the weight is wi, for items 1, 2, 3. The most valuable item per unit of weight is item 3: 35 divided by 3 is pretty good, above 10. Then 45 divided by 5 is 9, and 48 divided by 8 is 6. So item 3 has the best ratio, item 1 is next, and item 2 is the least valuable by the value-to-weight ratio. Okay?

So, at the top of the tree, at the root node, what you have is a value of 92, and that value of 92 is obtained by selecting items; well, we can let the linear relaxation do this, and this is what the linear relaxation does, okay? The linear relaxation takes value 1 for x1 and for x3: I'm taking item 1 and item 3, the two items with the best ratios. And then I take only a fraction of the second item, because I can't put the entire item in: the first two weigh 5 and 3, item 2 weighs 8, and I can only take a quarter of that guy. And that's the value you get: you sum 45 and 35, that's 80, and then a quarter of 48, that's 12, giving 92 as the value of the linear relaxation. So the linear relaxation gives you this solution and that particular value.

Okay? What are we going to do? I told you: we look at this solution and ask, ooh, which value is fractional? In this particular case, it's x2. So we're going to branch on x2 and create two subproblems: one where x2 is 0, and one where x2 is 1. What happens if x2 is 0? That's the node that you see over there. Now the linear relaxation is 80: you can select only items 1 and 3. Okay?

So, we solve the linear relaxation at that node, and what do we obtain? x1 equal to 1, x2 equal to 0, x3 equal to 1. The linear relaxation here finds an integral solution, so we know we don't have to branch anymore: this is the best we can possibly do at that node, it's all integral, so it's done, okay? In a sense, the linear relaxation is telling you when to stop branching. So we have found a solution and evaluated it, okay?

The other thing we wanted to do is choose x2 equal to 1. So we fix the value of x2 to 1, we solve the linear relaxation again, and it gives us a value of 71.33. You see that now the value of x3 is fractional; it was not fractional in the original relaxation, okay? So we would have to continue branching, but the value of the linear relaxation is 71.33, and the best solution I have found is 80, so I can actually prune that node away already. So, with one branching, essentially, I solved this problem optimally.

Okay? So, this is illustrating the various concepts here. At a particular node, you solve the linear relaxation. The value of the objective function of that linear program gives you an optimistic evaluation of what the MIP can do at that particular node. Then you look at the variables, you take a fractional variable to branch on, and you branch left and right based on that fractional value. Okay?
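Putting the pieces together, here is a compact depth-first sketch of branch and bound for the 0/1 knapsack (my own illustration, not the lecture's code): the fractional relaxation is the optimistic bound, and we branch by fixing the fractional item to 0 or 1:

```python
def relaxation(values, weights, capacity, fixed):
    """LP bound with some variables fixed to 0/1; returns (bound, frac_item)."""
    bound, room, frac = 0.0, capacity, None
    for i in fixed:                    # account for items already forced in
        if fixed[i] == 1:
            bound += values[i]
            room -= weights[i]
    if room < 0:
        return float("-inf"), None     # fixed choices already overflow
    order = sorted((i for i in range(len(values)) if i not in fixed),
                   key=lambda i: values[i] / weights[i], reverse=True)
    for i in order:
        if weights[i] <= room:
            bound += values[i]
            room -= weights[i]
        else:
            if room > 0:
                bound += values[i] * room / weights[i]
                frac = i               # the one item taken fractionally
            break
    return bound, frac

def branch_and_bound(values, weights, capacity):
    best = [0]

    def solve(fixed):
        bound, frac = relaxation(values, weights, capacity, fixed)
        if bound <= best[0]:           # prune: provably not better than best
            return
        if frac is None:               # relaxation is integral: a solution
            best[0] = int(round(bound))
            return
        for v in (0, 1):               # branch the fractional item to 0 or 1
            solve({**fixed, frac: v})

    solve({})
    return best[0]

print(branch_and_bound([45, 48, 35], [5, 8, 3], 10))  # 80
```

On the lecture's instance this explores exactly the tree shown on the slide: root bound 92, branch on item 2, the x2 = 0 child gives the integral solution 80, and the x2 = 1 child is pruned because its bound 71.33 is below 80.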

Now, one of the questions you have to think about is: when is branch and bound going to be effective? It is going to be really effective when the value of the linear relaxation is tight, that is, when the linear programming relaxation is very, very close to the optimal value of the MIP. Is that sufficient? Think about it. If I tell you, I guarantee that this linear relaxation is always within 0.5% of the best value of the MIP, is that sufficient for the MIP to behave nicely all the time? That's a question for you. I wish we had quizzes in this class; we don't, but anyway, think about it. Is it sufficient?

So, what is a good MIP model? A good MIP model is a model with a good linear relaxation. That's not necessarily sufficient; I just raised that question, right? But it is a necessary condition: you want a model with a very good linear relaxation. Why? Because you're going to prune the search space much better. Okay? Now, which variable should we branch on? We seem to have plenty of variables with fractional values; which one are we going to branch on?

Okay? One good heuristic is to branch on the most fractional variable. Why would we do that? Well, once again, let's exaggerate. Assume you have a variable which is not very fractional, say 0.99999999999, and it can take value 0 or 1. When it is 0.9999999 (I hope I used the same number of nines), that means it's really close to 1, and branching there probably doesn't make much sense. If a variable is 0.5, what does that mean? It means the linear relaxation is saying: hmm, maybe I should take this item, hmm, maybe I should not take this item, I don't really know. And so those are the type of variables that you need to branch on.
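The "most fractional" rule just picks the variable whose relaxation value is closest to 0.5, i.e. whose fractional part is farthest from both 0 and 1. A small sketch (the variable names and values are made up):

```python
def most_fractional(lp_values):
    """Pick the variable whose LP-relaxation value is closest to 0.5."""
    frac = lambda v: v - int(v)                 # fractional part, v >= 0
    candidates = {name: v for name, v in lp_values.items()
                  if frac(v) > 1e-9}            # skip already-integral values
    # score = distance of the fractional part from the nearest integer
    return max(candidates, key=lambda n: min(frac(candidates[n]),
                                             1 - frac(candidates[n])))

print(most_fractional({"x1": 1.0, "x2": 0.99, "x3": 0.5, "x4": 0.2}))  # x3
```

Here x2 at 0.99 is nearly decided, so the rule ignores it and branches on x3 at 0.5, where the relaxation is genuinely undecided.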

Okay? So, this is a basic introduction to MIP. Next time we are going to see really cute things on how you can actually build MIP models, and what it means to be a good MIP model. Okay? Come back.

Thank you.
