
Welcome back.

This is discrete optimization again, the knapsack problem. And as you can see, I have a new hat: this is the branch and bound hat. Something which is really useful, and going to be used over and over again in this particular class, okay. We're going to introduce branch and bound, and also the value of relaxation, okay? Two very important concepts in one lecture, but still the lecture will be short, you'll see.

Okay, so this is the one-dimensional knapsack. We have seen it many times; you're probably starting to get fed up with this particular knapsack, but you're going to see something beautiful today.

Okay, so remember when we talked about binary programming, we talked about exhaustive search, okay. In a sense we have these decision variables on top, and what we're trying to find out is the values that are possible for these decision variables. And what I've shown you during the dynamic programming lecture is that there are exponentially many of these guys in terms of the number of items, okay? Now, this is a way to actually build these configurations, okay.

You take the first item, and you have two decisions, okay: whether you take the item or whether you don't take the item, okay? And now you have these two possibilities, and for each of these two possibilities you're going to look at the second item, and you can decide to take the second item or not to take it. Once again, what you get now are four different configurations. And then you can do the same with the third item, okay, and you get the final eight configurations, okay. Every one of the configurations will tell you whether you choose item one or not, whether you choose item two or not, and item three or not. And obviously if I had four items we would have 16 of them, five items, 32, and this would grow, grow, grow very large.
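The exhaustive enumeration just described can be sketched in a few lines of code. This is a minimal illustration, not the course's implementation; the values and weights (45/5, 48/8, 35/3, capacity 10) are the ones the lecture uses later:

```python
from itertools import product

# Each configuration is a tuple of 0/1 decisions, one per item.
# With n items there are 2**n configurations, exactly the leaves of the tree.
items = [(45, 5), (48, 8), (35, 3)]   # (value, weight) pairs from the lecture
capacity = 10

best_value, best_config = 0, None
for config in product([0, 1], repeat=len(items)):
    weight = sum(x * w for x, (v, w) in zip(config, items))
    value = sum(x * v for x, (v, w) in zip(config, items))
    if weight <= capacity and value > best_value:
        best_value, best_config = value, config

print(best_value, best_config)   # 80 (1, 0, 1): items one and three
```

With three items this loop visits all eight configurations; the point of branch and bound is to avoid visiting most of them.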

What branch and bound is about is looking at this tree and trying to explore only a tiny part of it, and finding the optimal solution nevertheless, okay? So that's what we're going to do, and we're going to show you how to do that.

And the key idea is to iterate two steps, okay: branching and bounding. Branching is boring, right? It's like the exhaustive search, except later on I will tell you that there are smart ways to do this. But at this point it's completely boring. You know, it's like, okay, I take an item, and whether I take the item or not, that's what branching is going to be about, okay. It's like in the exhaustive search. But then bounding. Bounding is very different.

It's like finding an optimistic evaluation of what you can do, okay. So in optimization you have to be optimistic; in life as well, right, so we want you to be optimistic. In effect: what is the best that I could ever do, okay, if you are maximizing? Or, if you are minimizing, how low can my cost be, okay? So that's the kind of optimistic evaluation that we need for bounding, okay. And I'm going to show you how we can get this, okay.

So how do we find these things? We are going to basically take the problem and relax it. And optimization is the art of relaxation, okay? So imagine yourself completely zen, and you find out the best way you can actually relax these problems and make them easy to solve, okay? And get an optimistic evaluation of this thing. Okay, so once again this is the knapsack, and you ask yourself: okay, now, how do I relax this, okay?

And there are not that many things in this formulation, right? So you may say: oh, what I could relax is this constraint, okay? We're going to relax the capacity constraint, okay? And then, as soon as we do that, we can get an optimistic evaluation of this problem.
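As a sketch, relaxing the capacity constraint means pretending every item not yet rejected still fits, so the bound is just the accumulated value plus the values of all remaining items. A hypothetical helper (names are my own) with the lecture's values 45, 48, 35:

```python
def optimistic_estimate(values, decisions):
    """Bound from relaxing the capacity constraint: pretend every item
    that has not been rejected still fits, so the estimate is the
    accumulated value plus the value of every remaining item.
    decisions[i] is 1 (selected), 0 (rejected) or None (undecided)."""
    return sum(v for v, d in zip(values, decisions) if d != 0)

values = [45, 48, 35]                                    # lecture's item values
print(optimistic_estimate(values, [None, None, None]))   # 128 at the root
print(optimistic_estimate(values, [1, 0, None]))         # 80 after taking item 1, rejecting item 2
```

These are exactly the 128 and 80 that appear in the tree nodes below.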


So what you're going to see is these nodes in the tree that I'm going to build, okay? They have three entries, okay? They have the value that I have accumulated inside the knapsack. They have the space left in the knapsack. And then they have this optimistic evaluation, okay? So initially I have accumulated no value. I have the full space in my knapsack, 10 in this particular case. And then I have this optimistic evaluation, which is: oh, I can select everything, okay, so in this particular case it is 128. And now I start doing things, right?

I'm selecting the first item, okay. As soon as I do that, I accumulate the value of the first item, 45. I take 5 units of space in the knapsack. And my optimistic evaluation is still 128. Then I try to select item 2, but as you can see, item 2 has a size of 8, so I'm exceeding the capacity of the knapsack. That's not a feasible solution, okay, so I can't take item 2. And at that point my optimistic evaluation is reduced to 80, okay?

Now I can decide to take item three or not, okay? If I take item three, I get a value of 80, okay; this is items one and three. And if I don't take it, I get a value of 45, which is completely dominated by the previous solution I found. Okay, so I know that this is not an optimal solution, okay.

Now I go back to the next part of the tree. I decide not to select item 1, so my optimistic evaluation at this point is 83. It's still better than this guy, okay, 80, the best solution that I've found. So I keep going.

Okay, I select item 2. If I select item 2, I'm still at 83. Okay, then I decide if I select item 3 or not, okay. In the first case I get an infeasible solution. Otherwise I get the value 48, which is also dominated by the best solution that I have found. So, you know, this is why I put this cross over there.

Otherwise, if I decided not to take item two, okay, what I get is a configuration there whose optimistic evaluation is 35, okay? Now, that's terrible, right? It's worse than the best solution that I have found so far. So I can decide, without continuing anything, that this is not valuable, okay? I don't have to actually branch on item 3. I already know that the best I could do is worse than the best solution that I had. And this is the key idea behind bounding. Once you have this optimistic evaluation, if it's worse than the best solution you have found, you know that you don't have to explore whatever is below in the tree.
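The whole loop just described, branch on each item and prune when the optimistic evaluation cannot beat the best solution found, can be sketched as a depth-first search. A minimal illustration (function and variable names are my own), using the capacity-relaxation bound:

```python
def branch_and_bound(items, capacity):
    """Depth-first branch and bound for the 0/1 knapsack.
    `estimate` is the relaxation bound: value accumulated so far plus
    the value of every item not yet decided (capacity relaxed)."""
    best = [0]   # best feasible value found so far

    def explore(i, value, room, estimate):
        if room < 0:                 # infeasible branch: capacity exceeded
            return
        if estimate <= best[0]:      # bounding: cannot beat the incumbent
            return
        if i == len(items):          # leaf: a feasible configuration
            best[0] = max(best[0], value)
            return
        v, w = items[i]
        explore(i + 1, value + v, room - w, estimate)   # branch: take item i
        explore(i + 1, value, room, estimate - v)       # branch: skip item i

    explore(0, 0, capacity, sum(v for v, w in items))
    return best[0]

print(branch_and_bound([(45, 5), (48, 8), (35, 3)], 10))   # 80
```

On the lecture's instance this prunes the subtree whose estimate is 45 and the one whose estimate is 35, exactly the crosses in the slide.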

Now, this is an example which is terribly bad, all right? Because my optimistic evaluation is so optimistic that most of the time I'm going to basically explore the entire tree, okay? But you can see that sometimes we get a little bit of pruning, okay? What we have to do is find a better relaxation, so that we can prune a bigger part of this tree, okay?

So, look again at this knapsack example and think, okay? What else can I relax? Okay, I'll let you think for about, ten minutes I guess, okay? And then I'll tell you, okay? Can we relax something else? And you're going to see a beautiful slide, okay.

So we are going to assume that instead of packing artifacts, we are packing bars of Belgian chocolate, okay. Now, this is completely different, right? Because now, when you have Belgian chocolate, let me give you a picture, okay. This is so beautiful I'm already hungry here. Okay, and here I can tell you that Indiana Jones loved Belgian chocolate; that's the best chocolate in the world, by far, okay. So this is this bar of chocolate, okay. Now, what Indiana Jones can say is: mm, oh, I can't pile the whole thing into my knapsack, but I can break it into pieces and put it in my knapsack, right? So this is completely different, right? It's not "I take it or I don't take it"; I can take a piece of it, right? And so now, assume that my whole knapsack is built of artifacts that I can break, like, you know, Belgian chocolate, okay?

So then I get this beautiful relaxation, right? The only thing that I did was for the decision variables: they don't have to be zero-one. I take it, or I don't take it; black and white, yin and yang, right? What I'm doing here is saying: ooh, but I can take a fractional value of these particular decision variables. I can take a quarter of it, or I can take half of it, or something like this, right? And so I get this optimization problem here, which is almost the same as before, except that the values of the decision variables now can take any fraction between zero and one.
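Written out in the notation from the earlier lectures (values v_i, weights w_i, capacity K), the relaxed problem on the slide looks like this:

```latex
\begin{aligned}
\text{maximize}   \quad & \sum_{i=1}^{n} v_i \, x_i \\
\text{subject to} \quad & \sum_{i=1}^{n} w_i \, x_i \le K \\
                        & 0 \le x_i \le 1 \qquad (i = 1,\dots,n)
\end{aligned}
```

The only change from the 0/1 knapsack is that the constraint x_i ∈ {0, 1} becomes 0 ≤ x_i ≤ 1.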

Okay, and now you can ask: oh, but is that problem easy to solve? Well, actually this problem is called a linear relaxation. You're going to see that in this class all the time, okay, especially when we start looking at mathematical programming, linear programming, mixed integer programming. So this is a very, very important concept. This is called the linear relaxation of the knapsack. And it has a very important property: in practice it's easy to compute. I'm going to show a very easy way for the knapsack; in general it's a little bit more complicated, and we'll cover that as well, okay?

So the only thing that the linear relaxation is doing is taking every variable that has to be a natural number, an integer, and relaxing it to a fraction. That's what the linear relaxation is doing: basically relaxing the integrality constraints that you have in most of these models, okay? So, now, how can we solve this?

I'm going to show you a very simple way to do this, and it's going to be very close to some of the greedy algorithms that we have seen. So we are going to order the items by value per kilo.

Okay. So this is this vi over wi ratio that I've shown you before, okay. So basically the item with the largest ratio is essentially the item that is the most valuable per kilo, and when you look at chocolate, that's what you want, right, 'cause you can start breaking them up, okay?

So essentially you order the items by this ratio, and what you have to do is select all the items, okay, until the capacity is exhausted: putting in one more item would go over the capacity. And once you get there, you take that item and select a fraction of it so that you fill the knapsack. So in a sense: you order the items as in the greedy algorithm, you accumulate them, and if the next one is going to go over capacity, you take a fraction of it such that that particular fraction fills the knapsack. And this is essentially an optimal solution of this linear relaxation, okay.
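The greedy rule just stated can be sketched as follows (a minimal illustration with names of my own choosing, not the course's code):

```python
def linear_relaxation(items, capacity):
    """Optimal value of the fractional (linear) relaxation of the knapsack:
    sort items by value density v/w, take whole items greedily, then take
    the fraction of the next item that exactly fills the remaining room."""
    value, room = 0.0, capacity
    for v, w in sorted(items, key=lambda it: it[0] / it[1], reverse=True):
        if w <= room:
            value += v              # the whole item fits
            room -= w
        else:
            value += v * room / w   # fractional piece fills the knapsack
            break
    return value

print(linear_relaxation([(45, 5), (48, 8), (35, 3)], 10))   # 92.0
```

On the lecture's instance it takes item three and item one whole, then a quarter of item two, which is the 92 computed next.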

So look at this in this particular example, okay. You see item one, item two, item three, okay. The first item is 45 over 5, okay, and that gives you a value of 9 for that particular item. The second one is 48 over 8, okay, and that gives you a value of 6. And the third one? It's 35 over 3, and that gives you a value of around 11.7. So item number three is actually, per kilo, the most valuable item, and then item one, and then item two.

So what you would do here is: you would select item three, you would select item one, okay, and that leaves you two units of the capacity of the knapsack. And then you essentially take a quarter of item two, because the weight of item two is eight, okay. So you take a quarter of the value of this guy, which is going to give you a value of 12, and the overall value of the linear relaxation is going to be 92. So 92 in this particular case is an optimistic evaluation of the knapsack. You know for sure that you will never do better than 92.

Okay? And this is much better than the relaxation that we had before, right, where we would basically sum everything, okay, and get 128. So here we actually have a value which is pretty good, okay, in this particular case. So essentially the linear relaxation here is giving you a very good value for bounding this particular knapsack problem, okay?

Now, why is this correct? Here is a simple reformulation of this linear relaxation, okay? You introduce a new variable yi, which is basically the product of vi and xi, okay? And then you reformulate the knapsack, and what you see when you reformulate it is that all the items have the same value per unit, okay: a value of 1, okay. But what is interesting when you reformulate this problem is that not only are the values all the same, but the weight is basically the inverse of the density that I was talking about, okay. The most valuable items have the least weight in this reformulation. Okay.
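With the substitution yi = vi·xi (so that 0 ≤ yi ≤ vi), the reformulation just described can be written out, in the same notation as before:

```latex
\begin{aligned}
\text{maximize}   \quad & \sum_{i=1}^{n} y_i \\
\text{subject to} \quad & \sum_{i=1}^{n} \frac{w_i}{v_i}\, y_i \le K \\
                        & 0 \le y_i \le v_i \qquad (i = 1,\dots,n)
\end{aligned}
```

Every unit of yi is worth exactly 1 and weighs wi/vi, the inverse of the density, so filling the knapsack with the lightest units first is exactly the density-ordered greedy.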

So how do you solve it? Well, it's like the greedy algorithm that we have seen before: you take the smallest items first, and when you get stuck, you take a fractional value of the last one. That's why, in this particular case, the algorithm that I've shown you, select all the items until you get stuck and then take a fraction, is actually computing the linear relaxation.

Â But of course,

Â in practice, okay, the linear relaxation is not a solution to a problem.

Â Right?

Â Because we have this mask and there is no way we can break it,

Â and anyway, even if you could break it you would not want to break it.

Â So essentially we can't take a fraction part of this

Â mask right, we can't decide to just take the nose or

Â whatever right, so we have to do, we have to

Â use this linear relaxation, ok inside the branch and bound algorithm.

And that's what we're going to do, okay? So we start again at the root, okay? There is nowhere else to start. But we know that the optimistic evaluation here is 92, okay? Now we start; we go on the left, okay? We take item one: same evaluation, you know, less space in the knapsack. We take item two: this is not feasible, you know? This is essentially the same as before. We get this solution with 80; the other one, with 45, is dominated. But now you're going to see something really, really cute, okay. So you go on the right, okay; you don't select item 1. And what you see there now is that the optimistic evaluation is 77.

Â