0:00

Okay, welcome back. Discrete optimization, knapsack. So I want to go back to branch and bound, and talk a little bit about search strategies. I swept a few things under the rug, and I want to cover them during this lecture. And you're going to see, some of these things are going to be very interesting later on, okay? But you need to understand them. This is going to build on branch and bound, but refine, and actually expand, the kinds of things that you can do in branch and bound. Okay?

Remember, in branch and bound we are working with relaxations: we are using the linear relaxation in general, or relaxations in general, to compute good optimistic estimates of the best you can do. Okay? So once again, the key idea behind branch and bound is that you look at this exhaustive search tree, and you want to explore only a very tiny piece of it. Okay? And you do that with this basic relaxation idea. Okay?

Now, one of the things I didn't talk about last time is that you can search that tree in many different ways. So we saw depth-first branch and bound, and I will talk a little bit about it again to refresh your memory, but there are other things you can do. Some of them are called best-first search, or limited discrepancy search, and that's what I want to cover today.

Okay, I want to give you the idea that there is one more thing you need to consider when you solve a problem: it's also how you explore that particular tree. There are many other strategies, and you will see more in this class, okay? But at this point, I just want to give you some ideas of what you can do, and compare them, okay?

So, depth-first search: that's what we saw last time. You explore your tree in a depth-first fashion, and you prune the search tree as soon as you come back to a node whose optimistic evaluation is worse than the best solution you have found, okay? That's depth-first search, okay? Now we're going to see best-first search.

And the key idea there is that you select, at any stage, the node which has the best evaluation. Okay. So that's what best-first is. And we'll see limited discrepancy search, where what you do is trust a greedy heuristic: you follow that heuristic, and then allow yourself to move away from it in a very systematic fashion. So I'll go into the details of these two things, okay?

So let me just remind you of depth-first search first, okay? And we are using the good evaluation, the linear relaxation, here for the knapsack problem, okay? And remember what we did: basically, exploring this tree, we would find a good solution there, with a value of 80. And then we would go back to the other part of the tree here, where we don't select the first item. And you see that the optimistic evaluation here is worse than the best solution that we have found, and therefore you can prune this entire subtree.

Okay, so that's what I told you before: in depth-first search, when you have the optimistic evaluation of a node, you compare it to the best solution that you have found so far, and if it's worse, then you just throw the entire subtree away. Okay? So that's what depth-first search is about. Okay?

Â Okay?

So in a sense, the key idea of depth-first search is, like in the NFL, you want to go deep. Okay? And try to find good solutions quickly. Okay, when does it prune? It prunes as soon as you reach a node whose evaluation is worse than the best found solution. Okay? And one of the questions I have for you is, how efficient is it from a memory standpoint, okay? I'm going to keep asking you questions like this during this class, okay? And the key, any time you want to answer them, is: you have to exaggerate, okay? So just assume extreme cases, and try to see how these things are working, okay?

So, how much space is depth-first branch and bound going to take? Okay, can you find the answer to that? Well, it's very simple, right? You always go left, left, left until you get stuck. Okay? So essentially, the number of nodes you can accumulate in memory at any one time is the length of that particular branch, which in this particular case is the number of items you can put in the knapsack. So yes, it's actually pretty memory efficient. You look at every one of the items, and you have to do that, because you have to find out whether you select them or not, and that's what you decide along a particular branch. Once you have explored that branch, you can remove some of these nodes, and you will explore other ones by taking the other branches, but you always have essentially one branch in memory at any one time.
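To make that memory argument concrete, here is a minimal sketch of depth-first branch and bound for knapsack in Python. The instance at the bottom (values 45, 48, 35; weights 5, 8, 3; capacity 10) is a reconstruction consistent with the numbers in the lecture, and the fractional `relaxation` helper is one standard way to compute the linear-relaxation bound; both are assumptions, not the lecture's exact code.

```python
# Depth-first branch and bound for knapsack.
# Items are (value, weight) pairs; instance and relaxation helper
# are illustrative reconstructions, not lecture code.

def relaxation(items, room, start, value):
    """Optimistic evaluation: greedily fill the remaining room with
    the remaining items, allowing a fraction of the first item that
    no longer fits (the linear relaxation)."""
    for v, w in items[start:]:
        if w <= room:
            room -= w
            value += v
        else:
            value += v * room / w  # fractional part of this item
            break
    return value

def dfs_knapsack(items, capacity):
    # Sort by value density so branching left ("take") follows greed.
    items = sorted(items, key=lambda it: it[0] / it[1], reverse=True)
    best = 0
    # Explicit stack of nodes: (next item index, room left, value so far).
    stack = [(0, capacity, 0)]
    while stack:
        i, room, value = stack.pop()
        if relaxation(items, room, i, value) <= best:
            continue  # prune: optimistic evaluation no better than best
        if i == len(items):
            best = value  # reached a leaf with a better solution
            continue
        v, w = items[i]
        # Push "skip" first so "take" (going left) is explored first.
        stack.append((i + 1, room, value))
        if w <= room:
            stack.append((i + 1, room - w, value + v))
    return best

print(dfs_knapsack([(45, 5), (48, 8), (35, 3)], 10))  # → 80
```

Notice that the explicit stack never holds more than two entries per level of the tree, so the space used is linear in the number of items, which is exactly the point made above.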

Â 4:13

Okay? So, let's look at the next technique, which is best-first branch and bound. And to illustrate this, I'm going to use a very, very simple relaxation, the one where we remove the capacity constraint, because that allows me to illustrate the concepts in a better fashion. Same ratios of value to weight as before, same capacity for the knapsack. Okay? And once again, all the nodes show the value accumulated so far, the space left in the knapsack, and the optimistic evaluation, okay.

So, initially, best-first search looks at the first item, and generates two subproblems, okay: whether you select the item, or you don't select the item, okay. And now best-first search is going to look at all the nodes which are there, okay, and select the one, okay, which is not closed, which means we haven't explored its children yet, and which has the best evaluation. In this particular case, we take the one where we selected the item, which has a value of 128, right? And we basically generate its two children, okay? One of them is infeasible, and the other one has a value of 80, okay?

And now we basically have to consider these three nodes, one of which is infeasible. So we essentially have to consider these two, and we take the one which has the best evaluation, which is this one. Okay, we expand its children again, okay, and what we get now are these three nodes, okay, which have evaluations 80, 83, and 35. And once again, we take the one which has the best evaluation, 83, okay, and we expand it. And we get a feasible solution there, which has a pretty bad value of 48, and an infeasible solution.

Okay, so at that point, once again, we will basically be taking the best node, the node which has the best evaluation, for expansion, okay? Now this node here, and I'll come back to that in a moment, we already know that we would never select it, right? Because it's worse than the best solution that we have. But we expand this one, and we find two solutions: one with value 80 and one with value 45. Okay, and this one is the best solution that we have. At this point we look at the best node which is not expanded inside the tree. We see this guy, but you see, this guy doesn't have to be expanded, because it's already worse than the best solution we have found so far, so we can stop the exploration at that point. Okay?

So that's best-first search. Okay. You always look at the node which has the best optimistic evaluation; that is the one that you select. You expand its children, and then you repeat the process: select the one which has the best evaluation, and keep doing this, okay? So in a sense, best-first search is always greedy, right: always go for the best, okay, and expand that one, okay? Most of the time, when you expand a particular node, the evaluations of its children may go down, and therefore you may not select them next, okay?

When is best-first search pruning? It's essentially pruning when all the nodes that you could still expand have an evaluation which is worse than the best solution that you have found so far. At that point, you have all these nodes which are floating around. They've not been expanded, okay, but it makes no sense to actually explore any of them, so you stop. And that's when the computation stops. Now, is it memory efficient?

Think about it, okay, once again. Exaggerate. Okay? How would we be exaggerating in this particular problem? Okay? Well, we can be really silly, right? Suppose that the optimistic evaluation is amazingly optimistic: it's going to give you plus infinity all the time. So if you do that, how much space are you going to use in best-first branch and bound? Think about it, okay? Essentially, what you're going to do is take a node. They all have the same value, infinity. You're going to expand one, okay? And then you're going to expand the other one, and so on. And you will have to expand all of them. So you're going to store the entire tree inside memory, which is kind of a disaster, right? Because not only will you take exponential time, you will also take exponential space. And space is actually one of the main limits in computers these days. Okay, so this is the worst case for best-first branch and bound.

Now you may ask, when is best-first branch and bound really good? Okay? And once again, exaggerate, right? And we can do the exact opposite exaggeration, right? Suppose you have the perfect evaluation, okay? If you have the perfect evaluation, okay, then you will expand the minimal number of nodes. Okay, so now you see a little bit how best-first branch and bound behaves. If the relaxation is really good, then the number of nodes that you explore will be very small; otherwise, you will explore a large number of nodes, and all these nodes are inside memory. Okay.
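Here is a minimal sketch of how best-first branch and bound might be implemented, using a priority queue (a max-heap, via negated keys) of open nodes ordered by their optimistic evaluation. The `relaxation` helper and the small instance are illustrative assumptions; in the worst case discussed above, the `frontier` heap is exactly the structure that can grow to exponential size.

```python
# Best-first branch and bound for knapsack: always expand the open
# node with the best optimistic evaluation. Relaxation helper and
# instance are illustrative assumptions, not lecture code.
import heapq

def relaxation(items, room, start, value):
    """Linear-relaxation bound: greedy fill, plus a fraction of the
    first item that does not fit."""
    for v, w in items[start:]:
        if w <= room:
            room -= w
            value += v
        else:
            value += v * room / w
            break
    return value

def best_first_knapsack(items, capacity):
    items = sorted(items, key=lambda it: it[0] / it[1], reverse=True)
    best = 0
    # Max-heap of open nodes via negated keys:
    # (-evaluation, next item index, room left, value so far).
    frontier = [(-relaxation(items, capacity, 0, 0), 0, capacity, 0)]
    while frontier:
        neg_eval, i, room, value = heapq.heappop(frontier)
        if -neg_eval <= best:
            break  # even the best open node cannot beat the incumbent
        if i == len(items):
            best = value  # leaf: a new best feasible solution
            continue
        v, w = items[i]
        # Generate both children: take the item, or skip it.
        for child in ((room - w, value + v), (room, value)):
            if child[0] >= 0:  # feasible only if the item fits
                bound = relaxation(items, child[0], i + 1, child[1])
                if bound > best:
                    heapq.heappush(frontier, (-bound, i + 1, *child))
    return best

print(best_first_knapsack([(45, 5), (48, 8), (35, 3)], 10))  # → 80
```

With the perfect evaluation, the heap stays small; with a uselessly optimistic one, every node ends up pushed onto it, which is the exponential-space scenario above.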

Now, let me talk about the last search technique that I want to talk about today, which is limited discrepancy search. It's actually a very interesting and amazing search strategy. And it assumes that you have a good heuristic, okay. Assume that you have a greedy algorithm which is really good, okay, and that it makes very few mistakes, okay; that's the assumption: the heuristic is making very few mistakes. We assume that the search tree is binary, okay, and following the heuristic means always going left, left, left, left. Right, so you select the item, you select the item, you select the item. Okay, and branching right means the heuristic is making a mistake. Okay.

So if you do this, okay, this is what limited discrepancy search does, okay. It wants to avoid mistakes at all costs, okay. So we basically explore the search space by increasing number of mistakes. We first explore the part which has zero mistakes, then one mistake, then two mistakes. Okay, and what we're doing there is really trusting the heuristic less and less. Okay. We start by trusting the heuristic completely, trying to avoid mistakes, and then allow a certain number of mistakes, and then more and more mistakes, until we don't trust the heuristic at all. Okay.

So in a sense, the way to view this is that limited discrepancy search can be viewed as a technique which basically explores this entire tree through a bunch of waves. The first wave assumes that you make no mistakes. The next wave, wave number one, assumes that you can make one mistake. And then wave number two, two mistakes, and so on and so forth.

Okay? So let me show you that on the exhaustive tree. Okay? So you start at the root of this tree, and then you assume that you make no mistakes: you follow the heuristic all the time. That means what? Branching left, left, left, okay, and that's what you see there. This is the first wave. Okay.

Now, the second wave is going to allow you one mistake. Okay. So that basically means that you can branch right only once. Okay. So for instance, you can go right here, okay, but then you have to go left, left, left, left. Okay, and this is what you see in the second wave: you go right and then left, left, left; or you go left, then one right, and then only lefts; or you go left, left, and then one right over here, right. So this is what you explore in the second wave, and there is something which is really interesting here, right? This is one half of the search space, and this is the second half of the search space. And what you can see here is that already, in the second wave, you are exploring one particular configuration which is in the second half of the search space. Which depth-first search would never do until it has explored the entire first half of the search space, right? And this is one of the interesting things about limited discrepancy search: it actually probes the search space in many different places. Okay?

Then the third wave is going to make two mistakes; that means that you can take two rights, and the rest are lefts. Okay, and that's what you see there. At that point, we have almost explored everything. Okay. And then, essentially, the last wave is going to explore the guys that are left.

Okay, so let me show you this on the knapsack example as well. Okay. So we start from the root configuration, okay: in the first wave, we start from the root and we branch left only, boom, boom, boom. And in this particular case, it's kind of annoying, because you get into infeasibility very quickly. Okay? In the second wave, okay, you can basically do one right. And this is what you get, okay? So you basically make one right there, you let the heuristic make a mistake, and then you go left; or you go right there, and go left again. And at that particular point, you find the optimal solution already. Okay? And then in wave three, you explore two mistakes everywhere. And most of these nodes are going to be pruned, of course, because we already have the optimal solution there, okay?

So, limited discrepancy search. In summary: trust the greedy heuristic. When does it prune? Think about it. When does it prune? Well, it's exactly like in depth-first search: when you see a node, okay, whose evaluation is worse than the best found solution so far, you know that you don't have to explore it. So it prunes like depth-first and best-first search do.

And now, here is an interesting question, okay? Is it memory efficient? Okay. So, compared to depth-first and best-first search, is it memory efficient? Okay. Now, this is a very interesting question. You know why? Because I haven't even really told you how to implement this, okay. And depending upon the way you implement this search, okay, there will be an interesting trade-off between space and time, okay. You can implement it very efficiently in time, but it will take a lot of space. Or you can implement it by doing redundant work, and it will take minimal space. So the implementation can sit anywhere between the best-first and the depth-first search approaches. Okay? So think about it, and think about how you would do this. This is a very interesting topic.
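As one illustration of that trade-off, here is a minimal sketch of the low-memory variant of limited discrepancy search: each wave restarts from the root with a larger mistake budget, redoing the work of earlier waves but keeping only one branch in memory at a time, like depth-first search. The "going left means take the item" heuristic and the instance are assumptions for illustration, and relaxation-based pruning is omitted to keep the wave structure clear.

```python
# Limited discrepancy search for knapsack, low-memory style: wave k
# restarts from the root and allows at most k "mistakes" (branching
# right, i.e. overruling the heuristic). This redoes earlier waves'
# work but stores only the current branch, like depth-first search.
# Heuristic and instance are illustrative assumptions.

def lds_knapsack(items, capacity):
    # Sort by value density: the greedy heuristic we are trusting.
    items = sorted(items, key=lambda it: it[0] / it[1], reverse=True)
    n = len(items)
    best = 0

    def probe(i, room, value, budget):
        nonlocal best
        if i == n:
            best = max(best, value)  # reached a leaf
            return
        v, w = items[i]
        # Branch left: follow the heuristic, take the item (no mistake).
        if w <= room:
            probe(i + 1, room - w, value + v, budget)
        # Branch right: overrule the heuristic, spend one discrepancy.
        if budget > 0:
            probe(i + 1, room, value, budget - 1)

    # The waves: trust the heuristic less and less.
    for wave in range(n + 1):
        probe(0, capacity, 0, wave)
    return best

print(lds_knapsack([(45, 5), (48, 8), (35, 3)], 10))  # → 80
```

On this instance, wave 0 dead-ends in infeasibility, and wave 1 (one mistake allowed) already reaches the optimum of 80, matching the behavior described in the lecture.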

Now, limited discrepancy search has one beautiful application in scheduling. We'll come back to that, and to rounding as well. So it's a very interesting technique when you have a good heuristic, when you know what you're doing.

Okay? So, let me conclude here. So, we have seen search strategies here, and we talked last time about relaxations. Okay? The big idea last time was relaxation; the big idea today is search, okay. Now, how do you choose between them, okay? This is a very interesting topic in optimization, one of the most important topics. You can find really strong relaxations, but they will be so slow that, although you explore a very small tree, it takes a long time. Or you can have a relaxation which is weaker but really fast to compute, and which doesn't reduce the size of your search space as substantially. So you have to find the right relaxation, and the right part of the search tree that you will explore. This is a very interesting topic. Okay? Most of this is going to be problem specific, like everything in optimization. Okay?

And finally, you know, there are many other strategies. And I really encourage you to think outside the box, and see if you can find other kinds of strategies that are going to be good. Okay? Think about it, okay? So, at this point, and this is very interesting, we have covered the knapsack problem entirely. Okay. It's time for you to start the assignments, okay, and we'll go into more sophisticated techniques in the next couple of lectures. Thank you very much, guys. Okay, so it's time to actually go to work and actually solve these assignments. Bye.
