Okay?

So, let's look at the next technique, which is best-first branch and bound.

And to illustrate this, I'm going to use a very, very simple relaxation, the one where we remove the capacity constraint, because that allows me to illustrate the concepts in a better fashion.

Same ratio of value to weight as before. Same capacity for the knapsack. Okay?

And once again, all the nodes show the value that I've accumulated, the space left in the knapsack, and the optimistic evaluation, okay.
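As a quick sketch, the optimistic evaluation for this relaxation is trivial to compute: if we drop the capacity constraint, every remaining item fits, so the bound is just the accumulated value plus the values of all the items not yet decided. The item values 45, 48, and 35 below are assumed from the example's numbers (their sum gives the 128 seen at the root); the function name is illustrative.

```python
def optimistic_estimate(values, taken_value, remaining_items):
    # Relaxation: drop the capacity constraint, so every remaining
    # item is assumed to fit; just add up all their values.
    return taken_value + sum(values[i] for i in remaining_items)

# Root of the example tree: nothing decided yet, all items remain.
print(optimistic_estimate([45, 48, 35], 0, [0, 1, 2]))  # 128
```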

So, initially, best-first search looks at a particular item and generates two subproblems, okay.

Whether you select the item or you don't select the item, okay.

And now best-first search is going to look at all the nodes which are there, okay, and select the one which is not closed, which means we haven't explored its children yet, and which has the best evaluation.

And in this particular case, we take the one where we selected

the item, and which has a value of 128, right?

And we basically generate its two children, okay? One of them is infeasible, and the other one has a value of 80, okay?

And now we basically have to consider these three nodes, one of which is infeasible.

So we have essentially to consider these two.

And we take the one which has the best evaluation, which is this one.

Okay, we expand its children again, okay, and what we get now is these three nodes, which have these evaluations, 83 and 35.

And once again we take the one which has

the best evaluation, 83, okay, and we expand it.

And we get a feasible solution there which has

a pretty bad value of 48, and an infeasible solution.

Okay, so at that point, once again, we will basically be taking the best node, the node which has the best evaluation, for expansion, okay?

So this node, we already know, but I'll come back to that in a moment, we already know that we would never select it, right? Because its evaluation is worse than the best solution that we have. But we expand this one and we find two solutions.

One with value 80 and one with value 45.

Okay, and this is the best solution that we have.

At this point, we look at the best node which is not expanded inside the tree. We see this guy, but this guy doesn't have to be expanded, because its evaluation is already worse than the best solution we have found so far, so we can stop the exploration at that point. Okay?

So that's best-first search. Okay. You always look at the node which has the best optimistic evaluation; that is the one that you select.

You expand its children, and then you repeat the process: select the one which has the best evaluation, and keep doing this, okay?

So, in a sense, best-first search is always greedy, right? Always go for the best, okay, and expand that one, okay?

Most of the time, when you expand a particular node, the evaluations of its children may go down, and therefore you may not select them anymore, okay?
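To make the process concrete, here is a minimal sketch of best-first branch and bound for the knapsack in Python, assuming the simple relaxation above (drop the capacity constraint). The function name and node layout are illustrative, and the weights and capacity in the usage line are assumed values chosen so the run matches the example's numbers.

```python
import heapq

def best_first_knapsack(values, weights, capacity):
    # Optimistic evaluation from the relaxation: drop the capacity
    # constraint, so add the values of all remaining items.
    def estimate(value, item):
        return value + sum(values[item:])

    best_value = 0
    # Max-heap via negated estimates: the open node with the best
    # optimistic evaluation is always on top.
    heap = [(-estimate(0, 0), 0, capacity, 0)]
    while heap:
        neg_est, value, room, item = heapq.heappop(heap)
        if -neg_est <= best_value:
            break  # no open node can beat the incumbent: stop
        if item == len(values):
            best_value = max(best_value, value)
            continue
        if weights[item] <= room:  # left child: take the item
            v = value + values[item]
            heapq.heappush(heap, (-estimate(v, item + 1), v,
                                  room - weights[item], item + 1))
        # Right child: skip the item.
        heapq.heappush(heap, (-estimate(value, item + 1), value,
                              room, item + 1))
    return best_value

print(best_first_knapsack([45, 48, 35], [5, 8, 3], 10))  # 80
```

Note how the loop stops as soon as the best open node is no better than the incumbent, which is exactly the pruning condition discussed below.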

When is best-first search pruning?

It's essentially pruning when all the nodes that you could expand have a value which is worse than the best solution that you have found so far.

At that point, you have all these nodes which are floating around.

They've not been expanded, okay, but it makes no

sense to actually explore any of them, so you stop.

And that's,

you know, when the computation stops. Now, is it memory efficient?

Think about it, okay, once again.

Exaggerate. Okay?

How would we be exaggerating in this particular problem?

Okay? Well, we can just be silly.

Right, suppose that the optimistic evaluation is going to be amazingly optimistic. It's going to give you plus infinity all the time.

So if you do that, how much space

are you going to use in best-first branch and bound?

Think about it, okay?

Essentially, what you're going to do is look at all the nodes. They all have the same evaluation, infinity. You're going to expand one, okay? And then you're going to expand another one, and so on. And you will have to expand all of them.

So, you're going to store the entire tree inside

memory, which is kind of a disaster, right?

Because normally you will take exponential time,

but you will also take exponential space.

And space is actually one of the main limits in computers these days.

Okay, so, you know, this is the worst case for best-first branch and bound.

Now you may ask, when is best-first branch and bound really good?

Okay? And once again, exaggerate, right?

And we can do the exact opposite exaggeration, right?

So you have the perfect evaluation, okay?

If you have the perfect evaluation, okay, then

you will select the minimal number of nodes.

Okay, so you see a little bit what happens with best-first branch and bound. If the relaxation is really good, then the number of nodes that you explore will be very small; otherwise, you will explore a large number of nodes, and these nodes are inside memory.

Okay.

Now, let me talk about the last search technique that

I want to talk about today, which is limited discrepancy search.

It's actually a very interesting and amazing search strategy.

And it assumes that you have a good heuristic, okay. Assume that you have a greedy algorithm which is really good, okay, and that it makes very few mistakes, okay. That's the key issue.

The heuristic is making very few mistakes. We assume that the search tree is binary, okay, and following the heuristic means always going left, left, left, left. Right? So you select the item, you select the item, you select the item.

Okay, and branching right means the heuristic is making a mistake.

Okay.

So if you do this, okay, this is what limited discrepancy search does, okay.

It wants to avoid mistakes at all costs, okay.

So we basically explore the search space by increasing number of mistakes. We first explore something which has zero mistakes, then one mistake, then two mistakes.

Okay, and what we're doing there is really trusting the heuristic less and less.

Okay.

We start by trusting the heuristic, trying to avoid mistakes, and then

allow a certain number of mistakes and then more and more mistakes

until we don't trust the heuristic. Okay.

So, in a sense, the way to view this is that limited discrepancy search can be viewed as a technique which is basically exploring this entire search tree through a bunch of waves.

The first wave assumes that you make no mistakes. The next wave, wave number one, assumes that you can make one mistake. And then wave number two, two mistakes, and so on and so forth.
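The waves can be sketched directly: assuming a binary tree where the heuristic always says "go left," wave k enumerates exactly the leaves whose paths take k right branches. This is an illustrative Python sketch, not the lecture's code.

```python
from itertools import combinations

def lds_waves(depth):
    # Wave k visits every leaf whose path takes exactly k right
    # branches, i.e. k "mistakes" against the heuristic (which
    # always recommends going left).
    for k in range(depth + 1):
        for rights in combinations(range(depth), k):
            yield tuple('R' if i in rights else 'L' for i in range(depth))

# Depth-2 tree: wave 0, then wave 1 (one 'R'), then wave 2.
print(list(lds_waves(2)))
# [('L', 'L'), ('R', 'L'), ('L', 'R'), ('R', 'R')]
```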

Okay? So let me show you that on

the exhaustive tree. Okay?

So you start at the root of this tree, and then you assume that you make no mistakes.

You follow the heuristic all the time.

That means what?

Branching left, left, left, okay, and that's what you see there.

This is, you know, the first wave that you see there.

Okay.

Now, the second wave is going to allow you to make one mistake.

Okay.

So that basically means that you can only branch once, on the right.

Okay.

So, for instance, you can go here, okay, but then you would have to go left, left, left, left. Okay, and this is what you see in the second wave. You go right and then left, left, left; or you go left, then one right, and then only left; or left, left, and then one right over here, right?

So this is what you explore in the second wave,

and there is something which is really interesting here, right?

So, this is half of the search space, this is the second half of the search space.

And what you can see here is already, in wave, in the second wave,

you are already exploring one particular configuration, which

is in the second half of the search space.

Which depth-first search would never do until it has explored the entire first half of the search space, right?

And this is one of the interesting things about limited discrepancy search.

It actually probes the search space in many different places.

Okay?

Then the third wave is going to make two mistakes; that means that you can take two rights and one left. Okay, and that's what you see there.

And at that point, we have almost explored everything.

Okay.

And then essentially the last wave is going to explore these guys on the left.

Okay, so let me show you this on the knapsack example as well.

Okay.

So we start from the configuration, okay, in wave one, we start from the root.

And we branch left only, boom, boom, boom.

And in this particular case, it's kind

of annoying, because you get into infeasibility

very quickly. Okay?

The second wave, okay, is basically, you can do one right.

And this is what you get, okay?

And so you basically make one right there, that is, you let the heuristic make a mistake, and then you go left. Or you go right there, and go left again.

And at that particular point, you find the optimal solution already.

Okay?

And then in wave three, you explore two mistakes. And most of these things are going to be rejected, of course, because we already have the optimal solution there, okay?

So, limited discrepancy search.

In summary, trust the greedy heuristic.

When does it prune? Think about it.

When does it prune?

Well, it's exactly like in best-first search.

When you see a node, okay, that has an evaluation which is worse than the best solution found so far, you know that you don't have to explore it. So it prunes like best-first search.

And now, here is an interesting question, okay?

Is it memory efficient? Okay.

So, compared to depth-first and best-first search, is it memory efficient?

Okay. Now this is a very interesting question.

You know why?

Because I haven't even really told you how to implement this, okay.

And depending upon the way you would implement this search, okay.

There will be an interesting trade off between space and time, okay.

You can implement it very efficiently,

but it will take a lot of space.

Or you can implement it by doing redundant work and it will take minimal space.

So the implementation can be between

the best-first and the depth-first search approach.

Okay?

So think about it and think how you would do this.

This is a very interesting topic.
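One way to sit at the memory-efficient end of that trade-off is to re-run a depth-first probe that is allowed at most k discrepancies, for k = 0, 1, 2, ...: each wave keeps only the current path in memory, but re-explores nodes from earlier waves. A hedged sketch, with an abstract leaf-scoring function standing in for a real problem; the names are illustrative.

```python
def lds_probe(path, depth, budget, leaf_value):
    # Depth-first probe that may go right (against the heuristic)
    # at most `budget` times; only the current path is in memory.
    if len(path) == depth:
        return leaf_value(path)
    best = lds_probe(path + ('L',), depth, budget, leaf_value)
    if budget > 0:  # spend one discrepancy on a right branch
        best = max(best,
                   lds_probe(path + ('R',), depth, budget - 1, leaf_value))
    return best

def limited_discrepancy_search(depth, leaf_value):
    # One wave per discrepancy budget; redundant work across waves
    # is the price paid for the small memory footprint.
    return max(lds_probe((), depth, k, leaf_value)
               for k in range(depth + 1))
```

A real implementation would also prune nodes whose optimistic evaluation is worse than the incumbent, as described above; the sketch leaves that out for brevity.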

Now, limited discrepancy search has one beautiful application in scheduling. We'll come back to that, and to rounding as well.

So it's a very interesting technique when you have

a good heuristic, when you know what you're doing.

Okay? So,

let me conclude here.

So, we have seen search strategies here, and last class we talked about relaxation.

Okay?

The big idea last time was relaxation. The big idea today is search, okay.

Now how do you choose between them, okay.

This is a very interesting topic in optimization.

This is one of the most important topics.

You can find a really strong relaxation, but it may be so slow to compute that, even though you explore a very small tree, it takes a long time. Or you can have a relaxation which is really fast to compute, but which doesn't reduce the size of your search space substantially.

So, you have to find, you know the right relaxation and

the right part of the search tree that you will explore.

This is a very interesting topic. Okay?

Most of this is going to be problem specific like everything in optimization.

Okay?

And finally, you know, there are many other strategies. And I really encourage you to think outside the box, and see if you can find other kinds of strategies that are going to be good.

Okay? Think about it, okay?

So, at this point, this is very interesting: we have covered the knapsack problem entirely.

Okay.

It's time for you to start the assignments, okay, and we'll

go into more sophisticated techniques in the next couple of lectures.

Thank you very much guys.

Okay, so it's time to actually go to work and actually solve these assignments.

Bye.