Okay, discrete optimization: tabu search.

I want to spend a little bit of time on tabu search, because it is very interesting, and I want to tell you a little bit of a story first. John Hooker once told me that for every invention there are three stages. The first stage is that people look at the invention and say: oh, this can't work, this is all wrong. Then you go to the second stage, where people realize that it is actually working, but they say: oh, but this is trivial. That's the second phase of an invention; people will say it's trivial. That lasts for a while, and then the third stage is the most interesting one. People will say: oh, that invention, I invented it. It's a footnote in my thesis.

So tabu search is like that. It's an amazing thing, invented by Fred Glover. The first time you look at it, you say: well, this can't work, right? Then you try it, and on your favorite problem you see these results coming up that are amazing most of the time. So you say: wow, this is actually working. But it's so simple it's not even interesting, right? And then afterwards, you spend your life working on tabu search. That's what tabu search can do to you.

So I want to spend some time telling you about the richness of this particular meta-heuristic, and all the issues you face in trying to design the right compromise between the different aspects of a tabu search algorithm. Remember what we want to do: we start from a point and go down to a local minimum, and then sometimes we want the ability to go back up and then go down again. If we don't have any such mechanism, we typically can't escape the local minimum; we would stay there all the time.

So the idea of tabu search is that once you have been to a particular state, you consider it tabu: you have seen it, and you don't want to go back there. Then you can start going up to a particular node, like this one. When you are there, with something like a greedy approach, this other state may be in the neighborhood and you would want to go there. But since it's tabu you can't, so you have to find the next best, assuming a greedy approach. This one may be the next best, so you move up there. Once again, from up there the greedy choice would look at this state, and of course the first one, and say: ooh, that's where I want to go. But once again you can't, because they are tabu, and so you keep going up. Now you are at the top of this landscape, and you can actually start going down to the ultimate solution.

So in a sense, the key abstract idea of tabu search is that you keep the nodes that you have visited, and you never go back there. You have been there, seen them; no way you want to return to them. This is the tabu search algorithm that I've shown you. The key aspect here is that you always keep this tabu list of, essentially, all the nodes that you have visited so far.
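To make the schematic concrete, here is a small sketch of this full-memory variant. It is my own toy illustration, not the course's code: a one-dimensional landscape, a plus-or-minus-one neighborhood, and a tabu set that holds every visited state.

```python
def tabu_search_full_memory(start, neighbors, f, max_iters=100):
    """Tabu search that remembers every visited state (exact but costly).

    neighbors(s) returns candidate states; f(s) is minimized.
    """
    current = start
    best = start
    tabu = {start}                        # every state visited so far
    for _ in range(max_iters):
        # keep only neighbors we have never visited
        candidates = [s for s in neighbors(current) if s not in tabu]
        if not candidates:
            break                         # nowhere legal left to go
        current = min(candidates, key=f)  # greedy: best non-tabu neighbor
        tabu.add(current)
        if f(current) < f(best):
            best = current
    return best

# toy landscape: local minimum at x=2 (f=1), global minimum at x=6 (f=0)
values = [5, 3, 1, 4, 3, 2, 0, 6]
f = lambda x: values[x]
neighbors = lambda x: [y for y in (x - 1, x + 1) if 0 <= y < len(values)]
print(tabu_search_full_memory(0, neighbors, f))  # -> 6, escaping x=2
```

The tabu set here is exactly the ever-growing list of visited nodes; because x=2 is tabu once visited, the search is forced over the ridge to the global minimum.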

This list grows every time you move to another state. Now, one of the basic issues with the tabu search algorithm is this: you want to select the best configuration that is not tabu (that's what we have seen), but it's very expensive to maintain all the states that you have been to. Assume you have thousands of variables: you would have to keep all of them, and, as I've shown you last time, you may visit hundreds of thousands of configurations. So you start using a lot of space, and every time you select a neighbor, you have to ask: is it legal or not? You have to compare it with all the states that you have seen. So it's very expensive from, let's say, both a time and a space standpoint. What you can do instead is use what is called short-term memory: instead of keeping all these nodes, you keep only a subset of them, essentially the recent ones.

So you look at a suffix of the tabu list, which holds the nodes that you have visited recently, and you are basically saying: I don't want to go back to the states I have been to recently. You can expect that the nodes you haven't seen for a while are very far away; they are not inside your neighborhood, so you don't care about them. So you keep a fixed-length tabu list: you keep, say, the last 20 states, something like that.

Of course, in practice, you can dynamically increase or decrease this list to get a good compromise: not being stuck in a local minimum, but also not diversifying the search too much. For instance, you can decrease the size of the list if the objective function is degrading, or increase it when you are improving the objective all the time.

So you can use things like this. Now, the question you still have is that at this point you are storing entire configurations, and that may still be too costly. Sometimes it's not, and you can implement things this way directly. But sometimes you are going to say: wow, this is really costly. I have to store the entire state, I have to do all these comparisons that take a lot of time, and my goal is to evaluate many, many configurations very quickly. So what you can do, instead of storing these states, is store an abstraction of this suffix. This abstraction can be arbitrarily complicated or arbitrarily simple, but you have to find a way of very concisely abstracting the set of nodes that you have visited recently. There are many possibilities, and I'm going to show you some of them.

So this is the car sequencing application that I've shown you before. Once again, you have this assembly line, and you can swap different slots of the assembly line. This is a greedy local improvement algorithm: you select a particular slot which has a violation (you are violating some capacity constraint), and you select another slot; that's a quadratic neighborhood. Then you basically say: I want to take the best of these moves. You apply the move if it improves the value of the objective; otherwise, you are in a local minimum, so you break out of the search.

So this is a greedy algorithm. What I want to do here is implement a tabu search algorithm, but I don't want to keep the configurations, as I told you. We can explore hundreds of thousands of them, and there may be 300 slots or more, so this is a lot of space, and a lot of time that you would spend just comparing these things. So what we're going to do, instead of storing the states, is store the transitions, not the states themselves. And what is the transition in this particular case? I'm just swapping two cars. So a move is swapping two cars in the assembly line, and what I'm going to store inside the tabu list is these pairs (actually I wrote them ai, bi): you are basically remembering that you swapped these two slots on the assembly line. And the key idea is that in the future you don't want to re-swap these two, because you would be going back to a place where you have been.

So instead of keeping the states, you are keeping the operations that have been moving you from state to state; in this particular case, the swaps. A configuration is going to be tabu, in this particular abstraction, if it can be obtained by swapping two things which are inside the tabu list. So you only look at the swaps which are not inside the tabu list: you cannot re-swap things that you have been swapping recently. That's basically the abstraction here. In a sense, you are not storing the states, and you are not abstracting them in a completely exact fashion; you are basically remembering some of the operations that you have done so far, and you don't want to apply these operations again. That's the kind of thing you do here.

Okay, so how do we implement this? This is the basic idea. You keep a counter IT, which basically denotes the iteration you are at inside the algorithm; every time you perform a move, that counter is incremented. Then you use a data structure tabu[i,j], where i and j are the two slots that you can swap, and that data structure represents the first iteration at which it will be legal to actually swap i and j again. Think about it this way: you have just performed the swap between i and j, and you want to make sure you are not going to swap these two again for a bunch of iterations. So inside tabu[i,j] you store the next iteration at which it will be legal to swap these two slots. In the next two conditions here, I'm going to use a fixed tabu size L, but you could have a dynamic one; the same principle applies.

So, essentially, when is a move tabu? The move (i,j) is going to be tabu if tabu[i,j] is actually greater than the iteration number. That means that only later will you be allowed to swap these two. Otherwise, the swap is allowed; it is legal at this point to swap these two. So it's very easy: in constant time you can find out whether a move is tabu.

And when you apply a move, of course, you have to update the tabu status, the tabu data structure, of that particular move. Essentially, if you are swapping i and j, tabu[i,j] becomes the value of the iteration counter plus L, where L is the size of the tabu list. (Once again, if the tabu list is dynamic, L changes over time, but that doesn't matter.) So you are basically saying: IT plus L is the next time at which you can actually swap these two again. That's the basic idea.
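A minimal sketch of that bookkeeping (the variable names are mine, not the slides'):

```python
L = 10                     # tabu tenure (fixed here; could be dynamic)
n = 5                      # number of slots
# tabu[i][j] = first iteration at which swapping i and j is legal again
tabu = [[0] * n for _ in range(n)]
it = 0                     # the iteration counter IT

def is_tabu(i, j):
    # the move is tabu while its stored iteration lies in the future
    return tabu[i][j] > it

def make_tabu(i, j):
    # after swapping i and j, forbid re-swapping them for L iterations;
    # the swap is symmetric, so record both orientations
    tabu[i][j] = it + L
    tabu[j][i] = it + L

make_tabu(1, 3)
print(is_tabu(1, 3))       # -> True: we just swapped slots 1 and 3
it += L
print(is_tabu(3, 1))       # -> False: the tenure has expired
```

Both the check and the update are constant time, which is exactly the point of this abstraction.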

So it is very simple, in a sense. You keep track of when you can perform a move; while it is tabu, you still cannot perform it, and when you actually perform the move, you make sure you remember the next time at which you can apply it again.

So this is essentially the algorithm. It's basically instrumenting the greedy local search that I've shown you with this tabu data structure. Focus on the highlighted part there; it's actually reasonably simple. You compute the neighborhood as usual, and then you have a selection which is basically trying to find out which of these neighbors are legal, which of them are not tabu. The way you do this is you take these pairs and test whether the iteration counter is actually greater than the tabu value stored for that particular move. The next thing is that when you actually perform a particular move, you have to make sure the tabu data structure is updated with the right counter. In this particular case we are swapping i and j, and swapping i and j or swapping j and i is the same thing, so we change the tabu value of both of these moves, because they do the same thing.

So essentially this is what you do: you take the neighborhood and filter it so that you only keep the moves which are not tabu; when some of the moves are not tabu, you take the best one and apply it; you update the tabu data structure so that this particular move becomes tabu; and then you keep iterating. Of course, at every step you increment the iteration counter, which means that some of the moves that are currently tabu are not tabu any more. So that's the basic idea: very simple filtering, and then updating the data structure.

Now, what happens if all the moves are tabu? Are you going to get stuck forever? Well, you have this very nice iteration counter, and it's increasing all the time. So if all the moves are tabu, what you do is sit around and wait a little until the counter has incremented enough that some move is not tabu anymore. So you don't get stuck, essentially; you just have to wait a little bit until the next move is not tabu.
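Putting the pieces together, here is a hedged sketch of the whole loop for a swap neighborhood. The objective is a toy stand-in for the capacity violations of the lecture (it just counts adjacent equal items), and all names are my own.

```python
import itertools

def tabu_swap_search(seq, f, L=4, max_iters=100):
    """Greedy swap-based local search with a transition tabu structure.

    f(seq) is minimized; a move (i, j) swaps positions i and j.  When
    every move is tabu, the iteration counter simply advances: we wait.
    """
    seq = list(seq)
    n = len(seq)
    tabu = {}                                 # (i, j) -> first legal iteration
    best, best_f = list(seq), f(seq)
    for it in range(max_iters):
        legal = [m for m in itertools.combinations(range(n), 2)
                 if tabu.get(m, 0) <= it]     # filter out the tabu moves
        if not legal:
            continue                          # all moves tabu: wait a little

        def value(move):                      # objective after the move
            i, j = move
            seq[i], seq[j] = seq[j], seq[i]
            v = f(seq)
            seq[i], seq[j] = seq[j], seq[i]   # undo: evaluation only
            return v

        i, j = min(legal, key=value)          # greedy: best non-tabu move
        seq[i], seq[j] = seq[j], seq[i]
        tabu[(i, j)] = it + L                 # no re-swap of (i, j) for L steps
        if f(seq) < best_f:
            best, best_f = list(seq), f(seq)
    return best, best_f

# toy stand-in for capacity violations: count adjacent equal items
violations = lambda s: sum(a == b for a, b in zip(s, s[1:]))
print(tabu_swap_search([1, 1, 2, 2, 3, 3], violations)[1])  # -> 0
```

Note how the tabu structure only ever makes a move illegal temporarily: the `it + L` entry expires by itself, which is exactly the waiting behavior just described.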

Okay, now, in practice, the schematic that I've shown you can be improved in a lot of ways. Typically, you don't actually perform the swaps to find the best one; you typically use incremental data structures so that you can evaluate these things very quickly. You never simulate the move and then measure what the move is giving you; instead you compute: if I were to perform that move, this would be the value of the objective function. This is called differentiation; in a sense, it allows you to explore many, many more moves, because every one of these move evaluations is much faster. There are a lot of papers describing how to do that for various problems; it's basically about finding the right data structures to evaluate moves very quickly. And this is very important, because in general tabu search is always somewhat greedy or semi-greedy: you always try to find the best, or some of the best, neighbors to select. That's a key aspect here. You are not evaluating all the moves by actually performing them; you are using differentiation and trying to evaluate them very quickly, not performing the actual move in order to evaluate it.
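A small illustration of the idea, under my own toy objective again (adjacent equal items): the delta of a swap is computed by re-examining only the few positions the swap touches, without ever performing the move.

```python
def full_violations(s):
    """Objective from scratch: number of adjacent equal pairs, O(n)."""
    return sum(a == b for a, b in zip(s, s[1:]))

def swap_delta(s, i, j):
    """Change in the objective if s[i] and s[j] were swapped, computed
    WITHOUT performing the swap: only the adjacencies touching i and j
    are re-examined, so each evaluation costs O(1)."""
    def val(k):                          # value at k after the hypothetical swap
        return s[j] if k == i else s[i] if k == j else s[k]
    # indices k of the adjacent pairs (k, k+1) that the swap can affect
    touched = {k for k in (i - 1, i, j - 1, j) if 0 <= k < len(s) - 1}
    before = sum(s[k] == s[k + 1] for k in touched)
    after = sum(val(k) == val(k + 1) for k in touched)
    return after - before

print(swap_delta([1, 1, 2, 2], 1, 2))    # -> -2: both violations disappear
```

For this objective the savings are modest, but for constraints whose evaluation is expensive, this is exactly where the speedup of differentiation comes from.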

Now, one of the things you may ask is: what about this abstraction? Is it actually stronger than keeping the suffix of the tabu list, or is it weaker? And the answer is: it's a bit of both.

It is too weak because it cannot prevent cycling. You don't really keep the nodes, and you also don't keep the entire list of states that you have visited; you only consider a suffix. So you can still cycle: go back to a state that you have seen, do a number of operations, return to that state, do a number of operations, return, and you cycle. So it is too weak in that sense: you can go back to a state that you have seen before. When you detect that, what do you do? Typically, you increase the size of the tabu list. But it is too weak in this particular sense.

It is also too strong, because the only things you are remembering here are the moves; they are not really representing the states. It may very well be that applying one of these moves in the current state would give you a state that you haven't seen before. The only thing you remember is the moves that you have done; but after you have applied your sequence of moves, applying one of the first moves again may be completely fine, and give you another state. So that may happen, and in that sense the tabu list here is too strong. It is actually preventing you from going to states that are actually nice, that you haven't visited before, and which can actually be pretty good. In that sense it is stronger than keeping the entire list of visited states so that you don't go back to a state you have seen before: it can prevent you from going to states that have not been visited yet.

So what you can do to address that is essentially strengthen the abstraction. Instead of just storing the move, you can say: I'm going to store more features of these particular states; I can try to refine the way I capture the state. One of the things you can do easily is, for instance, capture the objective values. Think about the car sequencing algorithm again: you had slots ai and bi that you wanted to swap. What you can do is keep inside the tabu list this kind of quadruple: you can store ai, you can store bi, but you can also store the value of the objective function where you are now (this is fi there), and the value of the objective function that you would get if you performed the move (this is denoted fi+1 there). So, in a sense, what you are saying is: if I'm in a state whose evaluation is fi, and I want to swap ai and bi, and that would result in a state whose objective value is fi+1, then that move is tabu. Now, if the move gives me an objective value which is different from fi+1, it is not tabu, because I know this is not the transition that I have performed before. So you can do something like that: the definition of tabu becomes, I'm in a particular state, this is the value of my objective function, I would perform this move, and I would get to this value of the objective function. These are the kinds of things you can start doing: strengthening the abstraction.
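A hedged sketch of this strengthened entry (the field names are mine): the tabu list stores quadruples, and a move is tabu only when the entire transition matches. In practice each entry would also expire after some tenure, as with the iteration counter earlier; that is omitted here for brevity.

```python
tabu = set()   # entries (a, b, f_now, f_next): the swap plus its transition

def is_tabu(a, b, f_now, f_next):
    """Tabu only if this exact swap produced this exact objective
    transition before; a different resulting objective is allowed."""
    return (a, b, f_now, f_next) in tabu

def record(a, b, f_now, f_next):
    tabu.add((a, b, f_now, f_next))

record(2, 5, 17, 15)                 # swapped slots 2 and 5, f went 17 -> 15
print(is_tabu(2, 5, 17, 15))         # -> True: the same transition
print(is_tabu(2, 5, 17, 14))         # -> False: different resulting value
```

The stronger the features stored per entry, the closer this gets to remembering actual states, at the cost of forbidding fewer moves.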

Now, the other thing we can do is use the linear neighborhood: the multistage heuristic that I talked to you about two lectures ago. The key idea is that you would do exactly the same thing in this particular case. You take a particular car that you want to swap (it has to have some violation), and then you take all the other slots that you can swap that car with. Essentially, the tabu list would be exactly the same: you would consider pairs (i, j), and you would make sure to select only the ones that are not tabu. In this particular case, the moves are not necessarily symmetrical anymore, so you update only one of them. So basically, putting tabu search on top of that multistage heuristic is very easy; it's essentially the same as in the quadratic neighborhood.

So let me talk a little bit about the Queens Problem. You remember the Queens Problem: what we did there was select a queen and then move it to the position where it reduces the violations the most. In this particular case, the neighborhood is taking a variable and assigning it a new value: taking a queen and moving it to a new position. So what should the tabu data structure be in this particular case? Well, essentially, the move is moving a particular queen to a new position. If you go to that state, what is the thing that you want to avoid in the next iteration? You want to avoid taking that queen and putting it back in its initial position. So essentially what you are trying to do here is prevent the variable of that queen from taking its old value. And so, once again, what you are going to store here are pairs (x, v), where x is a variable and v is a value for that variable. We store pairs, but they have a different meaning here: it basically says, I don't want this variable x to be assigned the value v for a while. And that's what you store inside the tabu list.

So this is essentially the search, once again, for the queens example. What we do is basically the min-conflict heuristic: you select a queen which appears in some violation, then you generate all the possible positions for that particular queen, and you select only the ones that are not tabu, which basically means the queen hasn't had that value for a while. Then, afterwards, you select the best move, and you apply this operation here, the one you see there, which basically means: I don't want to return to that value for that queen in the future. So you use the tabu data structure indexed by the queen you have selected and by the value of that queen at this point. Basically, you are saying: I don't want the queen to get the value that it currently has for a number of iterations. And the rest of the procedure is basically the same. So what you keep here is: I don't want that queen to get back to its original value.
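Here is a sketch of such a search for the n-queens problem. It is my own minimal version (min-conflicts plus a (variable, value) tabu structure, no aspiration criterion), not the lecture's exact code.

```python
import random

def queens_tabu(n, L=3, max_iters=2000, seed=1):
    """Min-conflicts search for n-queens with a (variable, value) tabu
    structure: tabu[(col, row)] is the first iteration at which column
    `col` may take row `row` again."""
    rng = random.Random(seed)
    queen = [rng.randrange(n) for _ in range(n)]   # queen[i] = row in column i
    tabu = {}

    def conflicts(i, v):               # conflicts of queen i if placed on row v
        return sum(v == queen[j] or abs(v - queen[j]) == abs(i - j)
                   for j in range(n) if j != i)

    for it in range(max_iters):
        violated = [i for i in range(n) if conflicts(i, queen[i]) > 0]
        if not violated:
            return queen               # no conflicts left: solved
        i = rng.choice(violated)
        rows = [v for v in range(n)    # candidate rows that are not tabu
                if v != queen[i] and tabu.get((i, v), 0) <= it]
        if not rows:
            continue                   # everything tabu: wait
        old = queen[i]
        queen[i] = min(rows, key=lambda v: conflicts(i, v))
        tabu[(i, old)] = it + L        # don't return to the old row for L steps
    return None                        # no solution found within the budget
```

A call like `queens_tabu(8)` usually returns a conflict-free placement well within the budget, but it is a randomized search and may return None; in that case you would restart with another seed or a larger iteration limit.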

So, once again, we have a lot of choices in this tabu search algorithm. What I've shown you here is essentially: I take a move, and the abstraction captures that move, making sure that we cannot do the reverse move, in a sense, so that we don't get back to the previous state. In the swap case, it was easy: the move was itself the reverse move. Here, I'm basically capturing the fact that I'm putting the reverse move in the tabu list.

Now, in practice, you may actually use abstractions which are even coarser. For instance, you may say: that variable, I'm just changing it, and I don't want to change it again for a bunch of iterations. So instead of keeping the value, the reverse move, you are basically forbidding all the moves that involve that variable. This is something you could do; there is nothing that prevents you from doing that. Here you have a much, much coarser approximation of the tabu list, but it may be very effective in diversifying the search. Essentially, if you make a variable tabu for a number of iterations, you cannot touch that particular variable for a while.

Now, this is, once again, the basics of keeping this tabu list. But as I told you, it is sometimes too weak and sometimes too strong. When it is too strong, the behavior that can happen is this: you are in a state, and you want to perform a move, and this move is so good that it would give the best solution that you have ever seen so far. But unfortunately the move is tabu, and you are like: that move is tabu, but I really want to go there. And of course we want to allow it. So in tabu search you have this kind of technique, which is called the aspiration criterion. What you do is, if the move is tabu but it is so good that you really want to take it, you basically override the tabu status. So you can implement some aspiration criterion that basically says: yes, it's tabu, but if it's that good, just take it. And that's what you see here.

And the reason you can do things like this is because, once again, what we keep is an abstraction of the tabu list. We may be preventing the search from going to states that we actually haven't visited so far, and there is nothing you can really do about that; otherwise, you would have to store the whole thing, right? So in this particular case, this is what we implement: the NotTabu function takes all the neighbors that we are considering, and you'll see that we want one of two conditions. Either they are not tabu, or they are tabu but the move is so good, better than the best solution so far, that you take it anyway. So, in a sense, the neighborhood is going to be all the moves which are NotTabu, plus the ones that are so good that you've got to take them.
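A sketch of that filter (minimization assumed; all the names here are illustrative, not the slides'):

```python
def allowed_moves(moves, is_tabu, move_value, best_so_far):
    """Filter a neighborhood: keep the non-tabu moves, plus any tabu move
    whose value would beat the best solution found so far (aspiration)."""
    return [m for m in moves
            if not is_tabu(m) or move_value(m) < best_so_far]

moves = ["a", "b", "c"]
is_tabu = lambda m: m in {"b", "c"}
value = {"a": 10, "b": 3, "c": 12}.get
print(allowed_moves(moves, is_tabu, value, best_so_far=5))  # -> ['a', 'b']
```

Move "b" is tabu, but its value 3 beats the best known value 5, so the aspiration criterion lets it through; "c" stays forbidden.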

So you get to take them. Basically, that's the idea behind the aspiration criterion: you take moves that are tabu, but they are so good that you really want to take them. Okay, so: two more techniques that I want to talk about, and these involve longer-term memory.

So what can happen in this tabu search is that you start somewhere and you get good solutions, and at some point you start going on this kind of long, long walk where nothing really improves. You improve, you degrade, you improve; nothing happens. But at some point earlier, you were in a really good state, and you thought you would find a very high-quality solution. So what you can do is what is called intensification: you keep a set of states that have been very good (you found them before; they are very good), and instead of going on this long random walk where nothing happens, you decide to go back to one of these states and restart the search from there. So that's what is called intensification. You keep very high-quality solutions, and when you are stuck in this long, long walk where nothing really improves over the best solutions you have found so far, you return to one of these states and restart the search from there.
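A small sketch of the bookkeeping for intensification, under my own naming: keep the k best solutions seen, and restart from the best of them when the search stalls.

```python
import heapq

class ElitePool:
    """Keep the k best (lowest-objective) solutions seen so far."""
    def __init__(self, k=5):
        self.k = k
        self.pool = []                    # heap of (-value, solution)

    def offer(self, solution, value):
        heapq.heappush(self.pool, (-value, solution))
        if len(self.pool) > self.k:
            heapq.heappop(self.pool)      # drop the worst of the elites

    def restart_point(self):
        # best stored solution = smallest objective = largest negated value
        return max(self.pool)[1]

pool = ElitePool(k=2)
pool.offer([1, 2], 9.0)
pool.offer([3, 4], 4.0)
pool.offer([5, 6], 7.0)                   # evicts the value-9.0 solution
print(pool.restart_point())               # -> [3, 4], the best elite
```

When the main loop detects a long stretch with no improvement, it would call `restart_point()` and continue the search from that state (typically after clearing or rewinding the tabu list).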

A very useful technique in practice as well. And then, diversification is just the opposite. In heuristics, in meta-heuristics, what is nice is that if you have a good idea, generally the inverse of the idea is also nice. So instead of intensifying, you want to diversify. What it basically says is: if you have not seen an improvement for a long time, you are walking in this random walk, not seeing anything interesting, then you can say: wow, I may be in a configuration where something is really wrong. So I'm going to take part of this solution and completely change it. And that's what is called diversification. For instance, in the car sequencing, you would take an arbitrary number of swaps and completely change your configuration. In the queens, you can start moving queens around in a completely random fashion. Or you can use more of the structure of the problem to have a more structure-based diversification. But once again, this is very useful: a lot of tabu search algorithms will have an intensification phase, and then they will also have a diversification phase, when the intensification is exhausted, in a sense.
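A matching sketch of diversification for a sequence-based problem; the number of random swaps is a parameter of my own choosing, not the lecture's.

```python
import random

def diversify(seq, swaps, rng):
    """Perturb a configuration by a number of random swaps, deliberately
    ignoring the objective: the point is to leave the current region."""
    seq = list(seq)
    n = len(seq)
    for _ in range(swaps):
        i, j = rng.randrange(n), rng.randrange(n)
        seq[i], seq[j] = seq[j], seq[i]
    return seq

rng = random.Random(42)
print(diversify([1, 2, 3, 4, 5], swaps=3, rng=rng))
```

The perturbed configuration is still a permutation of the original, so all the hard structure of the representation is preserved; only the placement is scrambled.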

And then there is this very interesting technique which we also used for the TTP; it is typically for problems where you have both very difficult constraints and a very difficult objective function. Typically, what you do is move the constraints into the objective function, to drive the search toward feasibility, and you also keep the original objective, of course, to find very good solutions. Strategic oscillation is a kind of scheme where you want to balance the time that you spend in the feasible and the infeasible regions. So you basically try to adjust the parameters. If you spend too much time in the infeasible region, you want to give more importance to the constraints that are violated, that is, increase the weight of these constraints, so that you go back and spend more time in the feasible region, in the hope of finding feasible solutions that improve the best so far. If you spend too much time in the feasible region, you may actually not escape some local minima, so you may decide to give more weight to the objective function: you may go into the infeasible region with the hope that at some point you get back to feasibility with a better objective function.
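A hedged sketch of one possible weight update in that spirit; the multiplicative factor and the time-fraction threshold are my own choices, not the lecture's.

```python
def adjust_weight(weight, time_infeasible, time_feasible,
                  factor=1.2, target=0.5):
    """One step of a strategic-oscillation style weight update.

    `weight` multiplies the constraint-violation term in the objective.
    Too long infeasible -> raise the weight (push toward feasibility);
    too long feasible -> lower it (allow infeasible excursions).
    """
    total = time_infeasible + time_feasible
    if total == 0:
        return weight
    if time_infeasible / total > target:
        return weight * factor        # punish violations more
    return weight / factor            # relax, explore the infeasible region

w = 1.0
w = adjust_weight(w, time_infeasible=8, time_feasible=2)  # mostly infeasible
print(round(w, 2))                                        # -> 1.2
w = adjust_weight(w, time_infeasible=1, time_feasible=9)  # mostly feasible
print(round(w, 2))                                        # -> 1.0
```

The two time counters would be reset periodically, so the weight oscillates around whatever balance the `target` fraction encodes.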

So, once again, these techniques are very useful in practice in some of the tabu search algorithms. A final remark: techniques like diversification, intensification, and strategic oscillation are actually very useful in many different settings. They are very useful for simulated annealing and for many other meta-heuristics, so use them in those settings as well, because they can make a big difference in practice.

So this is basically finishing the local search part of the class. What we're going to do in the next couple of lectures is look at mathematical programming, starting with linear programming and integer programming. A very, very different paradigm, but very exciting, too.

Thank you.