0:00

Okay. Discrete optimization.

Welcome back, guys.

I'm going to talk about something phenomenal today: Large Neighborhood Search.

It's a technique which is amazing, truly amazing. We use it all of the time.

Okay? So what is it?

So basically what I'm going to show you is that this is a hybridization of local search and constraint programming, or local search and MIP.

Okay.

It's basically combining these two techniques, although they have different kinds of functionalities.

But when you put them together, you get something which is way larger than the sum of the parts.

Okay?

So let me give you the first iteration using CP; that's where it originated from.

It's also where it's mostly used these days.

So you combine CP and local search.

And you start with the first step, which is finding a feasible solution using CP.

You know by now that CP is very good at finding feasible solutions.

It goes through the search to find that. Okay?

Then the second step is that you select a neighborhood.

You take the solution that you got from CP and you select a neighborhood, and what you're going to see is that this neighborhood is going to be large, okay?

Not a tiny neighborhood where you, you know, swap things.

And then the third step: you use CP to actually optimize that neighborhood, finding the best neighbor in that neighborhood.

Okay.

And then the last part is basically repeating these steps forever.

Okay, as long as you have time, you will do that to improve the quality of the solutions that you have.

Okay, so this is large neighborhood search: a combination of constraint programming and local search.

But the local move is a big move: you define a big search space and then you use constraint programming to explore it, okay?
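The loop just described (find a feasible solution, select a large neighborhood, optimize it, repeat) can be sketched in plain Python. This is a minimal, solver-agnostic skeleton, not from the lecture itself: the function names and the iteration budget are illustrative, and the solver calls are passed in as plain functions.

```python
def large_neighborhood_search(find_feasible, select_neighborhood,
                              optimize_neighborhood, cost, max_iters):
    """Generic LNS skeleton: the steps from the lecture in a loop.

    `find_feasible` plays the role of the CP (or MIP) solver producing a
    first feasible solution; `optimize_neighborhood` is the same solver
    restricted to the selected (large) neighborhood.
    """
    best = find_feasible()                        # step 1: feasible solution
    for _ in range(max_iters):                    # repeat as long as you have time
        neighborhood = select_neighborhood(best)  # step 2: pick a large neighborhood
        candidate = optimize_neighborhood(best, neighborhood)  # step 3: best neighbor
        if cost(candidate) < cost(best):          # keep only improving solutions
            best = candidate
    return best
```

In a real system `max_iters` would be a wall-clock budget and the two solver callbacks would invoke an actual CP or MIP engine.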

What is the neighborhood, you're going to tell me?

What is this neighborhood?

What you're going to do, for instance (this is only one example, but it is useful for a variety of applications), is look at the solution and fix a set of variables.

You believe that these variables are nice.

Okay?

So you keep them.

And the neighborhood will be finding values for the remaining variables, the ones that you are not fixing.

Okay?

So in a sense, you find a solution, fix some of the variables, leave the other ones free, and find better values for these other variables such that you improve the quality of the solution that you have found so far.

And then repeat.

Find another set of variables that you fix.

Okay?

And reoptimize that particular neighborhood.

Okay? And you iterate these two steps.

Okay?
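The fix-and-reoptimize idea can be made concrete on a toy problem. In this sketch an exhaustive search over the free variables stands in for the CP solver, and all names (`optimize_free_variables`, `lns_fix_and_optimize`) are invented for illustration, not taken from any library.

```python
import itertools
import random

def optimize_free_variables(solution, free, domains, cost):
    """One large neighborhood: keep every fixed variable at its current
    value and exhaustively search (standing in for CP) over the free ones."""
    best = list(solution)
    for values in itertools.product(*(domains[i] for i in free)):
        candidate = list(solution)
        for i, v in zip(free, values):
            candidate[i] = v
        if cost(candidate) < cost(best):
            best = candidate
    return best

def lns_fix_and_optimize(solution, domains, cost, n_free, rounds, seed=0):
    """Repeatedly free a random subset of variables and reoptimize them."""
    rng = random.Random(seed)
    for _ in range(rounds):
        free = rng.sample(range(len(solution)), n_free)
        solution = optimize_free_variables(solution, free, domains, cost)
    return solution
```

Here the subsets are chosen at random; as discussed below, a structure-based choice of which variables to fix usually works better.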

Now, how do you choose the subset of variables?

Typically, this is problem specific.

You will look at the structure.

Okay?

And you will extract something from the structure of the problem that tells you, ooh, I want to keep that particular part fixed and then explore the rest.

I'll show you an example later on that is very intuitive.

Typically these are properties like, you know, spatial locations or temporal notions that are going to be very useful for finding these neighborhoods.

Okay?

So typically you exploit the problem structure, although in some particular cases you can have a completely random neighborhood that's going to behave very well.

Okay? It really depends.

Â 2:53

So why do we use LNS? For two reasons.

First, CP is really good at finding feasible solutions and at optimizing very small combinatorial spaces.

Okay?

So in a sense, you exploit these two strengths of CP for finding a high-quality solution, and you exploit local search for scalability.

Okay?

So in a sense, you exploited two facets of these two techniques and put them together.

Of course, you can generalize that to MIP.

Okay? You can find a feasible solution using MIP and you can explore the neighborhood using MIP.

That works as well.

Okay?

Essentially, you can plug in any kind of method instead of CP or instead of MIP; large neighborhood search is basically technology independent.

It's the basic idea of finding a feasible solution, defining the neighborhood, finding a good neighbor in that neighborhood, and repeating the process.

Okay?

So let me illustrate that with a very interesting example.

Okay? So this is from industrial engineering.

It's a problem with a robot.

Okay?

So a robot has to visit a set of locations and do work at every one of these locations.

Okay?

It has a service time that it has to spend at every location.

That's the time that it takes to actually perform the task at that location.

It also has a time window for when it can apply this service at that particular location.

And there are also distances between the various locations.

So: time windows, service times, and distances for how you can travel between the locations. And that distance is actually asymmetric, which complicates the problem tremendously.

And so what you want to do is find a path for the robot that visits all the locations; that's called a Hamiltonian path.

Okay?

And we have to make sure that we satisfy all the time window constraints, on when we can actually do the service at the various locations.

And we also minimize the total travel distance.

These two requirements actually have a lot of tension between them.

And that's what makes the problem very difficult.
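To pin down the semantics just described, here is a small evaluator for one visiting order, in plain Python. It is only a sketch under stated assumptions (the robot may wait if it arrives before a window opens, and service must start within the window); the data layout is my own, not the lecture's model.

```python
def evaluate_path(order, travel, service, window):
    """Check one visiting order for the robot and compute its travel cost.

    `travel[i][j]` is the (possibly asymmetric) travel time from i to j,
    `service[i]` the service duration at i, and `window[i] = (lo, hi)`
    the interval in which service at location i must start.
    """
    t = 0.0
    total_travel = 0.0
    for k, loc in enumerate(order):
        if k > 0:
            d = travel[order[k - 1]][loc]
            total_travel += d
            t += d
        lo, hi = window[loc]
        t = max(t, lo)          # arrived early: wait for the window to open
        if t > hi:              # arrived too late: time window violated
            return False, total_travel
        t += service[loc]       # perform the task at this location
    return True, total_travel
```

The objective is then to find a feasible `order` minimizing `total_travel`, which is exactly where the tension between the time windows and the travel distance shows up.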

So this is how you can model it using constraint-based scheduling.

So I don't expect you to understand all the details, but let me just give you a high-level overview here.

You model every task that the robot has to perform as an activity.

And its duration is the service time, which is the time that the task takes.

We model the robot itself as a unary resource.

It's the vehicle here, okay?

And we have a transition time matrix here which basically tells us how much time it takes to move from one location to another one.

And so the optimization is really minimizing the sum of the transition times over all the locations, subject to the time window constraints and the fact that the robot can only be at one particular place at a time, okay?

So the key aspect that I want to show you here is that this is a very, very simple model, okay?

So it doesn't require a lot of sophistication.

It's a very natural modeling of the problem.

Now, how do we apply large neighborhood search to this?

Well, let's look at what the schedule looks like.

So every one of these lines here is basically the time window of a particular location, a task that we have to achieve, and the blue stuff is essentially when the robot is there performing the particular service.

So.

This is one particular solution; the only thing that you don't see here is the spatial information.

And so what I'm going to show you is basically how large neighborhood search works.

So look at this. What we're going to do is to say, wow, I'm going to fix everything outside this box.

So this part of the schedule and that part of the schedule.

Assume that this is essentially the path of the robot.

And what I will do is basically say, ooh, I want to reoptimize that subsequence: find the best subsequence from this particular task to that particular task and reoptimize that.

Okay?

Then I get a better solution.

And then I move that window and do that at another part of the schedule and keep repeating that.

That's what large neighborhood search is about.

Take a sub-part of the problem and reoptimize it, keeping the rest completely fixed.

Okay?

So this is very structure based.

You are basically looking at which tasks are consecutive inside a schedule.

You can also do something completely random.

Pick a random set of tasks, fix the rest, and reoptimize the tasks that you have selected randomly.

That's also large neighborhood search.

Okay?

So you can imagine a variety of neighborhoods, you can reoptimize with every one of them, and you can alternate between them.

That's what large neighborhood search is about.

Okay.

So let me give you a sense of how beneficial this can be for some particular problems.

So, as I told you, this is an industrial engineering problem, a real application.

Now, it was solved initially with a branch-and-cut algorithm, a very sophisticated algorithm, by the Zuse team in Berlin, one of the top groups in mixed integer programming. They wrote 50 pages of algorithms and theorems and proved all kinds of polyhedral results for solving this, and these are the best solutions they found.

And, for instance, you know, this 48 there is not an optimal solution.

They couldn't prove optimality.

These problems are so hard that they couldn't prove optimality at that time.

Okay?

So what we did was apply the very simple model that we have seen, and put large neighborhood search on top of it.

And we were looking at how quickly we could find solutions that were beating these numbers.

And essentially, in five minutes of CPU time, and this is like a decade ago, right?

So we could essentially improve almost all the problems in five minutes and find better solutions using the very simple model and large neighborhood search.

And to actually improve on the larger instances we needed about, you know, ten minutes.

Okay?

And so this is the kind of value of large neighborhood search: it's a technique that is extremely good for finding high-quality solutions quickly, and this is evidence of that, a simple case study of that, okay?

So, use it.

It's really nice for a lot of problems if you want to scale, if you want to find high-quality solutions quickly, okay?

So, use it, as I said.

This is a very interesting technique, and, you know, I'll see you next time.

Thank you very much.
