0:00

Okay, so this is Discrete Optimization: Constraint Programming, and this lecture is about global constraints. So, what we're going to do today is essentially give you the basic intuition on how to implement global constraints, and we'll consider two of them: knapsack and all-different.

Now, you have to understand that this is a huge area of research.

There are, you know, more than 100 global constraints which have been

defined so far. So this is only just touching upon the

topic, but you'll get a sense of what it means to implement global constraints in

a constraint programming system.

Okay, so the gold standard for pruning. This is the most important thing that we need to cover first. After the pruning, what we want to have is that for every value that is left in the domain of a variable x, we can find values in the domains of the other variables in the constraint such that there is a feasible solution for that constraint.

Okay? So in a sense, you know that every value of every variable is part of a solution to that constraint; that's the gold standard. Now, historically, there are many names for that particular gold standard, like arc consistency and domain consistency; don't worry about those. We know that we have this gold standard that we want to achieve. Okay? In a sense, that gold standard is basically capturing what it means to prune optimally if you only have the domains. Okay?

So if you have only the domains and the constraint, there is no way you can actually have a better pruning. So this is the notion of optimality that we have here. Of course, in practice it may not be possible to implement this gold standard in polynomial time. If that's the case, we have to relax the standard and, you know, do incomplete pruning, or we are left to settle for exponential time.

Okay? So, let me start with the binary knapsack, which is an example of what we have been seeing. Okay, so the constraint we have, you see it over there: we have a sum of the wk xk, where the xk are decision variables taking the values 0 or 1, and we want essentially this big sum to be between L and U, okay? And this is the example that I'm going to show you here, okay, and that we're going to use later on in the lecture: you see the sum over the variables x1, x2, and so on, and it has to be between 10 and 12. Okay.

And so, there are two things that a constraint has to do: feasibility checking and then pruning. So the first thing is to find out if this constraint is feasible, and then we have to see if we can remove some of the values from the domains of the variables, okay? Now, finding out if this is feasible is reasonably easy; I'm sure you can see it yourself.

You give a one to, you know, x2, x3, and x4, and you have a value of 12. This is between the bounds, so you are happy: this constraint is feasible,

okay? Now, the more interesting question here

is, can you actually remove some values from the domain of these

variables? In other words, is there a variable which

cannot take the value zero? Or is there a variable which cannot take

the value one? Okay?

So, think about it, okay? Can you see it?

There is something to see here, okay? So, if you don't see it, don't worry: the algorithm is going to show you exactly what can be pruned in this particular example.

Okay, so the basic algorithm that we're going to use for feasibility is based on dynamic programming, and it is going to run in pseudo-polynomial time. So if the numbers are small, it is basically polynomial time; if the numbers are big, you know, then the algorithm is going to take more time.

The pruning algorithm is interesting, because essentially what it is doing is using the dynamic programming table to prune the domains of the variables, okay? And so, the way to think about it is that we'll have two phases. The forward phase is going to compute essentially feasibility and a dependency graph: you know, how do we get to those values?

Okay? And then we'll have a backward phase, which basically removes some of the things that we don't need, and is going to show you what values we have to prune. It's going to be beautiful: you'll see the pictures, and then you'll see essentially what you can prune and what you cannot prune. So, in a sense, when we talk about the algorithm here, it will basically combine feasibility and pruning in one algorithm.

Okay? So this is the picture we're going to use. What we see here are the values from 0 to 12, okay? Those are all the values that we will consider. Okay?

And these big circles that you see there, basically we are going to see if we can

actually get there. Okay?

So, initially, what you have is the set of all the values that you can get without actually picking any of the variables, without looking at the variables. And then these are basically the values that you can reach by taking a value for x1, and then you will do the same for x2, x3, and x4. Okay?

And then essentially at the end, you will see which values you can actually reach.

Okay? So that's what we're going to do.

Okay? So when we start of course, you know, if

we don't use x1 or x2 or x3 or x4, the only value that we can get is zero.

Okay? So we start from 0, that is the only

value that we have at the beginning. Then we look at x1, and of course we can

decide to take x1 or not to take x1, right?

So if we take x1, okay, its weight is 2, so we can reach 2; and if we don't take x1, of course, we stay at 0. And that's what you see there: you see these two arrows. Okay? One is going up; that's the arrow when we pick x1, when x1 is equal to one, and we reach 2. The dashed arrow is when we don't take x1, in other words we still have zero. And you see that we use two different shapes for these arrows: one is, you know, a dashed line, the other one is a solid line. And you'll see why this is important later on. Okay?

Now, we do the same for x2, and we can reach four values now: zero, two, three, and five. And then we do the same thing for x3.

And then we reach many more values. And then we do the same for x4.

And these are all the values that we can reach with all the combinations of zero and one for x1, x2, x3, and x4; and that gives us the values which are basically between 0 and 12, except the value 1. We can reach everything else. Okay?

So, that's basically the end of the forward phase. At this point, we have all these paths, which correspond to taking, you know, some values for the variables and giving us a value for the sum at the end.
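As a quick sketch, this forward phase is a few lines of Python; the weights 2, 3, 4, and 5 are read off the lecture figure, so treat them as an assumption of this sketch:

```python
# Forward phase of the knapsack feasibility check.
# The weights [2, 3, 4, 5] are inferred from the lecture figure.
weights = [2, 3, 4, 5]

reachable = {0}          # with no variable picked, only the sum 0 is reachable
layers = [reachable]     # layers[i] = sums reachable using x1..xi
for w in weights:
    # each variable either stays (x = 0) or adds its weight (x = 1)
    reachable = reachable | {s + w for s in reachable}
    layers.append(reachable)

# Within 0..12, every value except 1 is reachable.
print(sorted(s for s in reachable if s <= 12))
```

After x1 and x2, `layers[2]` is {0, 2, 3, 5}, exactly the circles shown in the figure.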

Okay? Now the backward phase is going to start. And one of the things that we know is that for this constraint to be feasible, we have to be between essentially 10 and 12. Okay? So all the other values, which are at the bottom here, we really don't care about, and we can get rid of them.

Okay? And we can get rid of the arrows that were produced, you know, because they are basically useless for us: they are not giving us feasible solutions. Okay?

Now, we did that for x4; we do that for x3 now, and then we do that for x2, and now we have this beautiful graph. Okay? Essentially, these are describing the paths that you can take for the various variables, and therefore the combinations of zero and one for the various variables that arrive at a feasible solution. Okay? So we cleaned up the mess, you know, such that we have only the blue values for the transition states of the various variables.

And we have this beautiful thing here, where we see some dashed arrows and some, you know, solid arrows in this particular graph. Okay?

Now there is something beautiful to see. Okay? If you look at a variable like, let's say, x2 over there, you can see that there are two, you know, solid lines and one dashed line. What that basically means is that that variable can take the value zero and the value one. Okay? Because the dashed line is the value zero, and the solid lines are the value one. Okay?

Now, when you look at x4, that's very interesting. The only things you see for x4 are basically solid lines. And, you know, that basically means that x4 cannot take the value zero and produce a feasible solution. And that essentially tells us that the value 0 can be pruned from the domain of x4, and therefore x4 has to be one. If, for a particular variable, all the lines were dashed, you would know that the only value that variable can take is 0, and the value one could be pruned. Okay?

So now, at this point, you know everything. Okay? You know how to prune, and you know how to check feasibility. Feasibility is easy: you compute the forward phase and see if you get into the feasible region; if you can't get into the feasible region, you are not feasible. And then, at the end, you prune using the backward phase and these dependencies: you look at every variable, see what kinds of arrows you have, and that basically decides whether you can prune a value or not. Beautiful and simple, right?
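The whole forward/backward procedure can be sketched in Python. This is only a sketch: the weights 2, 3, 4, 5 and the bounds 10 and 12 are the ones from the lecture figure, and a real propagator would maintain the dependency graph incrementally rather than rebuilding it:

```python
def knapsack_propagate(weights, lo, hi):
    """Propagate sum(w[k] * x[k]) in [lo, hi] with x[k] in {0, 1}.

    Returns (feasible, domains), where domains[k] is the set of values
    that x[k] can still take in some feasible solution.
    """
    n = len(weights)
    # Forward phase: layers[i] holds the partial sums reachable with x[0..i-1].
    layers = [{0}]
    for w in weights:
        prev = layers[-1]
        layers.append({s + b * w for s in prev for b in (0, 1) if s + b * w <= hi})
    goal = {s for s in layers[-1] if lo <= s <= hi}
    if not goal:
        return False, [set() for _ in weights]
    # Backward phase: keep only states on a path to a feasible final sum;
    # surviving 0-transitions are the dashed arcs, 1-transitions the solid ones.
    alive = goal
    domains = [set() for _ in weights]
    for i in range(n - 1, -1, -1):
        w = weights[i]
        prev_alive = set()
        for s in layers[i]:
            if s in alive:            # x[i] = 0 keeps the sum at s
                domains[i].add(0)
                prev_alive.add(s)
            if s + w in alive:        # x[i] = 1 moves the sum to s + w
                domains[i].add(1)
                prev_alive.add(s)
        alive = prev_alive
    return True, domains

feasible, domains = knapsack_propagate([2, 3, 4, 5], 10, 12)
```

On the lecture's instance, this reports that the constraint is feasible and that x4 is forced to take the value 1, while the other three variables keep both values.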

10:14

And so, in a sense, for the all-different constraint we're going to have two kinds of nodes: the nodes for the variables and the nodes for the values. Okay? So, these are the nodes for the values; you see value 1, value 2, value 3, and so on. You see the set, and then essentially, if a variable can take a value, we are going to draw an edge between the variable and the value.

So, when you see x1 over there, x1 can take the value 1 and x1 can take the value 2. Okay? Now we look at variable x2. You know, x2 can take the values two and three, so we are going to put an edge from x2 to 2 and an edge from x2 to 3. And we do that for all the variables.

And then you get this beautiful graph, you know, where there is an edge between a variable and a value if, essentially, the value is in the domain of the variable. Okay? Note that so far we haven't done very much, except drawing a very nice picture. Okay? And once again, you know, with this representation, can you see that x4 cannot take the value 2?

Okay? Now, the first thing that we have to do is to find out if this constraint is feasible. Okay? That's always the first step. You know, we have the constraint, we have a set of domains for the variables: can we find a feasible solution? Okay?

So, what you see here, this is a feasible solution. Okay? You see that the variable x1 here is assigned the value 1, you see that the variable x2 is assigned the value 2, and so on. What is nice here is that you see these blue edges representing the solution.

We're going to do that over and over again in this lecture.

Okay? But essentially, what we want is that every one of the variables at the top, you know, is assigned a blue edge. Okay? That means that it has been assigned a value. And when you look at the values at the bottom, we want at most one of the blue edges to come to each of them. Because if there were two edges, that would basically mean that two variables are assigned the same value, and therefore they are not different, right?

Okay? So that's basically the idea.

Okay? So it's going to be feasible if you have edges going from the top to the bottom, with one, you know, leaving every variable and at most one arriving at every one of the values. Okay? So this is a feasible solution, for instance. Okay?

So what we did here was just create a bipartite graph. And recall, a bipartite graph is a graph with two types of vertices, where edges are only between vertices of different types. Okay? In the particular case of the all-different constraint, you have one type of vertices, the top ones, which are the variables, and then we have the bottom ones, which are the values; and there is an edge between a variable and a value if the value is in the domain of the variable. That's what we have done so far.

Okay? So, in a sense, you see the first vertex set in the bipartite graph, which is the variables, and you see the other vertex set, which is the values. Okay? And once again, you have an edge between a variable and a value if that value is in the domain of the variable. Okay?
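Building this bipartite graph from the domains is just an edge list; a tiny sketch with hypothetical domains (not the ones on the slide):

```python
# Hypothetical domains for three variables x1..x3.
domains = [{1, 2}, {2, 3}, {1, 3}]

# One vertex per variable, one per value, and an edge (i, v) whenever
# value v is in the domain of variable i.
edges = [(i, v) for i, dom in enumerate(domains) for v in dom]
```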

Now, what we just did when I showed you a solution was finding a matching. What is a matching? That's a basic concept in graph theory. If you have a graph, you know, with a set of vertices V and a set of edges E, okay, a matching is going to be a subset M of the edges E such that no two edges in M share a vertex. Okay? So if you select an edge, you can never select another edge which is basically using one of the vertices of that edge. Okay? And then a maximum matching is going to be a matching where you have selected the largest number of edges from E. Okay?

And so the constraint, of course, is that as soon as you select an edge, you select two vertices, and these vertices cannot be used by any other selected edges. So you have to select these edges carefully.
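The defining property, that no two selected edges share a vertex, is easy to check directly; a minimal sketch for the bipartite case:

```python
def is_matching(edges):
    """True iff no two edges share a vertex.

    edges: (variable, value) pairs; variables and values are two different
    vertex types, so they are tracked separately.
    """
    vars_seen, vals_seen = set(), set()
    for x, v in edges:
        if x in vars_seen or v in vals_seen:
            return False
        vars_seen.add(x)
        vals_seen.add(v)
    return True
```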

Okay? And so why is this interesting? Because, essentially, feasibility amounts to finding a maximum matching. If you find the maximum matching, there are two cases, right? One of the cases is that, you know, the maximum matching has a size which is equal to the number of variables. That basically means that all the

variables have been assigned. And by definition of the matching they

are assigned different values. Okay?

And you know at that point that the constraint is feasible.

If the maximum matching is smaller than the number of variables, okay, that basically means that there is no feasible solution. There are one or more variables that I cannot assign: I would have to assign them a value which is already taken by another variable.

Okay? So, in a sense, what I'm telling you is: feasibility here equals matching, okay? So now, basically, this is an example. Okay? You see, you know, once again, the variables at the top and the values at the bottom. You see these blue edges, beautiful. There is one edge leaving every variable and, you know, at most one edge arriving at every value, and you know that this constraint is feasible.

This is another example, okay? You see, essentially, once again, the variables and the values, but with a different set of domains here. Okay? And I can try to find the maximum matching, and this is what I get. At this point, what you see is that this matching has only five edges, and I have six variables. So you know that the constraint isn't feasible. Okay?

So when you look at the variable x3 over there, you can see that I can't match any of its edges: I cannot use any of these edges, because the values there are already taken. Okay? So essentially, the maximum matching here is of size five, I have six variables, and the constraint is not feasible.

Okay, so essentially, now I know that if I can find a maximum matching, I will be able to decide if the constraint is feasible or not. Okay? So how do I find this beautiful matching? Okay? I start with any matching, and then I'm going to try to improve it. Okay? And when no improvement is possible, that basically means that I have a maximum matching. So it's a simple improvement algorithm.

I start from any matching, maybe a matching with nothing in it, and then I improve it. And when I'm stuck and can't improve it, I know that I have a maximum matching. Of course, I haven't defined what it means to improve a matching, but I'm going to do that in the next slide.

Okay? So, here, this is a matching. Okay? You see that the variables x3, x4, x5, and x6 are basically matched. The variables x1 and x2 are not.

Okay? So what I want to do now is improve this

matching. Okay?

So what I can do, essentially, is replace the edge from x4 to 2 by the edges from x2 to 2 and from x4 to 4. Okay. If I do that, I remove one edge, and I have two new edges. Okay? And what I get is a better matching: now five of the variables are matched. Okay.

I can keep on improving this matching. Okay? I can essentially replace the edge assigning x3 to 1 by three new edges. Okay? Actually, I also have to remove the edge from x5 to 3, and I replace them by (x1, 1), (x3, 3), and (x5, 5). Okay? So I remove two edges and I replace them by three new ones. And then I get this beautiful matching, where essentially all the variables are matched.

I have a maximum matching, and I now know that this constraint is feasible. So what I basically did was always selecting one edge and replacing it by two, or selecting two edges and replacing them by three. And that's how I improve the matching. Okay?

Now, this is not precise enough, and I'm going to go into more detail now. Okay? So once again, what we're going to do is start with a matching and improve the matching. How do we improve a matching? Okay? What we will do is find a free vertex; in the particular case of all-different, that means finding a variable which is not assigned a value.

Okay? And now there are two cases. Okay? The first case is the case where I can find a value which is unmatched. That basically means that that value is free: no other variable is assigned to that value. If I can do that, obviously I can improve the matching. Okay? I can put that edge in the matching, and then start again, trying to continue improving that matching.

Now, the more interesting case is when that variable, you know, cannot be assigned to any free value at this point, because all its values are taken. Okay? So what I do in that case is that I take one of those values and I assign it to the particular variable. Now I take the variable which was currently assigned to that value, I remove the edge between them, and then I start again with that variable. Okay? And I try to find, you know, an assignment of that variable to a value, in the same way as I did for the original free variable. Okay?

Let me illustrate that. Okay? So we have, once again, this beautiful matching, with only some of the variables assigned. Okay? I'm trying to see how I can improve it. So, I select the variable x2, which is free, and the value 2. Okay?

So, this is what you see there. You see the variable x2 is free: it has no value assigned to it. I take the value 2, but the value 2, you can see, is assigned to the variable x4. Therefore, what I have to do is essentially remove the edge between x4 and 2.

Okay? And start again with the variable x4. Okay? So I remove the edge from x4 to 2, assign the value 2 to x2, and then I consider the variable x4 at this point. Okay?

And so I start again; you see the variable x4 now is free. Okay? And I look whether I can find a value for that particular variable, okay, such that I can improve the matching. Okay? What is nice in this particular case is that I can assign x4 to the value 4. Okay? And therefore I have improved the matching. So, what I did was essentially add two edges, one from x2 to 2 and one from x4 to 4, and remove only one edge, the edge from x4 to 2, which was in the matching before. So I removed one edge from the matching and added two new edges which were not in the matching. So I improved my matching. Okay?

So, that's the basic idea. So essentially what I'm using here is what is called an alternating path. Okay? If I have a matching, an alternating path from a free vertex x in the top set of vertices to a vertex v in the bottom set, the values, is going to be a path, you know, from this variable x to the value v, where the edges are going to alternate between being not in the matching and being in the matching.

So I'm going to start from x, take an edge which is not in the matching, go back to the top with an edge which is in the matching, go to the bottom with an edge which is not in the matching, and so on. Okay? Until I can assign that particular variable to a value which is not taken by any other variable.

Okay? Now, there is an odd number of edges here. Why? Because I start from the top and I end up at the bottom. Okay? To go from the top to the bottom, you know, I need one edge, and every time I go up and down, that is an even number of edges. So basically, I will have an odd number of edges.

And this is really important, right? Because if I have an odd number of edges, and I always start by taking an edge which is not in the matching, I know that I will increase, you know, the number of edges inside the matching by one. Okay? So I'm essentially improving the matching. Okay? I remove, let's say, k edges which were inside the matching, and I add k plus 1 edges which were not in the matching. Really good.

Okay? So these alternating paths, essentially, are allowing me to improve the matching. Okay? So, how do I find these alternating paths? You can see I'm basically moving step by step, refining this concept one step at a time.

And this is the final step, okay? So, I'm basically telling you what the algorithm is. You have to create a directed graph, okay, given a particular matching. Okay? Now, the edges of the matching are going to be oriented from the bottom to the top. Okay? So these edges are going to allow me to go to the top. And the edges which are not in the matching are going to be oriented from the top to the bottom; with these, basically, we'll be able to assign a value to a particular variable.

Okay? So this is essentially the illustration for the example that you've seen. You see these blue edges now, you know, they are oriented to the top. Okay? And the other edges, the edges which are not in the matching, are oriented from the top to the bottom. Now I have a directed graph, right?

And remember, what I want to do is to find a path from a free vertex on the top to a free vertex at the bottom. So when I'm going to go down, I have to take an edge which is going down, and those edges are basically not in the matching. And then I can only go up with edges which are in the matching. So I'm going down, up, down, up, you know: not in the matching, in the matching, not in the matching, in the matching.

And now I can follow the direction of these edges. Okay? So essentially, this is what I have done. Okay? I create a directed graph; I take the edges which are in the matching and re-orient them from the bottom to the top, okay, and the other edges are oriented from the top to the bottom. And now, essentially, an alternating path is starting at the top and following these oriented edges until it reaches a free vertex at the bottom.

Okay? I start at the top at a free vertex.

I'm following these directed edges until I reach a free vertex at the bottom, and

then I'm happy. Okay?

I can find an alternating path, and therefore I can improve my matching.

Okay? So look at this. Okay? This is, again, a beautiful oriented graph, and I'm basically highlighting the vertices which are free at this point.

You can see that x1 and x2 are free; at the level of the values, you have 4, 5, and 7, which are free. Okay?

So what do I do? I start from something which is free, and then I try to follow a path until I get to something which is free at the bottom. Okay? So this is one of these beautiful paths, right? You start from x2, you go to value 3, then you go up to variable x5, and then you go down to value 4.

Okay? So once again it's nice, you know: you start at the top, you go to the bottom with an edge which is not in the matching, take an edge which is in the matching to go back up, and then go down again. Okay? And then I can swap these edges, so that once more the edges which are in the matching are going up, and the edges which are not in the matching are going down.

Okay. And now I have one more variable which has been assigned, and I have only one free variable here, which is little x1, which is left alone. Okay.

So now what I do is I look at this guy, and I try to find another alternating path, for x1. And you're going to see it's a beautiful path, going up and down until it reaches the value 5.

Okay? Once again, you know, the edges which were going down are going to go up, the edges which were going up are going to go down, and this is my graph at the end. Okay? And now x1 has been assigned; all the variables at the top are assigned, and almost all the values are assigned. But we don't really care about the values, because we really care about the variables, and about being sure they are assigned different values.

Okay? And at this point of course, I have a

feasible solution and this is the matching, essentially that I have.

Okay? So this is essentially what I do for finding the alternating path. Okay? I'm basically going from the top to the bottom until I get to a free vertex at the bottom, which is a free value. And essentially, what I do is I do that iteratively. I take a variable, and I look for the path. If I can improve, great, I have a better matching. I take another one, and I do the same thing. Okay? How do I find this path? I can use traditional depth-first search or breadth-first search to do that, and I get a very low-order polynomial algorithm.

Okay? So this is beautiful, because now, essentially, I know how to find a maximum matching; it's a very simple thing. Okay? You just have to construct this bipartite graph, orient it in the right way, and search for the alternating paths inside this directed graph. So, to summarize, what we do is we build

this bipartite graph: we have a vertex set for the variables on top, okay, we have another set of vertices for the values at the bottom, and we have an edge between a variable and a value if that value is in the domain of the variable. Okay? Then, essentially, the feasibility: okay, the constraint is going to be feasible if we have a maximum matching whose size is the same as the number of variables. And to do that, the only thing that we have to do is find these alternating paths, which are going to improve the matching one step at a time until we get the maximum matching. And when we have it, we just compare its size to the number of variables. Okay?
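The whole feasibility check fits in a few lines of Python: the recursive `augment` is the alternating-path search (done depth-first here), and the domains at the end are hypothetical, not the ones from the slides:

```python
def max_matching(domains):
    """Maximum bipartite matching between variables and values.

    domains: list of sets; domains[i] is the set of values variable i can take.
    Returns (size, match), where match maps each value to the variable using it.
    """
    match = {}  # value -> index of the variable currently matched to it

    def augment(i, seen):
        # Depth-first search for an alternating path assigning variable i.
        for v in domains[i]:
            if v in seen:
                continue
            seen.add(v)
            # v is free, or the variable holding v can be moved elsewhere.
            if v not in match or augment(match[v], seen):
                match[v] = i
                return True
        return False

    size = sum(augment(i, set()) for i in range(len(domains)))
    return size, match

# Hypothetical domains: feasible iff every variable ends up matched.
doms = [{1, 2}, {2, 3}, {1, 3}, {2, 4}]
size, match = max_matching(doms)
feasible = (size == len(doms))
```

Each successful call to `augment` finds one odd-length alternating path and grows the matching by one edge, so the whole thing is a low-order polynomial.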

So we have done part one, which is essentially deciding feasibility. Okay? Now, we have to prune. What does it mean to prune? If a variable x can take a value v, we have to see if that particular assignment, you know, belongs to a solution. That means that we can find values in the domains of the other variables, okay, such that, you know, all the variables are given different values. Okay? Now, if we translate that into the context of the matching, what it means is that this edge belongs to some maximum matching. Okay?

So there is a very easy and very naive algorithm to do that. You basically force this edge into the matching. Okay? If we are looking at variable x and value v, we basically remove all the other values from the domain of variable x; so if w is another value, we remove that edge, and there is only one edge which is left for x. So if we find a maximum matching, you know, x will be forced to take this value v. And then we look if we can, you know, find a maximum matching whose size is essentially the number of variables. If we can do that, we are fine. Now, the problem with this, of course, is that we have to take every one of these edges and then find a maximum matching; take another edge, find another matching; so it's really tiring. It takes a lot of time. Okay?
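That naive scheme can be sketched directly; the inner matcher is the same alternating-path search as before, and the domains in the comment are hypothetical:

```python
def naive_prune(domains):
    """Naive all-different pruning: for every variable/value pair, force the
    edge and re-check whether a matching covering all variables still exists.
    """
    def covers_all(doms):
        match = {}
        def augment(i, seen):
            for v in doms[i]:
                if v not in seen:
                    seen.add(v)
                    if v not in match or augment(match[v], seen):
                        match[v] = i
                        return True
            return False
        return all(augment(i, set()) for i in range(len(doms)))

    pruned = []
    for i, dom in enumerate(domains):
        keep = set()
        for v in dom:
            # force x_i = v and re-run the whole feasibility check
            forced = [d if j != i else {v} for j, d in enumerate(domains)]
            if covers_all(forced):
                keep.add(v)
        pruned.append(keep)
    return pruned
```

For instance, with hypothetical domains {1, 2}, {1, 2}, {1, 2, 3}, the first two variables use up 1 and 2 between them, so the third is pruned down to {3}. The cost is one matching computation per edge, which is exactly the expense the lecture is complaining about.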

So we can do better. And there is this basic, crazy property, found by Berge in 1970. And what is interesting here is that this property seems to come out of nowhere. Okay?

But I'm going to read it to you and tell you why it's relevant. Okay? Basically, this property is saying that an edge belongs to some, but not all, okay, some, not all, maximum matchings if, given a maximum matching, it belongs to one of two things. Okay? An even alternating cycle, okay, that's an alternating cycle whose length is even. Okay? Or an even alternating path which starts at a free vertex. Okay?

So it's crazy, right? This property is crazy.

First, you know, we are looking at some but not all the matchings, and then we have these two, you know, crazy conditions.

But why is it so relevant to us? Well, you know, the edges in the matching, we really don't care about those, right? Because we know they are in the matching, so we know we can't remove them from the domains of the variables. So what we are really looking at here are the edges which are not in the matching. And we know that, in the best case, they belong to some matchings, but they won't belong to all.

And this is really what this property is about.

So this is like magic, right? This is just the property that we need. The only thing that we have to do, for all these edges which are not in the matching, is to look at two things: do they belong to an even alternating cycle, or do they belong to an even alternating path which starts at a free vertex? Okay? So essentially, this property is telling us exactly what we want to do, and it basically boils down to looking at these two things. Okay? Which is much easier, essentially, than all of the things we had to do before.

So what are the free vertices in this particular case? Where are they: are they the variables or the values? Well, we know that the constraint is feasible, so all the variables are matched. So the only free vertices, you know, are really the values. And that's where we're going to start, essentially, when we search for these even alternating paths.

Okay? So essentially, what we're going to do is the same thing as for feasibility, except that it's the opposite. Okay? We create a directed graph, but this time around the edges are oriented the other way. Why? Because before, we were looking to assign the variables. Here, we are going to look at alternating paths that are starting at the free vertices, and these guys are the values now.

Okay? So we start at the bottom, and then we can go up if the edges don't belong to the matching. And the edges that belong to the matching go from the top to the bottom.

Okay? So this is, once again, an illustration. It's essentially the same thing as we have seen before, except the edges have the other direction. So you see this guy here, this seven: this is a free value. Okay? And then it can go up, you know, to some of the variables, and the edges in the matching, the blue edges, are going from the top down to the bottom.
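Putting the two conditions together gives the pruning algorithm (this is essentially Régin's filtering for all-different). A sketch, assuming a maximum matching covering every variable has already been computed, with one reachability pass from the free values for the alternating paths and one strongly-connected-components pass for the alternating cycles; the domains and matching fed to it would come from the feasibility step:

```python
def alldiff_prune(domains, match):
    """Prune an all-different constraint using Berge's property.

    An edge not in the maximum matching survives only if it lies on an even
    alternating path starting at a free value, or on an even alternating cycle.
    domains: list of sets; match: dict value -> variable index, a maximum
    matching that covers every variable (the constraint is feasible).
    """
    n = len(domains)
    values = set().union(*domains)
    matched_value = {i: v for v, i in match.items()}

    def succ(node):
        kind, a = node
        if kind == 'val':
            # unmatched edges are oriented from the values up to the variables
            return [('var', i) for i in range(n)
                    if a in domains[i] and match.get(a) != i]
        # matched edges are oriented from the variables down to the values
        return [('val', matched_value[a])] if a in matched_value else []

    # 1) Nodes reachable from the free values: edges traversed on such a walk
    #    lie on an alternating path starting at a free vertex.
    reached = {('val', v) for v in values if v not in match}
    stack = list(reached)
    while stack:
        for nxt in succ(stack.pop()):
            if nxt not in reached:
                reached.add(nxt)
                stack.append(nxt)

    # 2) Strongly connected components (Tarjan): an edge inside an SCC lies
    #    on an even alternating cycle.
    nodes = [('val', v) for v in values] + [('var', i) for i in range(n)]
    index, low, comp = {}, {}, {}
    tstack, on_stack, counter = [], set(), [0]

    def connect(v):
        index[v] = low[v] = counter[0]
        counter[0] += 1
        tstack.append(v)
        on_stack.add(v)
        for w in succ(v):
            if w not in index:
                connect(w)
                low[v] = min(low[v], low[w])
            elif w in on_stack:
                low[v] = min(low[v], index[w])
        if low[v] == index[v]:
            while True:
                w = tstack.pop()
                on_stack.discard(w)
                comp[w] = v
                if w == v:
                    break

    for v in nodes:
        if v not in index:
            connect(v)

    # Keep a value if its edge is matched, reachable from a free value,
    # or inside an alternating cycle; prune everything else.
    return [{v for v in dom
             if match.get(v) == i
             or ('val', v) in reached
             or comp[('val', v)] == comp[('var', i)]}
            for i, dom in enumerate(domains)]
```

An unmatched edge survives only if its value node is reachable from a free value (even alternating path) or both endpoints sit in the same strongly connected component (even alternating cycle); everything else is pruned, and matched edges always stay.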