In the Eastern Sea, there were

two gigantic mulberry trees housing 10 three-legged golden birds.

These birds were capable of transforming into the sun,

and originally took turns to appear in the sky daily.

However, as a consequence of the broken sky,

the schedule of these golden birds was disrupted,

and they appeared as 10 suns together.

The heat from this created unprecedented drought and all plants were dying.

A young man called Hou Yi wanted to save the world from this drought,

and he decided to shoot down nine of the 10 suns.

Having heard about this,

Zhuge Liang went to visit Hou Yi to help.

Hou Yi attempted to shoot the suns down,

but he found the golden birds were too swift for him.

Later on, he was told that to ease the drought temporarily,

he could shoot and crack a huge rock to block the hidden spring behind it.

This would also enable him to practice his shooting skills needed to destroy the suns.

To crack the rock, 30 groups of arrows of 30 different types

needed to be shot upon the rock in a balanced manner.

Treating the rock as a 30 by 30 matrix,

the arrows shot on each row or column all needed to be different.

Luckily, there was evidence of previous shooting attempts already left on the rock,

and Hou Yi could utilize this by shooting the suitable arrows in the empty spots.

However, he still needed help from Zhuge Liang to determine which arrows to shoot.

Zhuge Liang summoned the dragon of dimensions for assistance from the professors.

So, Hou Yi has a problem,

there are now 10 suns in the sky because of the disruption of the heavens,

and he's going to need to shoot down nine of them.

In order to do that, he first wants to do some archery training.

And during his archery training,

he is also going to help with some other problems caused by those 10 suns in the sky.

So, he has 30 different kinds of arrows and 30 of each arrow.

And he has this rock in front of him,

which is basically a 30 by 30 grid.

Now, in order to crack the rock,

he needs to satisfy certain constraints.

He needs that one of each of the 30 kinds of arrows appears in every column,

and one of each of the 30 different kinds of arrows appears in every row.

And if he manages to fill in

the entire thing so that all those constraints are satisfied,

then the magical properties of the arrows will crack the rock,

and that will release the water that's trapped behind

the rock which is very useful at the moment because we're in drought,

because there are ten suns in the sky.

Unfortunately, some previous not-so-talented archers

have attempted to solve this problem.

And there's a number of arrows stuck in the rock,

but they got to the point where they couldn't work out

how to fill in the rest of the arrows in order to crack the rock.

So, here's the archery problem,

we have 30 groups of 30 different colours,

and a partially filled 30 by 30 matrix.

And the constraints are that we need all different in every row,

and all different in every column,

and the idea is to complete this 30 by 30 matrix.

So, this is called the Quasigroup Completion Problem.

So, we have our different colours that are in the initial rock, right?

And we use zero for an empty cell,

or a value from one to N if there's already an arrow there.

And the decisions we need to make, of course, are which coloured arrow goes into each cell.

And we've also got a flat, one-dimensional version of that array of cells.

So, let's look at the constraints.

So, the first constraint just says,

if the start value has a colour bigger than zero,

so it actually has an arrow in it,

then that cell is already fixed to that start value.

So, we don't actually have to assign a value to that cell at all. It's already fixed.

But for every row,

we need that all the colours appearing in that row are different.

And for every column, all the colours appearing in that column are different,

and we're just looking for a solution so this is a solve-satisfy problem.
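Putting the pieces just described together, the model can be sketched like this in MiniZinc (the names `n`, `start` and `a` are assumptions here, not necessarily the lecture's identifiers):

```minizinc
include "alldifferent.mzn";

int: n = 30;                                % grid size
array[1..n,1..n] of int: start;             % 0 = empty, 1..n = arrow already there

array[1..n,1..n] of var 1..n: a;            % which coloured arrow goes in each cell

% cells that already hold an arrow are fixed to that start value
constraint forall(i,j in 1..n where start[i,j] > 0)(a[i,j] = start[i,j]);

% all colours different in every row and in every column
constraint forall(i in 1..n)(alldifferent([a[i,j] | j in 1..n]));
constraint forall(j in 1..n)(alldifferent([a[i,j] | i in 1..n]));

solve satisfy;
```

Since we are just looking for any completion, the solve item is a plain `satisfy`.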

So, searching can solve the problem,

but even with MiniZinc on a problem this size,

using input variable order and minimum value first,

we will not find a solution even after 30 minutes.

So, this is the well-known Quasigroup Completion Problem,

and basically the problem with this problem is that,

whatever search strategy you adopt,

there's what's called heavy-tailed behaviour.

So, let's talk briefly about heavy tails.

So, heavy-tailed behaviour.

In a normal distribution, like the standard distribution,

there's a finite mean and variance.

So basically, we can find a mean, and we can find a variance.

But if we look at a heavy-tailed distribution like this,

we can see that it tails off very slowly, and basically,

the variance gets to be infinite, because the tails keep on growing in both directions

and never reach zero fast enough, so we actually end up with an infinite variance.

So, what that means in practice,

if we look at solving QCPs.

Here, we're looking at 11 by 11 grids with

30 percent of the values filled in by arrows already.

Then, actually most of these are fairly easy to solve.

So, if we basically count these grids,

the percentage of these grids that are solved within a number of search steps,

we find that 75 percent of them are solved with fewer than 30 search steps.

So, we very quickly find a solution for many of these 11 by 11 problems.

But, if we look further along this graph,

we'll see that even after 100,000 search steps,

five percent of them are still unsolved.

So, this is basically our heavy tail.

This curve is not approaching 100 percent very quickly.

It's still far away from 100 percent, even at 100,000 steps.

And notice this is a logarithmic scale here.

So, we're not getting close to completing all of them.

So, what that means if we are looking at 11 by 11 QCPs,

75 percent of them are solved in 30 backtracks,

and only 95 percent of them are solved in 100,000 backtracks.

So, if we have done 50 backtracks,

we should really think to ourselves:

well, it doesn't look like we're in one of the easy cases,

and most of the cases are easy.

In three quarters of the cases we'd have finished within 30 backtracks.

We don't look like we're in one of those,

so maybe we should just restart,

and try a different search.

And hope with a different search,

we can be in an easy case again.

Because, we don't want to keep going if we're in one of these really bad cases,

where it's going to take more than 100,000 search steps.

So, the point is,

in a backtracking search,

if you make an early wrong decision that really matters.

You could have pushed yourself into

a place where there are no solutions, but it's going to

take you a vast amount of search to prove that there are no solutions in that tree.

And so, restarting is a way of conquering this heavy tail behaviour.

So, once we see that we've gone too far,

we'll just restart and try again,

and hope we land in one of these lucky cases where we solve it very quickly.

Because the main thing is to avoid being stuck in one of

these bad cases where it can take us almost forever to find a solution.

So, it's a very simple idea.

We basically restart from the beginning

every time a certain amount of resources are consumed.

And so, we usually think of this in terms of

search steps but it can be a number of nodes,

or can be time, or can be backtracks,

any kind of measure of how much search we've done.

And of course, if we're going to do this we better search differently in each run.

If we just use input_order and indomain_min, or any fixed search strategy,

and we restart and do the same search,

we're going to drive ourselves to exactly the same part of the search tree,

so that's not going to help us.

So, we need some kind of randomisation.

We need something to change in the search,

either randomisation, or learning something from previous runs

which changes the later runs.

So, restarts are very heavily used in default search strategies inside

CP solvers, because they help us get out of these bad choices.

Remember, a bad choice at the top can require

an exponential amount of search to prove that there's no solution there.

So, restarts avoid this by basically throwing away those bad choices, and restarting,

hoping that we don't get a similar set of bad choices at the top.

So, there are different ways we can do restart policy.

We can do a constant restart.

So, every time we use up L resources,

we just restart and go back to the top.

That can be dangerous because maybe,

in order to find the good solutions,

we actually just need more than L resources.

So, a more typical kind of restart strategy is, a geometric restart.

So, after we use L resources,

we restart, but then

we set the limit to be alpha times that L. So,

basically every time we restart,

we have more resources to use up,

and so every restart will get more and more resources before we try the next restart.

There's a very powerful restart approach called the Luby sequence.

So, the Luby sequence is the sequence here.

So, we start with one.

Then, at each step, we write down the sequence so far twice,

and add on the next power of two, so that's two.

And you can see here, next we write that whole sequence twice,

and then we add on the next power of two,

which is four.

And then here, we've written that last sequence twice,

and then we add the next power of two, which is eight.

And it's been shown that this is, in a sense,

universally optimal, at least for randomised algorithms.

So, this is a very powerful restart sequence,

because you can see there are lots and lots of short restarts,

hoping that you get lucky in those, and they happen much more often than the long restarts.

And it's been proved that this policy is not bettered by more

than a constant factor by any other universal restart policy.
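Written as a recurrence (a standard formulation, not shown on the slide), the i-th Luby value is:

```latex
t_i =
\begin{cases}
  2^{k-1} & \text{if } i = 2^{k}-1 \text{ for some } k \geq 1,\\
  t_{\,i-2^{k-1}+1} & \text{if } 2^{k-1} \leq i < 2^{k}-1,
\end{cases}
\qquad
\text{giving } 1,\,1,\,2,\,1,\,1,\,2,\,4,\,1,\,1,\,2,\,4,\,8,\,\dots
```

The restart limit before run i is then t_i times some base number of resources.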

So, just a sidetrack here,

if we're going to solve QCPs in MiniZinc,

one thing we should be aware of is that,

when we write down an alldifferent constraint for a solver,

the solver chooses how to do the propagation for that constraint.

And there are two possibilities it could use for alldifferent:

either bounds or domain propagation.

And by default, Gecode picks bounds propagation because that's often the fastest.

It's faster for getting most of the propagation we need,

but for these QCP problems,

we really need domain propagation.

Now, we can tell the solver to use domain propagation

by adding the annotation ::domain on the alldifferent constraint.

And if we do that, then Gecode will be able to solve some QCPs.

In this case, if we just use bounds propagation,

it's going to struggle to solve any kind of QCPs.
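In the model, the annotation goes directly on the constraint expression, for example (assuming the cell variables are in a two-dimensional array `a` of size `n` by `n`, as sketched earlier):

```minizinc
% ask the solver for domain (rather than bounds) propagation
% on each row's alldifferent
constraint forall(i in 1..n)(
  alldifferent([a[i,j] | j in 1..n]) :: domain
);
```

The same annotation would be added to the column constraints as well.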

Alright. So, Gecode supports restart-based search,

at present via command-line solver flags.

So, these are the important command-line solver flags

we can use to control restarts in Gecode.

So, there's -restart, and then

we give the type of the restart.

By default, it's none,

so we're not doing restarts.

We can have constant restarts,

linear restarts where the limit just increases linearly,

Luby restarts using the Luby sequence, or geometric. And for some of these restarts,

we're going to need some more information.

So, -restart-base: this is the base for the geometric restart.

So basically, every time we do

a geometric restart, we multiply the resource limit by 1.5,

so we get more resources every time.

And then -restart-scale: this is basically the initial limit,

used when we do the first restart.

So basically, we're going to do the first restart after 250 backtracks in this case.

And then, depending on which kind of restart type we're doing,

the next restart limit will change.
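Putting those flags together, an invocation might look like this (a sketch; the file names are made up here, and the exact way solver flags are passed depends on your MiniZinc/Gecode installation):

```
# assumed file names; pass the restart flags through to Gecode
minizinc --solver Gecode qcp.mzn qcp30.dzn \
  --fzn-flags "-restart luby -restart-scale 250"
```

The same flag strings can be pasted into the IDE's solver flags box, as described below.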

So, how do we set these solver flags?

On the command line, we can add them straightforwardly.

In the IDE, if we go to the configuration tab,

we can go down to this solver flags box here,

and we can put in things to make restarts happen.

So, we can control this from the IDE as well.

All right, so lets have a look at solving some of these QCP instances.

We've got 40 instances,

they're 30 by 30 grids with different kinds of partial completion.

So, they're big. And let's try this basic random search.

So basically, we're going to fill in the cells.

We're going to pick cells by first fail:

we pick a cell which has the fewest possible values.

That's a fairly generic dynamic search strategy,

and we're going to fill the cell with a random value.

And this random value choice means that whenever we restart,

we're almost certainly going to pick a different way of doing it.
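As a MiniZinc search annotation, that strategy might be written as follows (assuming the flat, one-dimensional array of cell variables is called `a1d`):

```minizinc
% smallest-domain variable first, random value selection
solve :: int_search(a1d, first_fail, indomain_random)
      satisfy;
```

The `indomain_random` value choice is what makes each restarted run explore the tree differently.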

So, if we compare Luby restarts versus no restarts,

we find that no restarts is better on nine of

40 instances and Luby restarts is better on 27 of the 40 instances.

And actually, both of them timeout on four of the 40 instances.

So clearly, restart is much,

much stronger than just our random search with no restart.

But we can do better than that, because failure-based search

is also a powerful way of improving search in constraint programming solvers.

So, we're talking about one particular failure-based search which is

called domain divided by weighted degree.

So, the degree is the number of constraints the variable occurs

in, and this dom_w_deg says

we choose the variable which has the minimum current domain size

(of course, the smaller the current domain size,

the better a choice it is to make, because it's making less commitment)

divided by the sum of the failure counts of its constraints.

So basically that says,

the more times this variable is involved in failure,

the more important it is to choose it.

So basically, this is saying that a variable that's involved in a lot of failures,

is hard to find a solution for,

so we should concentrate on those variables first.

Alright, so each variable gets this fail count.

So, initially the fail count is just the number of constraints

the variable appears in and that's why it's called degree.

And then, each time a constraint detects failure,

we increment the fail count for all the variables involved.

So basically, we're just keeping track of which variables were involved in failures,

and the variables that are continually involved in lots of

failures get bigger and bigger fail count.

And we just choose the variable with the minimum

of domain size divided by fail count.
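MiniZinc exposes this heuristic as the variable-choice annotation `dom_w_deg`, so a failure-based version of the earlier search might read (again assuming the flat cell array is called `a1d`):

```minizinc
% domain size divided by weighted degree, random value selection
solve :: int_search(a1d, dom_w_deg, indomain_random)
      satisfy;
```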

So, why does this dom_w_degree approach work?

So, the idea here, it's going to concentrate on variables that are causing failure.

And so, we can see this on a small artificial example.

So here, we have a MiniZinc model which has fifteen 0-1

variables b, that are easy to solve,

and basically just have to add up to more than one.

And there are four integer variables x here,

which have large domains but no solution:

we're saying the x variables are all different,

but also forcing the first two to be the same.
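The exact model on the slide isn't fully recoverable here, but a small sketch that reproduces the same behaviour might look like this (the unsatisfiable x part below uses a pigeonhole constraint as an assumed stand-in, since the decomposed pairwise form only fails under search):

```minizinc
array[1..15] of var 0..1: b;     % easy 0-1 variables
array[1..4] of var 1..3: x;      % four variables, three values: no solution

constraint sum(b) > 1;           % the easy part: b's add up to more than one

% pairwise decomposition of alldifferent: unsatisfiable,
% but the solver only discovers this by search
constraint forall(i, j in 1..4 where i < j)(x[i] != x[j]);

solve :: int_search(b ++ x, first_fail, indomain_min) satisfy;
```

With first_fail, the b variables (domain size two) are always tried before the x variables (domain size three), so the doomed x subtree is re-explored under every assignment to the b's.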

So, what happens if we do a first fail search here?

So, we do a first fail search,

of course the Booleans, the 0-1

integers, are always chosen first because they have a tiny domain of size two,

compared to the other integers, which have a domain of size 100 at the beginning.

So, they are picked first,

and of course they're easy to solve, many ways of solving them.

And then, when we're finished with all of those,

then we pick the x variables and

then the x variables fail because we made sure there's no solution to the x variables.

Then, we go back up out of this x search,

and then we try a different b value and then,

we redo the x search again.

Nothing is changed, the search tree for the x variables will be the same every time.

And you can see, it takes a large number of

choices to show that there's no solution in this tree.

What happens if we use dom_w_degree?

So, all we've done is change the variable selection strategy.

So, down the first branch, we probably still pick

the Booleans, because nothing has been involved in failure yet;

they still look good because they have small domains.

So, we still down the first branch pick the Boolean variables and then,

eventually we get to the x variables.

Now, we search in these x variables,

and this is a search tree.

It takes a large amount of search to prove that they fail,

and so all the x variables get a large failure count.

Now, when we backtrack up here,

we try the other value for b first,

and we'll do the same search tree again; but when we backtrack up here,

the next variable choice we make will not be a b, because

all the x variables have large failure counts, and the b's will have no failures recorded on them.

And so basically, in this tree,

we'll just always go into the x search,

concentrate on the x variables,

still takes us a lot of search and find the failure,

and will eventually pop up the tree.

And you can see that this takes only 3,000 choices to prove failure of this tree,

because we're not, as in the first case,

doing this b search, which has got nothing to do with

the problem, and then redoing the failed x search every time.

So, dom_w_deg and restarts work even better,

because of course we can collect those failure counts from

previous executions and use those after we restart, to do a better search.

It's basically moving the decisions about the variables which are hard to solve,

up to the top of the tree,

which is going to make our tree smaller.

So, if we look at our QCP instances again,

30 by 30 grids and now,

we're using dom_w_degree plus restarts.

Let's have a look at some statistics.

So, of the 40, no restarts is best on four of them.

Restarts with the random search strategy, first fail, is best on 12.

And restarts with dom_w_deg and random value selection is best on 24 out of the 40.

But the more important issue is this,

the timeout behaviour on 15 minutes.

So, if we do no restarts, we timeout on 15 out of the 40.

When we do restarts with

first fail plus random value selection,

we timeout on five.

But if we do dom_w_deg with restarts, we timeout on none.

So, overall this combination of dom_w_deg and

restarts is very robust, because basically,

it's using those earlier searches to find which variables are hard to solve and then,

it's concentrating its search on those hard to solve variables.

So, autonomous search is what happens when you specify

no search strategy in your MiniZinc model;

then the solver has to choose what to do.

And Gecode, which is our default solver,

uses this dom_w_degree as its default search strategy.

So, there are other autonomous, automatic search strategies like impact,

where we record, every time we assign a value,

its impact: how much it reduces the domains. And we tend to choose variables with

maximal impact and the values with minimal impact, to

try and find our way to a solution with a small search tree.

And there are more complicated ones like activity,

which is looking at each possible sort of atomic constraint about a variable,

and recording when it's involved in failure.

So, it's a much more complicated tracking of failure,

basically picking the literal, one of these atomic constraints,

which is involved in failure the most.

So, it's similar to dom_w_deg but a finer-grained version.

And that's basically a very common search strategy used in SAT solvers.

So, in summary, MiniZinc only allows

basic search strategies to be suggested to the solver.

But being able to use more advanced search than that

is critical for solving real-world problems.

So basically,

controlling search is a critical thing for helping our CP solver be more efficient.

And we can use restarts to overcome this heavy tailed behaviour,

which is this critical problem that CP has, and in fact,

failure and conflict-driven search is also a very powerful tool not only

for overcoming heavy tailed behaviour but also just for making a better autonomous search.

And so, depth first search is flawed,

because if I make a wrong decision at the top,

then it can take me an exponential amount of work to realise it was wrong.

And restarts plus randomisation overcome this because basically,

we go up and we try a different thing at the top

even though it may not be better, at least it's different.

So, we get out of this heavy-tailed case,

where it could take a lot of effort to find that

that decision was wrong. And we can use failure-based search to help us do that,

because basically, that's going to concentrate on the hard-to-solve variables

first; and once we get past the

hardest-to-solve variables, the rest of the problem is easy,

and that will often allow us to solve problems with much less search.