So, welcome to module two of course three. Jimmy, what's happening in the story? Well, we're still with Shennong. Zhuge Liang helped Shennong to nurture a JianMu so that, you know, Shennong would be able to reap the best YaoCao in the shortest time. After that, Zhuge Liang met up with Hou Yi. For Hou Yi, the biggest problem was that there were too many suns in the sky. Ten of them, actually. So Hou Yi had to shoot down a few of them, and Zhuge Liang also helped Hou Yi solve one of his village's tough problems. Because of the abnormal climate change, many hibernating monsters were woken up, and they were roaming around and attacking villages. So Zhuge Liang and Hou Yi joined hands to arrange armies to protect the villages. Now, in terms of the technical stuff, I think we're still talking about constraint programming solving technology. We are, we are. But whereas last week we looked at satisfaction problems, in this module we're looking at optimisation problems. Right. Now, optimisation by CP is incredibly naive. Basically, we look for a solution and, once we find it, add a constraint saying, "Find me a better solution." Wow. So we are essentially solving optimisation using satisfaction. Exactly. And there are two ways of doing that. One is retry optimisation: when we find a solution, we go back to the start and restart the search with this new constraint. The other is branch and bound, where we do one search of the whole search tree but keep track of the best solution we've found so far, so that we don't end up doing too much work. Either of them can be better in different circumstances. I remember that in module one we were talking about search strategy. It turns out that search strategy is even more important in solving optimisation problems. Absolutely.
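The two schemes just described can be sketched in a few lines of Python. This is a toy illustration on a made-up two-variable problem, not a real CP solver: retry optimisation re-runs a satisfaction search from scratch each time with a tighter objective bound, while branch and bound makes a single pass over the search space and tightens the bound as it goes.

```python
from itertools import product

# Toy problem: x, y in 0..9 with x + y == 10; minimise 3*x + y.
DOMAIN = range(10)

def satisfies(x, y, bound):
    """Satisfaction check: the original constraint plus the bound
    'find me a better solution' (objective strictly below bound)."""
    return x + y == 10 and 3 * x + y < bound

def retry_optimisation():
    """Restart the satisfaction search from scratch after each solution,
    each time demanding a strictly better objective, until none remains."""
    best, bound = None, float("inf")
    while True:
        sol = next(((x, y) for x, y in product(DOMAIN, DOMAIN)
                    if satisfies(x, y, bound)), None)
        if sol is None:
            return best, bound        # last solution found is optimal
        best, bound = sol, 3 * sol[0] + sol[1]

def branch_and_bound():
    """One traversal of the whole search space, keeping the best
    solution so far; the bound shrinks during the single search."""
    best, bound = None, float("inf")
    for x, y in product(DOMAIN, DOMAIN):
        if satisfies(x, y, bound):
            best, bound = (x, y), 3 * x + y
    return best, bound
```

Both drivers return the same optimum; which does less work depends on how early good solutions appear in the search order, which is exactly why search strategy matters so much for optimisation.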
Because basically, when we are doing an optimisation problem, we want the first solution we find to be good. The sooner we find good solutions, the more we can reduce the amount of search we need to do in the rest of the search. Basically, do less. So if we find a good solution first, we actually end up doing less work, and that's why it's even more important to have good search strategies when we're doing optimisation problems. We also looked at the heavy-tailed behaviour of search when we are trying to solve difficult problems. Heavy-tailed behaviour, remember, is where sometimes I can find a solution very, very quickly, I can solve the problem very, very easily, but then a very similar, almost identical problem with just slightly different data can take a really, really long time. How can we deal with this heavy-tailed phenomenon? We saw that we can use restarts here. If we find that we're taking a long time, we can guess that we're in one of these bad cases and just restart the search, and hope that we then fall into an easy case. And we see, actually, that if we use restarts, we can do a lot better, because we avoid getting stuck in these bad cases where we've made bad decisions early on in the search and can't get out of them. Okay. So I suppose that when we're doing restarts, in the next run of the search, we had better make use of a different search strategy, or maybe even a randomised search strategy. Absolutely. If we're doing a restart, we definitely don't want to be making exactly the same decisions we made last time. So we want to use some kind of randomisation, or some kind of search which depends on data that we've collected, so that we try a different thing each time. And there we saw a new way of doing that. Right, the failure-driven kind of search strategy. Exactly.
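As a concrete illustration of one standard restart policy, here is a small Python sketch of the Luby sequence (a repeating, geometrically growing schedule of restart budgets) together with a hypothetical restart driver that gives each run a fresh failure budget and a fresh random seed. The `attempt` callback and its parameters are invented for illustration; they are not any particular solver's API.

```python
import random

def luby(i):
    """i-th term (1-based) of the Luby restart sequence:
    1, 1, 2, 1, 1, 2, 4, 1, 1, 2, 1, 1, 2, 4, 8, ..."""
    k = 1
    while (1 << k) - 1 < i:
        k += 1
    if (1 << k) - 1 == i:          # i ends a block: budget doubles
        return 1 << (k - 1)
    return luby(i - (1 << (k - 1)) + 1)   # otherwise repeat the prefix

def solve_with_restarts(attempt, max_runs=64, scale=100):
    """Run the search repeatedly. Run i gets a failure budget of
    scale * luby(i) and its own random seed, so each restart makes
    different decisions instead of repeating the same bad ones."""
    for run in range(1, max_runs + 1):
        sol = attempt(limit=scale * luby(run), rng=random.Random(run))
        if sol is not None:
            return sol
    return None                    # gave up after max_runs restarts
```

The key point is in the driver: the budget grows slowly (so easy cases finish fast) and the randomised strategy changes between runs, which is what lets restarts escape heavy-tailed bad cases.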
And these failure-driven approaches, where we basically concentrate on the variables which are causing us the most difficulty, causing the most failures, turned out to be very, very powerful, because they focus the search on the hard part of the problem. And then once that's solved, maybe we can solve the rest without difficulty. That also tends to make the search space very small. So these failure-driven searches are a very important addition to our toolbag of different searches. So one of the examples is dom/wdeg, domain over weighted degree? Absolutely, and that was the basic one that we used throughout this module. But there are other ones, and better ones, too. And it combines and works very well with restarts? Absolutely. Then finally, we looked inside some of our propagators. Of course, one of the great advantages of constraint programming solvers is that they have global propagators that encode some combinatorial substructure for you. You don't have to think about them, and they use complicated algorithms inside to do their inference. Again, I suppose the trade-off is between the strength of the propagation and the speed at which we can do the propagation. Exactly. And so we looked at alldifferent, where we could do the most propagation possible: it's a domain propagator, and it uses a complicated matching algorithm. Right? But then we also looked at cumulative, which is for scheduling kinds of problems. There, it's a very, very powerful propagator, but it actually doesn't do the most propagation possible; in fact, we basically never use the strongest version. So when we get to big, hard problems, we're just trying to get a good trade-off: how much propagation for how much speed. So actually, in dealing with global constraints and global propagators, much of the time we do not enforce the strongest propagation possible? Yes, rarely in fact. Right. Alldifferent is one of the cases where we do, but that's not one of the most common cases. How about the workshop? Yes, so what's happening in the workshop, Jimmy?
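The dom/wdeg idea can be sketched as a variable-selection rule: every constraint carries a weight, which the solver bumps each time that constraint causes a failure, and we branch on the unfixed variable that minimises domain size divided by the total weight of the constraints on it. A minimal Python sketch, with made-up data structures standing in for a solver's internals:

```python
def dom_w_deg(domains, cons_of_var, weight):
    """Pick the unfixed variable minimising |domain| / sum of failure
    weights of its constraints, concentrating search on the variables
    involved in the most failures.

    domains:     var -> set of remaining values
    cons_of_var: var -> list of constraint ids on that variable
    weight:      constraint id -> failure count (bumped on each failure)
    """
    def score(v):
        w = sum(weight[c] for c in cons_of_var[v]) or 1  # avoid div by zero
        return len(domains[v]) / w
    unfixed = [v for v in domains if len(domains[v]) > 1]
    return min(unfixed, key=score)
```

Because the weights accumulate across the search (and can be carried across restarts), the heuristic adapts to the data of the instance at hand, which is why it combines so well with restarting.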
Well, you know, Shennong prepared the herbs, and now he has to turn the herbs into potions. So he has to hire a whole bunch of alchemists, and his job is really to decide how many alchemists have to work each day. And, of course, he would like to minimise the number of alchemists that he will have to hire. Right. And again, this is a different kind of workshop than we've seen in previous modules. We're going to give you the model this time, and actually we're going to give you a model with three copies of every constraint, because there are three different viewpoints on the decisions we can make. And you're going to have to do two things simultaneously. You're going to have to pick which of the representations you should use for each constraint, and pick the best search strategy to put those constraints to work and find solutions as quickly as possible. That is the multiple modelling that we learned in course one. Exactly. Multiple modelling in action, and multiple modelling interacting with search. And so you will be able to check which is the right way to do the search and which is the right way to represent the decisions. So I suppose in this workshop here, our learners will be able to experiment with some of these advanced search strategies, including dom/wdeg and even restarts. Absolutely. Different restart policies? Exactly. We expect you to play around with all the new kinds of search that we taught you in this module.
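To give a flavour of what "three viewpoints" on the same decisions might look like, here is a hypothetical Python sketch (the course itself works in MiniZinc, and these names and data are invented for illustration, not taken from the actual workshop model). The same schedule can be viewed as a 0/1 works-matrix, as a count of working alchemists per day, or as a set of working days per alchemist; channelling just keeps the viewpoints consistent, so every constraint can be stated on whichever viewpoint suits it best.

```python
def channel(works):
    """Derive two alternative viewpoints from a 0/1 matrix viewpoint,
    where works[a][d] == 1 iff alchemist a works on day d.

    Returns:
      per_day: viewpoint 2, number of alchemists working each day
      days_of: viewpoint 3, the set of days each alchemist works
    """
    per_day = [sum(col) for col in zip(*works)]
    days_of = [{d for d, on in enumerate(row) if on} for row in works]
    return per_day, days_of
```

In a constraint model the channelling runs both ways (fixing one viewpoint propagates to the others); this one-directional function only illustrates the correspondence between the three representations.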