After defeating Dong Zhuo, Cao Cao invited Liu Bei to enjoy a feast with him. Cao Cao thought Liu Bei was a brilliant person but also a potential enemy, and wanted to suss him out during the feast. Cao Cao asked Liu Bei to consider who he thought were the capable leaders of the time. Liu Bei listed several powerful warlords, but Cao Cao didn't share that opinion, and said that he thought Liu Bei himself was actually a capable leader. Surprised at this remark, Liu Bei dropped his chopsticks, concerned that Cao Cao considered him a threat that needed to be removed. To hide his panic and distract Cao Cao, Liu Bei explained that he dropped his chopsticks because he was dissatisfied with how the four dishes and four types of wine provided for the feast were prepared. Each combination of food and wine produces a different satisfaction rating. Cao Cao asked Liu Bei how he would pair them to achieve the greatest satisfaction. In secret, Liu Bei pulled out his magical tablet to work out the best pairings. So Liu Bei and Cao Cao are sitting down to dinner, and Liu Bei comes up with the idea of pairing off the dishes with wine in order to move the conversation away from an awkward moment. So, let's think about the joy of pairing foods with wine. We know that various foods go better with different wines. Here we have four foods (chili fish head, mapo tofu, snake soup and gong bao frog) and four different wines. The number of thumbs up is how well they pair off, and you can see that different pairings pair off better or worse. So this is another pure assignment problem, right? We're again determining a function from a domain to a codomain; here the domain is the foods and the codomain is the wines. We're going to pair up each dish with a different drink, and our objective is to maximise the joyfulness of the culinary, and in this case obviously also political, occasion. Here's our model; it's a straightforward partitioning problem, so we've seen these before.
We've got our food enumerated type, our wine enumerated type, and basically the important data is, for each combination of food and wine, how much joy there is in that combination. Then our decisions are basically, for each food, which wine do we drink with it. We have one constraint, which is that we have to drink a different wine with each food. And then the objective is just summing up the joy that we get out of that: we're summing, for each food, the joy of that food with the drink that we drink with it. There's our model; it's fairly straightforward, and we can run it and find that if we make this particular combination, we'll have the chili fish head with the huadiao wine and the gong bao frog with the gaoliang wine, and we get a joy of 21. All very straightforward, but if we look at this problem there's something different going on here. Often in discrete optimisation problems we have multiple viewpoints on the same problem, so we can build two, or even more, complementary and distinct models: two different ways of looking at the same problem. That also means we can combine them, and there can be advantages in combining two different models of the same problem. Let's think about a different model of this problem. This particular assignment problem was special because the domain, the foods, and the codomain, the wines, had the same size. This was a bijection problem. We basically had to match each food with one wine; that's one way to think about our matching, but it was a complete matching, so we can think about it the opposite way. Since we had to match each member of the domain with one member of the codomain, we also had to match each member of the codomain with one member of the domain. This gives us a different way of thinking about the model: we can equally well think of it as mapping from the codomain to the domain.
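The model described above might be sketched in MiniZinc roughly as follows. This is a sketch, not the lecture's exact code: the enum and variable names are my guesses at the lecture's identifiers, and the joy values are invented for illustration, since the transcript doesn't list the actual table.

```minizinc
% Hypothetical data: the transcript does not give the real joy table.
enum FOOD = { chili_fish_head, mapo_tofu, snake_soup, gong_bao_frog };
enum WINE = { rice_wine, huadiao, gaoliang, grape };

array[FOOD,WINE] of int: joy =
  [| 3, 7, 2, 4      % chili_fish_head
   | 5, 2, 3, 6      % mapo_tofu
   | 4, 3, 1, 2      % snake_soup
   | 2, 4, 6, 3 |];  % gong_bao_frog

include "alldifferent.mzn";

% Decision: which wine do we drink with each food?
array[FOOD] of var WINE: drink;

% A different wine with each food.
constraint alldifferent(drink);

% Maximise the total joy of the chosen pairings.
solve maximize sum(f in FOOD)(joy[f, drink[f]]);

output ["\(f) with \(drink[f])\n" | f in FOOD];
```

With the lecture's real joy data, this is the model that pairs the chili fish head with huadiao and the gong bao frog with gaoliang for a joy of 21; with the invented values above the optimal pairing will of course differ.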
A bijective function has these two viewpoints. We have our usual function view, an array mapping the domain to the codomain, and we can have this inverse function view, where we have an array mapping from the codomain to the domain. So just as we can pair drinks with foods, we can also pair foods with drinks. In our food and wine problem, the inverse function is just this: for each wine, we're going to have a var food, the decision about which food to eat with that wine, and it's the same problem. We can build the food and wine problem the other way around. All that's changed is this bit here: we're saying, for each wine, what food do we eat with that wine? Obviously we have an all different constraint, because we have to eat a different food with each wine, and we have the objective, just written in a different way: we're summing, for each wine, the joy of the pair, the food we eat with that wine together with that wine. It's the same problem, and if we run it, what are we going to get? Surprisingly enough, the same answer, okay? Because we've just solved the same problem. But we've got a different viewpoint: we've flipped from one viewpoint, matching foods with wines, to another, matching wines with foods. Which model is likely to be better? Basically there are two different ways of writing it down: this constraint with this objective, and this constraint with this objective. Can we see any difference? Well, really they look much the same here, don't they? The difference will come when we see differences in the problem. Some constraints on the problem may be easy to express using the function, and some may be easier to express using its inverse. One possibility is to just use both models, so we have both available; and when we do that, we have to make sure that they both agree.
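The inverse viewpoint can be sketched the same way, assuming the same FOOD and WINE enums and the same (hypothetical) joy table as in the sketch above; again this is a reconstruction of what the lecture describes, not its exact code.

```minizinc
include "alldifferent.mzn";

% Inverse decision: which food do we eat with each wine?
array[WINE] of var FOOD: eat;

% A different food with each wine.
constraint alldifferent(eat);

% Same objective, written from the wine side: for each wine,
% the joy of the food eaten with it, paired with that wine.
solve maximize sum(w in WINE)(joy[eat[w], w]);
```

Running this yields the same optimal joy as the food-to-wine model, since it is the same problem viewed from the codomain.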
We cannot just solve two problems completely independently and get two totally different solutions. What we have to do is make sure that the two functions we build agree, and the way we do this is by adding channeling constraints. This says: if with this wine we eat this food, then with that food we had better drink this wine. It's very straightforward, basically saying that the decision we make here, matching a food with this wine, had better equal the decision we make when matching a wine with that food. We're just making sure that the two sets of decisions, the two viewpoints on the problem, agree with each other; this channeling constraint forces that to happen. In fact, because this is such a common thing to do, there's a global constraint that captures this combinatorial substructure. That global constraint is called inverse, because exactly what we're doing here is writing down a function and its inverse: we want to make sure that the relationship between the eat function and the drink function is that they are inverses. And indeed we could equally write it down the other way, that the drink function and the eat function are inverses. So we can replace that constraint by inverse, a global constraint which will do the same thing. And when we do that, we can also remove the all different constraints, because they're made redundant by the inverse. Once we force these two arrays to be inverses, that forces them not to map two things to the same value, because then they couldn't be inverses: if they are inverses, they must both be bijections, and a bijection (or at least an injection) is exactly what all different encodes. Both are forced to be bijections by the inverse constraint.
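Assuming both decision arrays from the two viewpoints as described in the lecture (a drink array indexed by FOOD and an eat array indexed by WINE), the channeling might be sketched either by hand or with the inverse global constraint:

```minizinc
% Channeling by hand: for each wine, the food we eat with it
% must in turn have that wine as its drink.
constraint forall(w in WINE)(drink[eat[w]] = w);

% Or equivalently, as a single global constraint. This also makes
% both alldifferent constraints redundant, since two arrays that
% are inverses of each other are necessarily bijections.
include "inverse.mzn";
constraint inverse(drink, eat);
```

Either direction of the hand-written channeling (or stating `inverse(eat, drink)` instead) expresses the same relationship.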
But then you might ask: why would we want to combine models? Here's our combined model. We've got both the drink decisions and the eat decisions, we've combined them with the inverse constraint, and we can pick whichever of the objective functions we want. Here we've chosen to write the objective function in terms of food, but equally we could have written it in terms of wine; it doesn't matter which way we write it. There's not a great deal of point in writing the combined model for this simple problem. But combining models becomes worthwhile when we have more constraints: the point of the combined model is that some constraints are going to be very easy to express with one viewpoint on the problem, and other constraints are going to be easy to express with the other viewpoint. With the pure assignment problem, the food and wine problem, there was no need for this, so let's add some side constraints to make it worthwhile. An example side constraint would be this: let's say that every food has a taste, a richness of taste, and we might want to say that the dish paired with gaoliang should be richer in taste than the one paired with grape, because gaoliang is a very, very strong wine, so we want a rich-tasting food paired with it. We can write that in the eat model, where asking what food is paired with gaoliang is very easy: we just say that the taste of the food paired with gaoliang is bigger than the taste of the food paired with grape. That's very, very straightforward. It's difficult, if not almost impossible, to write that in the drink model, because we just don't have the notion of "the dish paired with gaoliang" in that model. So there's an example where it's very good to have these eat variables to write down this constraint; it's very hard to write it down with the drink variables.
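With an eat array as described (mapping each wine to the food served with it), this taste side constraint is a one-liner; the taste values below are hypothetical, since the transcript doesn't give them.

```minizinc
% Richness of taste per dish, indexed by FOOD (hypothetical values).
array[FOOD] of int: taste = [9, 6, 4, 8];

% The dish paired with gaoliang must be richer in taste
% than the dish paired with grape.
constraint taste[eat[gaoliang]] > taste[eat[grape]];
```

The same condition stated over drink variables would need something like an existential search for the food whose drink is gaoliang, which is exactly the awkwardness the lecture is pointing at.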
Here's another example. Let's add the alcohol content as another piece of data. We might want the drink paired with the gong bao frog to be stronger in alcohol than the one paired with the snake soup, because the gong bao frog has a very strong taste, it's very spicy, so you might want a lot of alcohol paired with it. And we can simply write that down: the alcohol of the drink for the snake soup is less than the alcohol of the drink for the gong bao frog. That's very straightforward to write down with the drink decision variables, but very hard, if not more or less impossible, with the eat decision variables. You can see that if we had both of these side constraints, we'd need both of the models together to be able to write them both down. There are other kinds of side constraints, like this one: let's say the mapo tofu should be paired with rice wine if and only if the snake soup is paired with the huadiao wine. In that case, it doesn't really matter which model we use; we can model this in lots of different ways. We can say that we eat mapo tofu with the rice wine if and only if we eat snake soup with the huadiao, using the eat model; or we could write it in the drink model; or we can in fact write it in a mixed model. This side constraint could be modelled in either model, or in the mixed model, it doesn't really matter. For some side constraints it won't matter which viewpoint on the model we're using, but you can see that for some of the earlier constraints we really wanted a particular viewpoint to be able to express the constraint succinctly, and indeed succinct constraints typically solve better. Multiple viewpoints on a problem lead to multiple models, and those different viewpoints can express different constraints more naturally and more succinctly; the key point is that if you can express the constraints succinctly, that will usually make it easier for solvers to work on them.
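Both of these remaining side constraints are easy to sketch as well, the first with the drink variables and the second in whichever viewpoint we like; the alcohol values are hypothetical, and the enum names are the same assumed ones as before.

```minizinc
% Alcohol content per wine, indexed by WINE (hypothetical values).
array[WINE] of int: alcohol = [18, 16, 55, 13];

% The wine with the gong bao frog must be stronger
% than the wine with the snake soup.
constraint alcohol[drink[snake_soup]] < alcohol[drink[gong_bao_frog]];

% Mapo tofu gets rice wine if and only if snake soup gets huadiao.
% Either viewpoint (or a mix of both) expresses this equally well:
constraint (drink[mapo_tofu] = rice_wine) <-> (drink[snake_soup] = huadiao);
% equivalently: (eat[rice_wine] = mapo_tofu) <-> (eat[huadiao] = snake_soup)
```

The alcohol constraint is the mirror image of the taste constraint: easy with drink, awkward with eat, which is why having both viewpoints in one model pays off.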
Often having two viewpoints in a single model can be very worthwhile. If you do that, then you need a channeling constraint to make those viewpoints agree; and if it's a common kind of channeling, there will be a global constraint to do it for you, like inverse in this example. And combining those models can sometimes improve solving efficiency compared to using either single model on its own.