Okay? So that was one example, but that one example actually contains pretty much all the techniques we want to use in this particular section. The technique we just applied can be generalized. So let's take a look at a maximization function standing at the smaller side of an inequality. We have dealt with this case previously, right? Suppose a variable y is greater than or equal to the maximum of x1 and x2. It is always true that we may split this into two linear constraints: y must be greater than or equal to x1, and y must be greater than or equal to x2. The two things must hold at the same time, and if that's the case, then you are done. All right? So that's pretty much one thing. These y, x1, and x2 may actually be variables, parameters, or functions of them, so the expressions can be quite complicated. For example, it may be that some quantity is greater than or equal to the maximum of a first term and a second term; then all you need to do is write that quantity greater than or equal to the first term, and that quantity greater than or equal to the second term. Pretty much you just copy and paste, and it is always guaranteed that you do not change your formulation. Okay, and there may actually be more than two terms; you can have many terms inside your maximum function. Then all you need to do is replace the maximum by all these terms one by one: if you have n terms, you create n constraints, and then you're done. Also, if you have a minimum function at the larger side, it is pretty much the same, okay? If a minimum function stands at the larger side, all you need to do is split its first, second, and third terms into three linear constraints that replace the original nonlinear one, and then you're done; the formulation is still the same. The technique is good, but unfortunately it does not apply to the following.
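As a quick sanity check, the split above never changes the feasible region. Here is a minimal numerical sketch with made-up values (the terms xk would in general be variables, parameters, or functions of them):

```python
# Check (with hypothetical numbers) that y >= max(x1, ..., xn)
# holds exactly when every split constraint y >= xk holds.
def satisfies_max(y, xs):
    return y >= max(xs)

def satisfies_split(y, xs):
    return all(y >= xk for xk in xs)

xs = [3, 7, 5]  # made-up terms of the maximum function
for y in range(0, 11):
    assert satisfies_max(y, xs) == satisfies_split(y, xs)
```

With n terms in the maximum, the split produces exactly n linear constraints, one per term, just as described above.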
If your maximum function is at the larger side, something like y less than or equal to the maximum of x1 and x2, you cannot say it is equivalent to y less than or equal to x1 and y less than or equal to x2 at the same time. Why is that? Because in this case, y should be less than or equal to x1 OR y should be less than or equal to x2; in some sense it is not an "and", okay? If you say it is an "and", that is wrong. Why? Because if x1 is 5 and x2 is 50, your y can be, for example, 25, right? y only needs to be less than or equal to one of them; y does not need to be less than or equal to both of them. So in that case, if you really have this situation where your maximum function stands at the larger side, you actually need an "or" instead of an "and", so the linearization becomes more complicated. But still, sometimes you know how to deal with this, right? You have a set of constraints and you only need to satisfy one of them. So in some cases, you may introduce integer variables, actually binary variables, and then you will still be able to express that thing. But one thing you know is that once you introduce a binary variable into a program, it becomes an integer program, and the complexity becomes much higher. So if you have the situation where your minimum function is at the larger side or your maximum function is at the smaller side, you always linearize it, because that makes your program much easier; a linear one is always better than a nonlinear one. But if it's the other way around, in many cases we just cannot do that, because introducing binary variables may not be a good idea when it comes to solving the program. Okay? Finally, if they appear in an equality, you also cannot play the trick easily. Sometimes we may want to linearize the objective function. For example, when we want to minimize a maximum function in this way, then as we mentioned in previous examples, all we need to do is replace that term by w.
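That "only one of them" requirement from the larger-side case can be sketched with the standard big-M trick and a binary variable, as mentioned above. The numbers and the value of M below are made up; M must exceed any attainable gap between y and the terms:

```python
# Big-M sketch for "y <= x1 OR y <= x2" with a binary z:
#   z = 0 enforces y <= x1 (the x2 constraint is relaxed by M)
#   z = 1 enforces y <= x2 (the x1 constraint is relaxed by M)
def or_constraints(y, x1, x2, z, M=1000):
    return (y <= x1 + M * z) and (y <= x2 + M * (1 - z))

def feasible_for_some_z(y, x1, x2, M=1000):
    # The disjunction holds for some binary z
    # exactly when y <= max(x1, x2).
    return any(or_constraints(y, x1, x2, z, M) for z in (0, 1))

# The example from the text: x1 = 5, x2 = 50, y = 25 is allowed,
# even though y is not below both terms.
assert feasible_for_some_z(25, 5, 50)
assert not feasible_for_some_z(60, 5, 50)
```

The price, as noted above, is that the binary z turns the model into an integer program.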
And then say w is greater than or equal to x1 and w is greater than or equal to x2. Again, all of these may be variables, parameters, or functions of them. There may be other constraints, it doesn't really matter, and our objective function may contain other terms. For example, suppose we want to maximize a minimum function, with some other constraints and some other terms. All we need to do is focus on that particular term, okay? Replace it by w, and say that your w cannot be greater than the first term, the second term, and the last term, okay? Use three constraints to replace the original minimum term. That's something you may do, and you already know why that works from our previous example. But I'm going to say it again. Suppose you do this: you replace that term by w, and you want to maximize the whole thing, so you want to make your w as large as possible. Eventually one of the constraints will be binding. Which one? The one with the smallest right-hand side. And once your w equals the smallest among them, you may say w equals the minimum of them. That's how your formulation is equivalent. All right. So the technique, again, does not apply to maximizing a maximum function or minimizing a minimum function; you just cannot do that with the previous linearization technique. Finally, an absolute value function is just a special type of maximum function, so it can be handled with a similar idea. Minimizing an absolute value function can be linearized, and an absolute value function at the smaller side of an inequality can also be linearized, just like a maximum function. So lastly, I'm going to give you one example to conclude this section. Suppose I want to deal with hospital locations. In your country there are cities, each lying at a location (xi, yi); these are given information. I want to locate a hospital at a location (x, y).
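The max-min linearization just described can be sketched with `scipy.optimize.linprog`. This is a toy instance with made-up numbers: maximize min(x, 4 − x) over 0 ≤ x ≤ 4 by introducing w with w ≤ x and w ≤ 4 − x and then maximizing w:

```python
from scipy.optimize import linprog

# Variables: [x, w]. linprog minimizes, so maximize w via objective -w.
c = [0.0, -1.0]
A_ub = [
    [-1.0, 1.0],   # w - x <= 0, i.e. w <= x
    [1.0, 1.0],    # x + w <= 4, i.e. w <= 4 - x
]
b_ub = [0.0, 4.0]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 4), (None, None)])
x_opt, w_opt = res.x
# At the optimum both constraints bind, so w equals min(x, 4 - x) there,
# which is exactly the equivalence argument made above.
```

The solver pushes w up until the constraint with the smallest right-hand side binds, so the optimal w coincides with the minimum of the two terms.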
So I want to locate it somewhere, and what I want to do is minimize the average Manhattan distance. In this example we are talking about Manhattan distance instead of Euclidean distance. What is that? If I have one point here and another point there, the Euclidean distance takes the straight line between them as the route. But with Manhattan distance, you may only move along vertical and horizontal lines: either in the north-south direction or in the east-west direction. So in this particular case, the Manhattan distance would be the sum of these two green lines. Okay, so what does that mean? If you go back to this example again, this is the first distance, the second one, the third one, the fourth one, the fifth one, the sixth one, and so on. Mathematically, we know how to write it down. Pretty much we are saying that you have several cities, and for each city you know its location. You also know where our hospital is. Then, for the x direction, you find the difference: this is x, this is xi, so for the x direction you take the difference and then apply the absolute value function, because you don't know which one is larger. For y you do the same thing: you have y and yi, you don't know which one is larger, so you take the absolute value. Then for each city you have this distance, and you sum them up. The interesting thing is that this can be linearized, right? All you need to do is replace the x-direction term by ui and the y-direction term by vi. Then your ui should be greater than or equal to that difference and greater than or equal to its negation, and your vi should be greater than or equal to its difference and greater than or equal to its negation. That should be true for every city. And once you have that, you realize this is actually a linear program.
Okay, even though this absolute value function seems weird, this is actually a linear program, which can be solved easily. That's how linearization may help us. If we don't do the linearization here, we have a nonlinear program, and even without any constraints, a nonlinear program is still harder for solvers and commercial software to handle. If we linearize it, then even when constraints come in, as long as it is a linear program, solvers will be able to handle it easily, or at least much more easily. Okay. So linearization is useful, at least in this particular case.
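To make this concrete, here is a sketch of the hospital-location LP with `scipy.optimize.linprog`, using three made-up cities. Each |x − xi| is replaced by ui with two constraints, and each |y − yi| by vi, exactly as described above:

```python
from scipy.optimize import linprog

cities = [(0, 0), (4, 0), (2, 6)]  # hypothetical city coordinates
n = len(cities)

# Variables: [x, y, u_1..u_n, v_1..v_n]; minimize the average of u_i + v_i.
num_vars = 2 + 2 * n
c = [0.0, 0.0] + [1.0 / n] * (2 * n)

A_ub, b_ub = [], []
for i, (xi, yi) in enumerate(cities):
    u = 2 + i      # column of u_i
    v = 2 + n + i  # column of v_i
    # u_i >= x - xi   <=>   x - u_i <= xi
    row = [0.0] * num_vars; row[0] = 1.0; row[u] = -1.0
    A_ub.append(row); b_ub.append(xi)
    # u_i >= xi - x   <=>  -x - u_i <= -xi
    row = [0.0] * num_vars; row[0] = -1.0; row[u] = -1.0
    A_ub.append(row); b_ub.append(-xi)
    # v_i >= y - yi   <=>   y - v_i <= yi
    row = [0.0] * num_vars; row[1] = 1.0; row[v] = -1.0
    A_ub.append(row); b_ub.append(yi)
    # v_i >= yi - y   <=>  -y - v_i <= -yi
    row = [0.0] * num_vars; row[1] = -1.0; row[v] = -1.0
    A_ub.append(row); b_ub.append(-yi)

res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(None, None)] * num_vars)
# For this toy data the optimal hospital is the coordinate-wise median.
hospital = (res.x[0], res.x[1])
```

For minimizing a sum of absolute deviations, the optimal coordinate in each direction is a median of the cities' coordinates, which is one way to sanity-check the LP's answer.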