Keeping that optimality gap in mind, there is a general way to test the so-called average-case performance of a heuristic algorithm. This is somewhat different from testing one solution: now we want to test an algorithm. We need to generate many instances, and we typically generate them randomly. Once we have generated a lot of instances, a lot of testing cases, we feed each of them into the algorithm we propose, and also into an exact algorithm, for example branch and bound, to find the optimal objective value. For instance i, this gives us z*_i and z^ALG_i. That is what lets us calculate all those percentage errors. More precisely, suppose we are talking about a maximization problem. Then for each instance we calculate the percentage error (z*_i − z^ALG_i) / z*_i. We sum these percentage errors over all n instances and divide by n; we take the average percentage error. If for the first instance it is 11 percent, for the second 22 percent, for the next one 3 percent, and so on, we compute the average of these percentage errors. If we have two algorithms and one has a lower average percentage error, then we say it seems to perform better: it has better average-case performance. However, there is one thing that is very interesting. If you are able to find an exact optimal solution for every instance, what is the point of using the heuristic? Pretty much the only way you can find an optimal solution is for your problem to be small-scale. For example, if it is a knapsack problem, maybe it only has 20-30 variables instead of thousands. But we hope to test the performance of a heuristic in realistic situations, so we may need to test our heuristic on large-scale problems.
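The testing procedure described above can be sketched in a few lines of Python. This is a minimal illustration, assuming a knapsack setting: the brute-force solver stands in for the exact algorithm (branch and bound in the lecture), and a value-to-weight greedy rule stands in for "our" heuristic; both solvers and the instance-generation parameters are illustrative choices, not anything fixed by the lecture.

```python
import itertools
import random

def exact_knapsack(values, weights, capacity):
    # Brute force over all 2^n subsets -- only viable for small instances,
    # which is exactly the limitation discussed in the text.
    best = 0
    for x in itertools.product([0, 1], repeat=len(values)):
        if sum(w * xi for w, xi in zip(weights, x)) <= capacity:
            best = max(best, sum(v * xi for v, xi in zip(values, x)))
    return best

def greedy_knapsack(values, weights, capacity):
    # A simple value/weight-ratio heuristic, standing in for the proposed algorithm.
    order = sorted(range(len(values)),
                   key=lambda i: values[i] / weights[i], reverse=True)
    total_v = total_w = 0
    for i in order:
        if total_w + weights[i] <= capacity:
            total_w += weights[i]
            total_v += values[i]
    return total_v

random.seed(0)
errors = []
for _ in range(50):                      # n randomly generated instances
    n_items = 10                         # small enough for the exact solver
    values = [random.randint(1, 100) for _ in range(n_items)]
    weights = [random.randint(1, 50) for _ in range(n_items)]
    capacity = sum(weights) // 2
    z_star = exact_knapsack(values, weights, capacity)   # z*_i
    z_alg = greedy_knapsack(values, weights, capacity)   # z^ALG_i
    errors.append((z_star - z_alg) / z_star)             # percentage error, instance i

avg_error = sum(errors) / len(errors)    # average percentage error
print(f"average percentage error: {avg_error:.2%}")
```

A lower `avg_error` for one algorithm than another is then the evidence of better average-case performance that the lecture describes.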
But if that is the case, when we generate a lot of large-scale random instances, there is no way to solve for an optimal solution; it is too time-consuming. So people do it another way. For a maximization problem, people try to find an upper bound on the objective value of an optimal solution. We cannot find the optimal value itself, but we may find an upper bound through, for example, linear relaxation. You probably still remember that at the very beginning of this lecture we said that whenever you have an integer program for a maximization problem, its linear relaxation gives you an upper bound. Let's call this value z^LR. z^LR is no less than z*; it is an upper bound on the optimal objective value. This z^LR may be much easier to obtain: solving an integer program is hard, but solving a linear program is much easier. You may obtain z^LR and then use it as your evaluation criterion, your benchmark. You use z^LR to replace z* in your formula, and use (z^LR − z^ALG) / z^LR to estimate the percentage error, the difference between the upper bound on the optimal objective value and your proposed solution. In this case, the average percentage error calculated this way is actually an upper bound on the one calculated with an optimal solution. Here I need to spend a few words to make this clear; please give me a few seconds. One thing we want to calculate is z* − z^ALG. This is our ideal optimality gap, but it is too difficult to calculate. That is why we use, for example, z^LR, and do the subtraction z^LR − z^ALG instead. Once we do that, we would agree that because z^LR itself is no less than z*, the optimality gap calculated with z^LR is greater than or equal to the optimality gap calculated with the true optimal objective value. This is going to be true.
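For the knapsack problem specifically, the linear relaxation has a well-known closed-form solution (the Dantzig bound): take items in decreasing value-to-weight order and fill the remaining capacity with a fraction of the first item that does not fit. A small sketch, with the instance numbers below chosen purely for illustration:

```python
def knapsack_lp_bound(values, weights, capacity):
    # Dantzig upper bound: the LP relaxation of the 0-1 knapsack is solved
    # by a greedy fill in decreasing value/weight order, with one fractional
    # item at the break point. The result z^LR satisfies z^LR >= z*.
    order = sorted(range(len(values)),
                   key=lambda i: values[i] / weights[i], reverse=True)
    bound = 0.0
    remaining = capacity
    for i in order:
        if weights[i] <= remaining:
            bound += values[i]
            remaining -= weights[i]
        else:
            bound += values[i] * remaining / weights[i]  # fractional item
            break
    return bound

# Hypothetical instance: the LP takes items 1 and 2 fully and 20/30 of item 3.
z_lr = knapsack_lp_bound([60, 100, 120], [10, 20, 30], 50)
print(z_lr)  # 240.0
```

Because the LP bound costs only a sort, it can be computed even on large-scale instances where z* is out of reach, which is exactly why it serves as the benchmark here.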
Once we have that, we can further say that using z^LR as the denominator also gives something greater than or equal to the error calculated with the optimal objective value. When you see this, it is somewhat interesting, because here we are dividing by something that is larger, so you need a short derivation. Don't forget that the true percentage error is 1 − z^ALG / z*, while the estimate is 1 − z^ALG / z^LR. z^LR is larger, so the fraction z^ALG / z^LR is smaller, and once you take one minus that smaller fraction, the whole expression is indeed larger. Hopefully you are convinced that this is the case. This is how we explain the fact that the average percentage error calculated with your upper bound provides, again, an upper bound on the actual average percentage error. That gives us a hint about how good our algorithm is. In the earlier example, the linear relaxation gives us a solution that should not be too unfamiliar to you: a fractional solution with a variable at 0.6, and an objective value of 9.8. Even if you do not have an optimal solution, you are still able to calculate the absolute optimality gap and the relative, or percentage, optimality gap. This means estimating the optimality gap is not so difficult: the heuristic runs in a very short time, and the linear relaxation can also be solved in a very short time. Computing this estimated optimality gap is much easier than computing the exact optimality gap.
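The chain of inequalities sketched above can be written out compactly. For a maximization problem we have z^ALG ≤ z* ≤ z^LR, and therefore:

```latex
\[
\frac{z^{\mathrm{ALG}}}{z^{\mathrm{LR}}} \le \frac{z^{\mathrm{ALG}}}{z^{*}}
\quad\Longrightarrow\quad
1 - \frac{z^{\mathrm{ALG}}}{z^{\mathrm{LR}}} \ge 1 - \frac{z^{\mathrm{ALG}}}{z^{*}}
\quad\Longrightarrow\quad
\frac{z^{\mathrm{LR}} - z^{\mathrm{ALG}}}{z^{\mathrm{LR}}} \ge \frac{z^{*} - z^{\mathrm{ALG}}}{z^{*}}.
\]
```

In words: the percentage error estimated with the upper bound z^LR can only over-report the true percentage error, so a heuristic that looks good against z^LR is at least that good against z*.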