That was a lot of theorems, so let's go through an example to illustrate the ideas. Consider the following primal-dual pair. The primal is: maximize x_1 subject to 2x_1 - x_2 <= 4, 2x_1 + x_2 <= 8, and x_2 <= 3, with x_1, x_2 >= 0. Its dual is: minimize 4y_1 + 8y_2 + 3y_3 subject to two inequality constraints, 2y_1 + 2y_2 >= 1 and -y_1 + y_2 + y_3 >= 0, with y_1, y_2, y_3 >= 0. With this pair, we can illustrate all the conditions we just mentioned.

First, put the primal into standard form by adding one slack variable to each of the three inequalities. Whatever you call them — x_3 to x_5, or s_1 to s_3 — their objective coefficients are all 0, so c^T = (1, 0, 0, 0, 0). The A matrix then has two parts: first the original columns for x_1 and x_2, and then the slack columns, which together form an identity matrix.

Let's solve the primal program and obtain a dual optimal solution from it. After a few iterations of the simplex method, we reach an optimal tableau in which x_1, x_2, and x_5 form the basis, with x_1 = 3, x_2 = 2, and x_5 = 1 (you may verify this yourself). The primal optimal solution is therefore (3, 2). Why do we write down only 3 and 2? Because these are the original variables; we exclude the slack variables since they do not exist in the original program. The associated objective value is z* = 3.

Now we are done with solving the primal. Next, we will show how to solve the dual without actually solving another linear program. The fact we rely on is, of course, c_B^T A_B^{-1}.
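To make the primal side concrete, here is a minimal sketch of solving this example numerically, assuming `scipy` is available. `scipy.optimize.linprog` minimizes by default, so we minimize -x_1; the variable bounds default to x >= 0, matching the standard-form assumption.

```python
# Solve:  max x_1  s.t.  2x_1 - x_2 <= 4,  2x_1 + x_2 <= 8,  x_2 <= 3,  x >= 0
import numpy as np
from scipy.optimize import linprog

c = [-1, 0]                  # maximize x_1  ==  minimize -x_1
A_ub = [[2, -1],             # 2x_1 -  x_2 <= 4
        [2,  1],             # 2x_1 +  x_2 <= 8
        [0,  1]]             #         x_2 <= 3
b_ub = [4, 8, 3]

res = linprog(c, A_ub=A_ub, b_ub=b_ub)   # bounds default to x >= 0
print(res.x)                 # primal optimal solution, (3, 2) as in the tableau
print(-res.fun)              # optimal value z* = 3
```

This only confirms the tableau's answer; the point of the lecture is that the dual optimum then comes for free, without a second solve.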
From c we get c_B: the basis is {x_1, x_2, x_5}, i.e., columns 1, 2, and 5, so c_B^T = (1, 0, 0). A_B is obtained the same way: collect columns 1, 2, and 5 of A to get

A_B = [ 2  -1  0 ;  2  1  0 ;  0  1  1 ].

Our claim is that y-bar^T = c_B^T A_B^{-1} is a dual optimal solution. Let's see whether that is really the case. You may verify this calculation yourself:

A_B^{-1} = [ 1/4  1/4  0 ;  -1/2  1/2  0 ;  1/2  -1/2  1 ],

so c_B^T A_B^{-1} = (1/4, 1/4, 0). Given this y-bar, we may first check that it is indeed dual feasible. The dual has two constraints, and y-bar satisfies both: 2(1/4) + 2(1/4) = 1 >= 1 and -1/4 + 1/4 + 0 = 0 >= 0, and y-bar >= 0. Its dual objective value is w = 4(1/4) + 8(1/4) + 3(0) = 3, which is exactly z*. This verifies that y-bar is indeed dual optimal: it is dual feasible, and its objective value equals the primal optimal objective value.

The interesting thing is that you do not even need to know that x-bar is optimal. As long as you can find a primal feasible solution and a dual feasible solution whose objective values are equal, you have shown that y-bar is dual optimal — no optimality proof for x-bar is required.

So this example shows two things. First, c_B^T A_B^{-1} really gives you a dual optimal solution. Second, with strong duality we can verify whether a given solution is optimal without running the simplex method and without checking any other conditions: strong duality itself gives us a way to certify optimality.
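The recipe above can be sketched in a few lines of numpy, assuming the standard-form data from the example and the optimal basis {x_1, x_2, x_5} read off the tableau:

```python
# Recover the dual optimum via  y^T = c_B^T A_B^{-1}
import numpy as np

A = np.array([[2, -1, 1, 0, 0],    # constraint rows with slacks x_3, x_4, x_5
              [2,  1, 0, 1, 0],
              [0,  1, 0, 0, 1]])
c = np.array([1, 0, 0, 0, 0])      # objective: maximize x_1
b = np.array([4, 8, 3])

B = [0, 1, 4]                      # basis columns for x_1, x_2, x_5
y = c[B] @ np.linalg.inv(A[:, B])  # y^T = c_B^T A_B^{-1}
print(y)                           # (1/4, 1/4, 0)

# dual feasibility: y^T A >= c^T componentwise, and y >= 0
print(np.all(y @ A >= c - 1e-9) and np.all(y >= 0))
# strong duality: w = y^T b should equal z* = 3
print(y @ b)
```

Note that checking y^T A >= c^T over all five columns covers both original dual constraints and the nonnegativity conditions coming from the slack columns, which is why no separate per-constraint check is needed.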