Finally, we are ready to talk about linear programs. A mathematical program, as we mentioned, may in general be expressed this way: we have m inequality constraints, we are minimizing an objective function, and our decision variables may be collected into a decision vector x. This particular mathematical program is a linear program, an LP, if all these functions are linear functions. Strictly speaking, linearity has a rigorous definition in terms of linear mappings and so on, but we will skip those rigorous parts. Basically, if a function is linear, it may be expressed in the following way: all your decision variables enter the function additively, each with a coefficient in front of it. There is a multiplication between a coefficient and a variable, but you never see two variables multiplying or dividing each other; there is no x_1 squared, no square root, no sine, cosine, or tangent. You only see additive terms. If that is the case, the linear function may be expressed in sigma notation, which I hope everybody is familiar with: f(x) = sum_{j=1}^{n} a_j x_j, which simply means a_1 x_1 + a_2 x_2 + ... + a_n x_n. All these a_j are real values; they are the coefficients of this particular linear function. We may collect a_1 to a_n into a column vector a, and in that case we may also express f(x) as a^T x. Once you transpose the column vector a, it becomes a row vector; x is a column vector, and since both have n components they may be multiplied together, giving the inner product, which is indeed exactly a_1 x_1 + a_2 x_2 + ... + a_n x_n. We will verify this equivalence with a small numerical sketch below.

A very quick example, shown at the right, is that you want to minimize x_1 + x_2 subject to linear constraint 1, linear constraint 2, and some sign constraints; you may check that a sign constraint is linear by definition. This is a very natural linear program, and we will solve a hypothetical instance of it below.

In general, any linear program may be expressed in the following way. We are trying to minimize a linear function, which we may write as c_1 x_1 + c_2 x_2 + ... + c_n x_n, or in summation notation as sum_{j=1}^{n} c_j x_j. The c_j are the objective coefficients; we call them that because they are the coefficients in the objective function. We may also express each constraint in the same fashion: the left-hand side is always a sum of terms, a_i1 x_1 + a_i2 x_2 + ... + a_in x_n. It is just that now we have several constraints, so the coefficients must be distinguished between constraints with a second index: a_ij is the coefficient of x_j in constraint i. Lastly, b_i is the right-hand side value for constraint i. This is a general way to express a linear program, and we will use it from time to time.

If you like vectors, then we know the objective summation is nothing but c^T x, where c is a column vector of the objective coefficients. Each constraint may likewise be expressed as A_i^T x <= b_i, where A_i is again a column vector collecting all the coefficients you need for that particular constraint.
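To make the equivalence between the summation and the inner product concrete, here is a minimal numpy sketch; the coefficient vector a and the point x below are made-up numbers, not taken from the lecture.

```python
import numpy as np

# Made-up coefficients a_1, a_2, a_3 and a candidate decision vector x.
a = np.array([2.0, -1.0, 3.0])
x = np.array([1.0, 4.0, 0.5])

# The term-by-term sum a_1 x_1 + a_2 x_2 + a_3 x_3 ...
explicit = sum(a[j] * x[j] for j in range(len(a)))
# ... equals the inner product a^T x.
inner = a @ x

print(explicit, inner)  # both are -0.5
```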
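As for the quick example, the lecture leaves the two linear constraints unspecified, so the sketch below fills them in with hypothetical numbers purely for illustration. It solves minimize x_1 + x_2 with scipy's linprog, whose default bounds already enforce the sign constraints x_1, x_2 >= 0.

```python
from scipy.optimize import linprog

c = [1.0, 1.0]            # objective: minimize x_1 + x_2

# Hypothetical stand-ins for "linear constraint 1" and "linear constraint 2",
# written in scipy's A_ub @ x <= b_ub form:
#   x_1 + 2 x_2 >= 4  becomes  -x_1 - 2 x_2 <= -4
#   3 x_1 +  x_2 >= 6  becomes  -3 x_1 -  x_2 <= -6
A_ub = [[-1.0, -2.0],
        [-3.0, -1.0]]
b_ub = [-4.0, -6.0]

res = linprog(c, A_ub=A_ub, b_ub=b_ub)  # default bounds are x_j >= 0
print(res.x, res.fun)                   # optimum (1.6, 1.2) with value 2.8
```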
Lastly, if you not only like vectors but also like matrices, this particular set of constraints may be expressed as a single matrix inequality, Ax <= b. In this case, A is really a collection of the coefficients a_11, a_12, a_13, a_14 in its first row, a_21, a_22, a_23, a_24 in its second row, and so on. Suppose we only have these eight numbers; that means we have two constraints and four variables. A is then multiplied with the column vector x, whose components are x_1 up to x_4, and the result should be less than or equal to b_1 and b_2 componentwise. The inner product of the first row of A with x gives you an inequality with respect to b_1, and the inner product of the second row with x gives you an inequality with respect to b_2. Altogether, that is simply a matrix inequality, and with that matrix inequality you are done expressing your linear program.

We will stop our general description of linear programs here. Pretty much what we have done is give several ways to express a general linear program: either we write down all the numbers explicitly, or we utilize vectors, or we utilize matrices. While we are still beginners, maybe we are more familiar with the first way, but from time to time we will gradually get used to the vector way or even the matrix way. Hopefully, at least you understand they are all equivalent; they are just different ways to write down a single formulation.
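Here is a short numpy sketch of that matrix inequality with two constraints and four variables; all eight coefficients, the right-hand sides, and the candidate point are made up for illustration.

```python
import numpy as np

A = np.array([[1.0, 2.0, 0.0, 1.0],   # row 1: a_11 ... a_14
              [3.0, 0.0, 1.0, 2.0]])  # row 2: a_21 ... a_24
b = np.array([10.0, 12.0])            # right-hand sides b_1, b_2
x = np.array([1.0, 2.0, 3.0, 1.0])    # a candidate decision vector

lhs = A @ x      # row i of A dotted with x is the left-hand side of constraint i
print(lhs)       # [6. 8.]
print(lhs <= b)  # [ True  True]: x satisfies the matrix inequality Ax <= b
```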