We are talking about linear programming. Linear programming is the process of formulating and solving linear programs; both "linear programming" and "linear program" are abbreviated as LP, and from the context you will see whether we mean the process or a program. A linear program is a special type of mathematical program with certain linearity properties, so let's first introduce some concepts of mathematical programs before we narrow down to linear programs. Each mathematical program is composed of three basic elements: the objective function, the constraints, and the so-called decision variables. Basically, a program is about making decisions, so there must be some variables, some quantities that we want to decide: the production quantity, the price of your product, the inventory level, and so on. These x1, x2, up to xn are all real numbers, and we call them the decision variables, the variables whose values you want to choose. Then you have an objective function: you want to minimize some quantity, for example the total cost of making your products. In that case your objective function f converts your decisions x1, x2, up to xn into a single quantity; for example, x1 and x2 may be the production quantities for product 1 and product 2, and f calculates the total cost you need to pay. Finally, there are several constraints: gi of these x values must be less than or equal to bi. Typically that means your decisions are converted into some quantity, like the amount of a particular resource consumed, or the amount of time you need to spend to do all the production, and there are constraints you need to satisfy. You cannot do whatever you like, because resources are limited, time is limited, or there are regulations.
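Putting the three elements together, a generic mathematical program in the notation of this lecture can be sketched as:

```latex
\min_{x_1, \dots, x_n} \; f(x_1, \dots, x_n)
\quad \text{s.t.} \quad g_i(x_1, \dots, x_n) \le b_i, \quad i = 1, \dots, m
```

Here f is the objective function, the gi with their bounds bi are the constraints, and x1 through xn are the decision variables.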
Technically, we typically use m and n to denote the number of constraints and the number of variables, respectively. Not always, but typically we do that. So when we say there are n variables, we may call them x1, x2, up to xn. As we mentioned, they are real-valued decision variables. We don't talk about complex or imaginary values, because we basically want to apply all this to practice: inventory decisions, production decisions, pricing decisions, investment decisions, and so on. All of these are real values. In some cases we may want to abbreviate the notation, so we may write x as the collection of all these decision variables. In that case we say x is a vector of decision variables, or a decision vector. Here I need to spend some time on notation. When you see values or variables put into a pair of square brackets, the square brackets mean a matrix. So when you put things into a matrix like this, you know it is, say, a 3-by-2 matrix, and if you only have one column it becomes a 3-by-1 column vector. Whatever you put inside a pair of square brackets, the outcome is a matrix; whether it is a column vector or a row vector is up to you. When we talk about vectors in the field of OR, by default all vectors are column vectors. So when we want to create a vector x, we put all the x variables into a column, so that it forms a column vector. But if you always write vectors as columns, it is going to waste a lot of space, and sometimes it is not so easy to read. So from time to time we may write them inside a pair of parentheses instead. When you see something written in parentheses, you know we are talking about a vector, and by convention it is still a column vector. So the two notations that we provide are actually identical.
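As a small illustration of this convention (using a three-variable vector as an assumed example), the square-bracket form and the parenthesized form denote the same column vector:

```latex
x = \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = (x_1, x_2, x_3)
```

The square brackets force the column layout on the page; the parentheses are just a space-saving way to write the same thing in a line of text.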
If the variables are put into square brackets, you need to arrange them into a column so that you have a column vector. But when you put them into a pair of parentheses, typically we write them like a row, yet it is still a column vector. The functions f and gi all convert decision variables into a quantity you care about: the amount of resource consumed, the number of people you need to hire, the total cost you need to pay, the total amount of money you earn, something like that. They are all, again, real-valued functions. Finally, all these variables are real-valued, and this is the default, so typically we don't write down "xj is in the set of real numbers", because it is somewhat redundant. This may be the last page of specific notation for you. We mentioned that, in general, we may write things as a minimization problem subject to inequality constraints whose direction is less than or equal to. Why is this general? Because if you have a maximization objective, say you want to maximize your total revenue, mathematically that is equivalent to minimizing the negation of your total revenue. So you don't need to study how to solve maximization problems; you just need to learn how to solve minimization problems, because every maximization problem may be reformulated as a minimization problem. Then how about equality or greater-than-or-equal-to constraints? They may both be expressed by one or several less-than-or-equal-to constraints. If you say 3 is greater than 2, that just means negative 3 is less than or equal to negative 2, all right? So when you have a greater-than-or-equal-to constraint, all you need to do is flip the signs so that you get negative gi less than or equal to negative bi. Pretty much they are the same, all right?
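The transformations just described can be summarized in three rules. Maximizing f has the same optimal solutions as minimizing its negation (the optimal values differ only by a sign), and each inequality or equality can be rewritten using only the less-than-or-equal-to direction:

```latex
\max f(x) \;\leftrightarrow\; \min\, -f(x), \qquad
g_i(x) \ge b_i \;\Longleftrightarrow\; -g_i(x) \le -b_i, \qquad
g_i(x) = b_i \;\Longleftrightarrow\;
\begin{cases} g_i(x) \le b_i \\ -g_i(x) \le -b_i \end{cases}
```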
So you don't need to specifically worry about greater-than-or-equal-to constraints; you only need to focus on less than or equal to. For equalities, it is pretty much the same thing. If you say a equals b, then a is less than or equal to b and a is greater than or equal to b at the same time. So you replace one equality constraint with two inequality constraints, and of course the greater-than-or-equal-to one may in turn be rewritten as a less-than-or-equal-to constraint. As an example, suppose you want to maximize x1 minus x2; you just minimize negative x1 plus x2, flipping the signs. Then if you are subject to a constraint saying that some combination of the variables must be greater than or equal to negative 3, it is replaced by an equivalent less-than-or-equal-to constraint: your negative 3 becomes 3, your negative 2 becomes 2, your 1 becomes negative 1. You just flip the signs. Lastly, if you have an equality constraint, you replace it with two less-than-or-equal-to constraints, one for each direction. So, pretty much, you don't need to worry too much about maximization, greater-than-or-equal-to, or equality constraints, because they may all be converted, transformed into the specific format that we care about.
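The whole procedure is mechanical, so it can be sketched in a few lines of code. This is a minimal illustration, not a solver: the data layout (lists of coefficient rows) is an assumption made for this sketch, and the sample LP at the bottom uses the coefficients from the example above (maximize x1 minus x2 subject to a greater-than-or-equal-to constraint with coefficients negative 2, 1, and right-hand side negative 3).

```python
# Sketch: convert a maximization LP with ">=" and "=" constraints into
# the standard "minimize, subject to <=" form discussed in the lecture.

def to_standard_form(c_max, A_ge, b_ge, A_eq, b_eq):
    """max c_max^T x  s.t.  A_ge x >= b_ge,  A_eq x == b_eq
    becomes  min c^T x  s.t.  A x <= b; returns (c, A, b)."""
    c = [-cj for cj in c_max]            # max f  ->  min -f
    A, b = [], []
    for row, bi in zip(A_ge, b_ge):      # g(x) >= b  ->  -g(x) <= -b
        A.append([-a for a in row])
        b.append(-bi)
    for row, bi in zip(A_eq, b_eq):      # g(x) = b  ->  two <= rows
        A.append(list(row))
        b.append(bi)
        A.append([-a for a in row])
        b.append(-bi)
    return c, A, b

# The example above: maximize x1 - x2 subject to -2*x1 + x2 >= -3.
c, A, b = to_standard_form([1, -1], [[-2, 1]], [-3], [], [])
# c == [-1, 1], A == [[2, -1]], b == [3]: every sign is flipped,
# exactly as described in the text.
```

Note that an equality row produces two rows in the output, which matches the "replace one equality by two inequalities" rule.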