Up to this point, we considered vectors in a state space. Vectors represent qubits, which we use to store quantum data. But for real computations, we need more than just storage: we have to be able to modify the data, usually according to some plan called an algorithm. So we need an instrument that modifies vectors, or, from the physical point of view, one that modifies the physical state of a system. In general, an operator is a thing that maps vectors to vectors. Let H be some Hilbert space. Then an operator A is called a linear operator on H if, for any two vectors x and y and any two scalars alpha and beta, the action of A on alpha x plus beta y equals alpha Ax plus beta Ay. We already know that in quantum computing we deal with qubits, which means that the state spaces we consider are usually finite-dimensional. For a finite-dimensional vector space H, any operator A on that space can be represented as a square matrix. The action of an operator A on a vector x, in this case, is simple matrix multiplication: to obtain the resultant vector, we multiply the operator matrix by the vector column, and we obtain another vector column. Each component of this resultant column is obtained by multiplying the corresponding row of the matrix A with the column x. We can also view an operator as an ordered set of linear functionals: each row of the matrix represents some linear functional, and the operator's action on a vector is a vector whose components are the results of the corresponding linear functionals applied to this vector. Good. Now let's consider some linear operator A with its matrix representation. We are going to apply this operator to the first vector of the orthonormal basis, the one with a single one as the first component and with all other components being zeros. What are we going to obtain as the result of this operation?
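As a small sketch of what was just described, here is matrix-vector multiplication and a numerical check of linearity. The matrix A below is an arbitrary example chosen for illustration, not one from the lecture.

```python
import numpy as np

# An arbitrary linear operator on a 2-dimensional space, as a 2x2 matrix.
A = np.array([[1, 2],
              [3, 4]])

x = np.array([1, 0])
y = np.array([0, 1])
alpha, beta = 2, -1

# The action of A on a vector is plain matrix-vector multiplication.
Ax = A @ x

# Linearity: A(alpha*x + beta*y) == alpha*(A x) + beta*(A y)
lhs = A @ (alpha * x + beta * y)
rhs = alpha * (A @ x) + beta * (A @ y)
assert np.array_equal(lhs, rhs)
```

Each component of `Ax` is the dot product of the corresponding row of A with the column x, exactly as in the row-by-column description above.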
Well, it's pretty obvious that we obtain the vector which is identical to the first column of the matrix A; and for any other vector of this kind, with a single one in the k-th place, the action of A on this vector will result in the k-th column of the matrix. This gives us a very simple method for obtaining the matrix representation of any linear operator: if we know how an operator acts on this particular basis, then we can construct the matrix column by column, by sequentially applying the operator to the basis vectors. This method is worth remembering, especially if you want to take another course, the introduction to quantum computing. There are going to be exercises where you will have to construct the matrix of an operator according to some circuit element (a gate). If you understand how this element transforms the basis vectors, then you should be able to construct the operator it represents column by column. Linear operators in a Hilbert space form a special structure that mathematicians call an algebra. This means two things. First, the linear operators acting in a linear vector space also form a vector space themselves. Second, the operators can be multiplied, and the result of this multiplication is a linear operator in the same space. Let's consider this in more detail. We start with the notion of a linear vector space. For linear operators to form a vector space, we must properly define two operations: multiplication by a scalar and addition. Here on this slide you can see the definition of these two operations. The capital letters denote operators, while lowercase letters denote scalars. For an operator A and a number lambda, the action of the operator lambda A is the action of A followed by multiplication by lambda. The sum of two linear operators A and B is also a linear operator, A plus B. It acts on any vector x as if we applied A and B to it separately and then added up the results.
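The column-by-column method above can be sketched in a few lines. The helper name `matrix_of` is ours, and as a concrete operator we use the quantum NOT gate X, which swaps the two amplitudes of a qubit:

```python
import numpy as np

def X_action(v):
    # The NOT gate: swaps the amplitudes of |0> and |1>.
    return np.array([v[1], v[0]])

def matrix_of(op, dim):
    # Build the matrix of a linear operator from its action on the
    # standard basis: A e_k is the k-th column of the matrix.
    cols = []
    for k in range(dim):
        e_k = np.zeros(dim)
        e_k[k] = 1.0
        cols.append(op(e_k))
    return np.column_stack(cols)

X = matrix_of(X_action, 2)
```

Applying the operator to e_0 gives the first column (0, 1), and applying it to e_1 gives the second column (1, 0), so the reconstructed matrix is the familiar [[0, 1], [1, 0]].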
In the matrix representation, the matrix of A plus B is a matrix, each component of which is obtained by adding the corresponding components of A and B. The matrix of lambda A is the matrix of the operator A, each component of which is multiplied by lambda. Since we now have addition and multiplication by a scalar properly defined, we must also have a special element in this vector space which is neutral with respect to addition: the zero element, which is represented by the matrix consisting of all zeros. For any operator A, we can also define the operator minus A. The addition of A and minus A gives us this zero operator. This allows us to conclude that the matrix of minus A consists of the same elements as the matrix of A, but with a minus sign. It is easy to show that the sum operation is commutative and associative, and that it is distributive with respect to multiplication by a scalar. So the linear operators form a linear vector space indeed. Now, what about their multiplication? The product of two operators A and B is defined like this: when we apply the operator AB to some vector x, we first apply B, and then apply A to the result of the action of B. The operation defined like this has some very useful properties: it is associative and distributive with respect to addition. But it is very important to know and remember that, in general, linear operators do not commute. It means that the order of application matters in most cases: in general, the operator AB is not equal to BA. However, for some pairs of operators, AB does equal BA. To distinguish these two situations, we have a very simple instrument called the commutator. The commutator of two operators A and B is itself an operator, defined like this: whenever you see in a textbook on quantum mechanics this notation, two operators placed in square brackets, you must understand that this is just AB minus BA. If it happens that AB equals BA, then this expression is just the zero operator.
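A quick numerical illustration of non-commutativity and of the commutator, using the Pauli matrices X and Z (standard operators in quantum computing; the helper name `commutator` is ours):

```python
import numpy as np

X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])

def commutator(A, B):
    # [A, B] = AB - BA
    return A @ B - B @ A

# Order of application matters: XZ is not equal to ZX.
assert not np.array_equal(X @ Z, Z @ X)

# The commutator [X, Z] is therefore a nonzero operator.
C = commutator(X, Z)
```

Working it out by hand: XZ = [[0, -1], [1, 0]] while ZX = [[0, 1], [-1, 0]], so [X, Z] = [[0, -2], [2, 0]], which is not the zero matrix.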
It means that the operators A and B commute. From the physical point of view, these operators may represent observables; we will talk about this later. It is very important to know whether some pair of observables commutes or not. The commutator is a simple, yet very powerful and useful tool.
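To contrast with the non-commuting case, here is a pair of operators whose commutator does vanish: any two diagonal matrices commute. The matrix D below is an arbitrary example of ours.

```python
import numpy as np

def commutator(A, B):
    # [A, B] = AB - BA
    return A @ B - B @ A

Z = np.array([[1, 0], [0, -1]])
D = np.array([[3, 0], [0, 5]])   # an arbitrary diagonal operator

# Diagonal matrices commute, so the commutator is the zero operator.
ZD_comm = commutator(Z, D)
assert np.array_equal(ZD_comm, np.zeros((2, 2), dtype=int))
```

When the commutator is the zero matrix, the order of application does not matter; physically, such a pair of observables can be measured in either order.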