Imagine that we have some vector phi, and we also have the representation of phi in some orthonormal basis E with vectors e1, e2, and so on up to en, and components alpha1, alpha2, and so on. For some reason we are very interested in the representation this vector phi will have in another basis, say S, with vectors s1, s2, and so on.

Let's represent phi as a sum of its components multiplied by the vectors of the basis E. Inside this sum we can place the identity operator, since the identity changes nothing. And as we remember, the identity operator can be replaced by a sum of projectors onto some orthonormal basis; here I choose the basis S. This last step is justified by the closure relation we discussed in the previous video. If you observe this expression carefully, you'll find that there are no more vectors of basis E in it. They went into the scalar products, where they contribute scalar coefficients. The only vectors that remain vectors are those of the basis S.

It looks good, but we are going to rewrite it even better, and represent the whole thing as a column vector, but now in the basis S. This expression is still not good enough, so I decided to decompose it a bit more and represent it as a matrix multiplication of this nice-looking matrix and the initial vector phi. You can check that I did everything right by going backwards through this reasoning: you multiply this matrix by the column phi, you obtain the column vector from above, which is the matrix representation of the expression above it, and so on.

Good. Now, why did we do all that? Because now I have a convenient way of representing any vector in the basis S. If I have the column representation of some vector in the basis E, I can just apply this matrix and obtain the column representation in the basis S. So this matrix is my basis change matrix. I will use the capital letter U to denote it. Now, what if we want to do the reverse operation?
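As a small sketch of the construction above, here is the basis change matrix built in NumPy, with entries given by the scalar products of the new basis vectors with the old ones. The two-dimensional bases and the 45-degree angle are my own illustrative choices, not from the lecture:

```python
import numpy as np

# E is the standard basis; S is E rotated by 45 degrees (an illustrative choice).
e1, e2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
theta = np.pi / 4
s1 = np.array([np.cos(theta), np.sin(theta)])
s2 = np.array([-np.sin(theta), np.cos(theta)])

# Basis change matrix: row i holds the scalar products <s_i, e_j>.
U = np.array([[s1 @ e1, s1 @ e2],
              [s2 @ e1, s2 @ e2]])

phi_E = np.array([1.0, 1.0])   # components of phi in the basis E
phi_S = U @ phi_E              # components of the same vector phi in the basis S
print(phi_S)                   # -> approximately [1.4142, 0], since phi points along s1
```

Applying U to the E-representation of any vector gives its S-representation, exactly as in the derivation above.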
Suppose we have our vector c in the S representation and we want to obtain the components of c in the E representation; let's assume we forgot them. Since we have this basis conversion matrix, it is really easy to do. But before applying it to the vector c, we first need to calculate its adjoint, that is, to transpose it and conjugate its components. If you do so, you may notice that U*, the adjoint of U, is also its inverse. If you multiply U* by U, from the left or from the right, it doesn't matter, you obtain the identity operator. It means that U* cancels the action of U, just as U cancels the action of U*. And if we applied U to some vector to obtain its representation in the basis S, then U* reverses this operation, and we obtain the initial representation in the basis E.

Operators whose adjoint is also their inverse are called unitary operators. As we just discovered, they transform an orthonormal basis into an orthonormal basis. They also don't change the lengths of vectors, and they preserve angles. Good examples of such operators are rotations and reflections.

Now, an important point about this type of operator is this: any evolution of a quantum system, except for measurement, is described by a unitary operator. It means that when we are going to perform computations on our quantum states, that is, to modify them, we are only allowed to perform unitary transforms. We cannot apply just any linear operator we want. All quantum gates we can implement are unitary. This is quite a strong restriction. Sometimes there is a very strong desire to implement some gate which is not unitary, for example, to copy an unknown quantum state, but we just can't. It is a fundamental law and we cannot break it, no matter how much we want to. Physically, this restriction means that one cannot distinguish the evolution of a quantum system from the evolution of its environment. A basis change, and that is what the system does when it evolves, can be performed outside of the system.
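The defining property of a unitary operator, and the recovery of the E components via the adjoint, can be checked numerically. This is a minimal sketch reusing a rotation matrix as the basis change U; the angle and the test vector are my own assumptions:

```python
import numpy as np

theta = np.pi / 4
U = np.array([[np.cos(theta), np.sin(theta)],
              [-np.sin(theta), np.cos(theta)]])

U_star = U.conj().T                          # the adjoint of U (here real, so just the transpose)
assert np.allclose(U_star @ U, np.eye(2))    # U* U = I
assert np.allclose(U @ U_star, np.eye(2))    # U U* = I: left or right, it doesn't matter

c_S = np.array([np.sqrt(2), 0.0])            # components of c in the basis S
c_E = U_star @ c_S                           # U* reverses the basis change
print(c_E)                                   # -> approximately [1, 1]

# Unitary operators preserve lengths.
assert np.isclose(np.linalg.norm(c_S), np.linalg.norm(c_E))
```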
For example, if you consider a qubit encoded in a photon's polarization, then it does not matter whether we rotate the photon's polarization axis by an angle theta or we rotate our analyzer by an angle minus theta. The result of both of these actions is the same.

Now, what about the quantum gates we met in the previous episode? Can we use them in quantum computing? Are they unitary? Yes, they are. You can easily check it yourselves by computing the adjoints and performing some matrix multiplications. You may even notice that all these quantum gates are self-adjoint. But I don't want to steal your pleasure of doing this exercise. Instead, I want to introduce you to one more unitary operator. People know it by different names: some call it CX, some controlled NOT, and we're going to call it CNOT. As always, this new operator is extremely important; in this course I don't even mention unimportant things. First of all, it has a 4 by 4 matrix, which means that it acts in the state space of two qubits. And second, it modifies the value of the second qubit based on the value of the first qubit: it flips qubit 2 only if qubit 1 is in the state one.

As you may remember, I already mentioned this operator when I explained why we need entangling operators, operators which in some sense make qubits interact. Without these kinds of unitary transforms, we can only perform trivial computations. So this quantum gate is our gate to the world of almost unlimited computing power. And, unfortunately, this is the gate whose physical implementation remains the most challenging task for physicists and engineers nowadays. So as an exercise, check its unitarity. And if you want more exercises, find its eigenvalues and eigenvectors. That's enough for this week. Goodbye.
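The CNOT gate described above can be written out and checked directly. A minimal sketch, using the standard 4-by-4 matrix of CNOT in the computational basis |00>, |01>, |10>, |11> (the basis ordering is the conventional one, assumed here):

```python
import numpy as np

# CNOT in the computational basis |00>, |01>, |10>, |11>:
# it flips the second qubit only when the first qubit is |1>.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

ket10 = np.array([0, 0, 1, 0])   # the state |10>
print(CNOT @ ket10)              # -> [0, 0, 0, 1], i.e. |11>: qubit 2 was flipped

ket01 = np.array([0, 1, 0, 0])   # the state |01>
print(CNOT @ ket01)              # -> [0, 1, 0, 0]: unchanged, qubit 1 is |0>

# The exercise from the lecture: CNOT is unitary, and in fact self-adjoint.
assert np.allclose(CNOT.conj().T @ CNOT, np.eye(4))
assert np.allclose(CNOT, CNOT.conj().T)
```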