In this video, we'll discuss change of basis. Recall that, given a complete orthonormal basis set {|a_n⟩}, the matrix representation of an operator X has elements ⟨a_m|X|a_n⟩: you take a bra of one basis ket, multiply by the operator, and multiply by a ket of another basis vector. If you compute these matrix elements and arrange them in matrix form, that is the matrix representation of the operator X in the basis {|a_n⟩}. You can also express an arbitrary ket vector in column vector form, with components ⟨a_n|α⟩, and an arbitrary bra vector in row vector form.

Now, we define a unitary transformation, which allows us to transform one basis set into another. A convenient choice of basis set is the set of eigenvectors of a Hermitian operator, because we already know that these eigenvectors form a complete orthonormal basis set. In general, there are many choices of basis set. If there are two complete orthonormal sets of vectors, {|a_i⟩} and {|b_i⟩}, they are related to each other by a unitary transformation, which we define as

U = Σ_n |b_n⟩⟨a_n|.

It is the outer product of the ket vectors of one basis set, |b_n⟩, with the bra vectors of the other basis set, ⟨a_n⟩, summed over all basis vectors. That gives you a matrix, or an operator, U. It is easy to see from this definition that applying U to |a_i⟩ gives |b_i⟩; it is straightforward.

Furthermore, U is unitary. What do I mean by unitary? The Hermitian adjoint of the operator U is equal to its inverse. That is also very easy to show. If you multiply U† (the Hermitian adjoint of U) by U, then from the definition, where the adjoint just switches a and b, you get U†U = Σ_m Σ_n |a_m⟩⟨b_m|b_n⟩⟨a_n|. Using the associativity axiom once again, you evaluate the inner product between the b vectors first: because {|b_n⟩} is an orthonormal basis set, ⟨b_m|b_n⟩ is zero unless m = n, which turns the double sum into the single sum Σ_n |a_n⟩⟨a_n|. We already know that this is the identity operator. This shows that U† = U⁻¹; therefore, U is a unitary operator.

Now, using the definition of matrix elements, we can construct the matrix representation of the unitary transformation operator U in the basis set {|a_n⟩}: U_mn = ⟨a_m|U|a_n⟩. But U|a_n⟩ is just |b_n⟩, so the matrix representation of U in the {|a_n⟩} basis is U_mn = ⟨a_m|b_n⟩.

Now, consider an arbitrary ket |α⟩, and let's express it in the old basis set; by old we mean {|a_n⟩}. That is |α⟩ = Σ_n |a_n⟩⟨a_n|α⟩, where ⟨a_n|α⟩ is the coefficient and |a_n⟩ is the basis ket. Now multiply both sides of this equation by the bra ⟨b_m|. The left-hand side is simply ⟨b_m|α⟩, and the right-hand side becomes Σ_n ⟨b_m|a_n⟩⟨a_n|α⟩. But ⟨b_m| is equal to ⟨a_m|U†, from the definition of the unitary transformation, so ⟨b_m|α⟩ = Σ_n ⟨a_m|U†|a_n⟩⟨a_n|α⟩. Here ⟨a_m|U†|a_n⟩ is a matrix element of U†, and the ⟨a_n|α⟩ form the column vector of |α⟩ in the old basis set {|a_n⟩}. The left-hand side is nothing but the column vector representation of |α⟩ using {|b_m⟩} as the basis. This shows that you can transform the vector representation of |α⟩ in the {|a_n⟩} basis into the vector representation of the same ket in the {|b_m⟩} basis using the unitary matrix; in compact form, α⁽ᵇ⁾ = U†α⁽ᵃ⁾.
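As a quick numerical check of these relations (not part of the lecture itself), here is a minimal NumPy sketch. It assumes two randomly generated orthonormal bases of ℂ^d, with the basis kets stored as matrix columns; it builds U = Σ_n |b_n⟩⟨a_n| from outer products and verifies that U|a_i⟩ = |b_i⟩, that U†U = 1, and that a ket's column vector transforms with U†.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 3

def random_basis(d):
    # QR decomposition of a random complex matrix gives a unitary matrix,
    # so its columns form an orthonormal basis of C^d.
    z = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    q, _ = np.linalg.qr(z)
    return q

A, B = random_basis(d), random_basis(d)   # columns are |a_n> and |b_n>

# U = sum_n |b_n><a_n|  (outer product of ket |b_n> with bra <a_n|)
U = sum(np.outer(B[:, n], A[:, n].conj()) for n in range(d))

# U maps each |a_i> to |b_i>, and U^dagger U = 1, so U is unitary.
assert np.allclose(U @ A, B)
assert np.allclose(U.conj().T @ U, np.eye(d))

# Matrix elements in the a_n basis: U_mn = <a_m|U|a_n> = <a_m|b_n>.
U_a = A.conj().T @ U @ A
assert np.allclose(U_a, A.conj().T @ B)

# Components of an arbitrary ket |alpha> transform with U^dagger:
# <b_m|alpha> = sum_n <a_m|U^dagger|a_n><a_n|alpha>.
alpha = rng.normal(size=d) + 1j * rng.normal(size=d)
alpha_a = A.conj().T @ alpha    # column vector in the old (a) basis
alpha_b = B.conj().T @ alpha    # column vector in the new (b) basis
assert np.allclose(alpha_b, U_a.conj().T @ alpha_a)
```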
Now, similarly, you can construct the matrix representation, the matrix elements, of an operator A using {|b_n⟩} as the basis: ⟨b_m|A|b_n⟩. Here, what you can do is insert the outer product Σ_k |a_k⟩⟨a_k|. What is that? That's the identity operator, if you remember. Σ_l |a_l⟩⟨a_l| is another identity operator, so you can insert an identity operator on each side of A:

⟨b_m|A|b_n⟩ = Σ_k Σ_l ⟨b_m|a_k⟩⟨a_k|A|a_l⟩⟨a_l|b_n⟩.

Notice that ⟨a_k|A|a_l⟩ in the middle is nothing but the matrix representation of the operator A in the {|a_n⟩} basis, while ⟨b_m|a_k⟩ and ⟨a_l|b_n⟩ are the matrix elements of U† and U in the same {|a_n⟩} basis. What do we have here? The matrix representation of the operator A in the new basis set is equal to the matrix representation of the same operator in the old basis set, multiplied from the right by the unitary transformation U and from the left by U†: A⁽ᵇ⁾ = U†A⁽ᵃ⁾U. This type of transformation is called a similarity transformation. We can see that an operator is transformed from one basis set to another through a similarity transformation.

Now, one important quantity that is preserved under a similarity transformation is the trace. The trace is defined as the sum of all the diagonal elements: tr(A) = Σ_n ⟨a_n|A|a_n⟩. If you express the operator A in the {|a_n⟩} basis and sum all the diagonal elements, that's the trace, and that trace happens to be preserved: even as you transform the operator into the {|b_n⟩} basis, the trace remains the same. You can work through this algebra yourself.

Now, unitary transformation is fundamentally related to the problem of finding eigenvalues and eigenvectors. Consider an operator B for which we want to find the eigenvalues b_n and eigenkets |b_n⟩ that satisfy the eigenvalue equation B|b_n⟩ = b_n|b_n⟩. This is a standard equation that we encounter in many cases in quantum mechanics; the time-independent Schrödinger equation, for example, is an eigenvalue equation. Take this eigenvalue equation for the operator B, insert the identity operator, once again Σ_k |a_k⟩⟨a_k|, and then multiply both sides by the bra ⟨a_m| from the left:

Σ_k ⟨a_m|B|a_k⟩⟨a_k|b_n⟩ = b_n ⟨a_m|b_n⟩.

This equation is the matrix representation of the eigenvalue equation in the {|a_n⟩} basis: ⟨a_m|B|a_k⟩ is the matrix representation of the operator B in the {|a_n⟩} basis, and ⟨a_k|b_n⟩ and ⟨a_m|b_n⟩ are the vector representation of |b_n⟩ in the {|a_n⟩} basis. Now, the eigenvalues can be found from the secular equation; this is the standard technique for solving a matrix eigenvalue equation. You take B − λI, where I is the identity matrix, set its determinant to zero, det(B − λI) = 0, and solve for λ. The solutions λ of this equation are your eigenvalues. Once the eigenvalues are found, you plug each one back into the eigenvalue equation, and you will be able to find the corresponding eigenvector. This is how you solve a matrix eigenvalue equation, and the eigenvectors in the {|a_n⟩} basis are simply the columns of the unitary transformation matrix: if you recall from the previous slide, the column vector representation of the ket |b_n⟩ in the {|a_n⟩} basis is simply the n-th column of the unitary operator's matrix in the {|a_n⟩} basis. Now express the same operator B in the new basis set {|b_n⟩}, where the |b_n⟩ happen to be the eigenvectors of the operator B, by taking ⟨b_m|B|b_n⟩ and inserting the identity operator on each side of B.
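The similarity transformation and the trace invariance can also be checked numerically. Here is a minimal sketch, again assuming randomly generated orthonormal bases stored as matrix columns (an illustrative setup, not the lecture's example):

```python
import numpy as np

rng = np.random.default_rng(1)
d = 3

def random_basis(d):
    z = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    q, _ = np.linalg.qr(z)
    return q

A_basis, B_basis = random_basis(d), random_basis(d)
U = B_basis @ A_basis.conj().T          # sum_n |b_n><a_n| in matrix form

# An arbitrary operator, and its matrix in the a_n basis: <a_m|Op|a_n>.
Op = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
Op_a = A_basis.conj().T @ Op @ A_basis

# Similarity transformation: the matrix of the same operator in the b_n
# basis is U^dagger (old matrix) U, with U written in the a_n basis.
U_a = A_basis.conj().T @ U @ A_basis
Op_b = U_a.conj().T @ Op_a @ U_a

# Direct computation <b_m|Op|b_n> agrees with the similarity transform.
assert np.allclose(Op_b, B_basis.conj().T @ Op @ B_basis)

# The trace (sum of diagonal elements) is preserved.
assert np.isclose(np.trace(Op_a), np.trace(Op_b))
```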
Then what this equation shows is that the middle factor ⟨a_k|B|a_l⟩ is the matrix representation of the same operator B in the old basis set {|a_n⟩}, and the factors on either side are the unitary transformation matrices U† and U. Because the |b_n⟩ are eigenkets of B, B|b_n⟩ simply gives b_n|b_n⟩, and because |b_m⟩ and |b_n⟩ are orthogonal to each other, ⟨b_m|b_n⟩ gives δ_mn. The resulting matrix, ⟨b_m|B|b_n⟩ = b_n δ_mn, is simply a diagonal matrix whose diagonal elements are the eigenvalues. If you set up a matrix using the operator's own eigenvectors as the basis, you always get this diagonal matrix with the eigenvalues on the diagonal, and you achieve that by taking the old matrix and applying a similarity transformation using the unitary transformation matrix that transforms the old basis set into the new basis set made of the operator's own eigenvectors. In this sense, you will often hear people say that they are going to diagonalize a matrix. What they usually mean is that they are going to solve the eigenvalue equation: when you solve the eigenvalue equation and use the newfound eigenvectors as your basis set, your matrix is diagonal. Diagonalizing a matrix is therefore mathematically equivalent to solving the eigenvalue equation, and people use these two terms interchangeably for this reason.
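To illustrate this equivalence numerically, here is a short sketch under the assumption that B is Hermitian (so its eigenvectors form an orthonormal basis, as the lecture requires, and numpy.linalg.eigh applies). The eigensolver returns exactly the unitary matrix whose columns are the eigenvectors in the old basis, the eigenvalues satisfy the secular equation, and the similarity transformation with that matrix diagonalizes B:

```python
import numpy as np

rng = np.random.default_rng(2)
d = 3

# A Hermitian operator B, written in the original a_n basis.
z = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
B = (z + z.conj().T) / 2

# Solving the eigenvalue equation B|b_n> = b_n|b_n>: eigh returns the
# eigenvalues and a unitary matrix whose n-th column is |b_n> expressed
# in the a_n basis -- exactly the columns of the transformation U.
evals, U = np.linalg.eigh(B)

# Each eigenvalue solves the secular equation det(B - lambda*I) = 0.
assert all(np.isclose(np.linalg.det(B - lam * np.eye(d)), 0, atol=1e-9)
           for lam in evals)

# Re-expressing B in its own eigenbasis diagonalizes it:
# U^dagger B U = diag(b_1, ..., b_d).
B_diag = U.conj().T @ B @ U
assert np.allclose(B_diag, np.diag(evals))
```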