In this video we will introduce matrix representation. First we'll discuss Hermitian operators. Hermitian operators have three very important properties: their eigenvalues are real, their eigenvectors are mutually orthogonal, and the eigenvectors form a complete set. We're going to look at these properties one by one. First, let's suppose that a ket |a⟩ is a normalized eigenvector of a Hermitian operator A with eigenvalue a, so the eigenvalue equation is A|a⟩ = a|a⟩. Now, what we're going to do is multiply by the bra ⟨a| from the left. Since A|a⟩ equals a|a⟩, and the eigenvalue a is a scalar, you can take it outside, and you're left with the inner product of |a⟩ with itself, which is one because the ket, we said, is normalized. So this equation simply gives you the eigenvalue: ⟨a|A|a⟩ = a. Now, consider the complex conjugate of this matrix element, ⟨a|A|a⟩*. If you take a complex conjugate and distribute it, what you do is switch the bra and ket and take the Hermitian adjoint of the operator: ⟨a|A|a⟩* = ⟨a|A†|a⟩. Because the bra and ket are both a, switching them changes nothing. The middle operator is the Hermitian adjoint of A; but because A is a Hermitian operator, its Hermitian adjoint is equal to itself. So that's just A, which gives you the original eigenvalue a. So for an eigenvalue a, the complex conjugate is equal to itself: a* = a. What that means is that this scalar quantity is a real number. So the eigenvalue is real. Next, consider the matrix element of the operator A with respect to two different kets |a⟩ and |b⟩. Let's say |a⟩ and |b⟩ are both eigenvectors of A: |a⟩ satisfies the usual eigenvalue equation with eigenvalue a, and |b⟩ satisfies the same eigenvalue equation with eigenvalue b. Now we consider the matrix element of A between them, ⟨a|A|b⟩.
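As a quick numerical check of this first property, here is a minimal NumPy sketch; the 2×2 Hermitian matrix A is an illustrative choice, not one from the lecture:

```python
import numpy as np

# An illustrative Hermitian matrix (equal to its own conjugate transpose).
A = np.array([[2.0, 1.0 - 1.0j],
              [1.0 + 1.0j, 3.0]])
assert np.allclose(A, A.conj().T)

# eigh/eigvalsh are NumPy's routines for Hermitian matrices; the
# eigenvalues they return are real, as the proof guarantees.
vals, vecs = np.linalg.eigh(A)
eigenvalues = np.linalg.eigvalsh(A)

# Reproduce the key step of the proof: a = <a|A|a> for a normalized
# eigenvector |a>, and that number has no imaginary part.
a_ket = vecs[:, 0]
a = a_ket.conj() @ A @ a_ket
assert np.isclose(a.imag, 0.0)
```

The assertion at the end is exactly the statement a* = a from the argument above, evaluated numerically.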
So you multiply the bra ⟨a| from the left and the ket |b⟩ from the right. Because this operation is associative, we can first consider the operator A acting on |b⟩. From the eigenvalue equation that is equal to b|b⟩, so the matrix element equals the inner product ⟨a|b⟩ times the eigenvalue b. Alternatively, you can do the bra ⟨a| times the operator A first. That product is simply the dual of the eigenvalue equation: the dual of its right-hand side is a⟨a| (using the fact that a is real), so this route gives the eigenvalue a times the inner product ⟨a|b⟩. Now, these two quantities, b⟨a|b⟩ and a⟨a|b⟩, must be the same, because they are essentially the same quantity, and therefore (a − b)⟨a|b⟩ = 0. So as long as a and b are not equal, that is, they correspond to two different eigenvalues, the inner product of the corresponding eigenvectors |a⟩ and |b⟩ must be zero, which means the vectors are orthogonal to each other. So eigenvectors corresponding to different eigenvalues are orthogonal; that's what we just proved. What if there is degeneracy, meaning the eigenvalues a and b are equal to each other? If you have multiple eigenvectors associated with the same eigenvalue, we call that phenomenon degeneracy. When there is degeneracy, you can always choose mutually orthogonal vectors within the degenerate subspace. So even though we just proved orthogonality for the non-degenerate case, we can extend this orthogonality property to the degenerate case as well. Finally, completeness. We are going to define completeness and simply assert, without proof, that the eigenvectors of a Hermitian operator form a complete set. So let's say that the set of vectors {|a_n⟩} is complete.
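The orthogonality result can be checked the same way; the matrix below is again just an illustrative Hermitian example with two distinct eigenvalues:

```python
import numpy as np

# Illustrative Hermitian matrix; its eigenvalues (1 and 4) are distinct.
A = np.array([[2.0, 1.0 - 1.0j],
              [1.0 + 1.0j, 3.0]])

vals, vecs = np.linalg.eigh(A)   # columns of vecs are eigenvectors
a_ket, b_ket = vecs[:, 0], vecs[:, 1]

# Distinct eigenvalues: the argument (a - b)<a|b> = 0 forces <a|b> = 0.
inner = a_ket.conj() @ b_ket
assert abs(inner) < 1e-9
```

Note that `eigh` happens to return an orthonormal set even in the degenerate case, which mirrors the remark that orthogonal vectors can always be chosen within a degenerate subspace.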
If this set is complete, what that means is that it is possible to express any vector in the vector space as a linear combination of the vectors in this set. If we can do that, then the set is called complete. In bra-ket notation, take any arbitrary vector |α⟩; we can always express it as an expansion, a linear combination of the vectors |a_n⟩: |α⟩ = Σ_n c_n|a_n⟩. Here c_n is a scalar coefficient multiplying the particular eigenvector |a_n⟩, and the sum runs over all vectors in the basis set; that sum should give the vector in question, |α⟩. If we can do this for all vectors in the vector space, the set {|a_n⟩} is called complete. Now, if this set happens to be orthonormal, then the coefficient c_n is simply given by the inner product ⟨a_n|α⟩. Substituting this expression in place of c_n, we can rewrite the expansion for |α⟩ as |α⟩ = Σ_n |a_n⟩⟨a_n|α⟩. Notice we have switched the order: the inner product ⟨a_n|α⟩ is a scalar, so you can multiply by it from the left or from the right, and we have moved it to the right of the ket |a_n⟩. Now, here you have an outer product between |a_n⟩ and ⟨a_n|, which is a legal product, and you also have an inner product between ⟨a_n| and |α⟩, which is another kind of legal product. By the associativity axiom, you can do either multiplication first; it doesn't matter. If we choose to do the outer product first and define it as an operator, we can show that this outer product, summed over all n, simply turns out to be an identity operator. Why? Because the equation here says that |α⟩ is equal to this quantity times |α⟩.
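Here is a minimal sketch of the completeness relation and the expansion coefficients, using the orthonormal eigenvectors of an illustrative Hermitian matrix as the basis:

```python
import numpy as np

# Orthonormal basis {|a_n>}: eigenvectors of an illustrative Hermitian matrix.
A = np.array([[2.0, 1.0 - 1.0j],
              [1.0 + 1.0j, 3.0]])
_, vecs = np.linalg.eigh(A)

# Completeness relation: sum_n |a_n><a_n| = identity operator.
identity = sum(np.outer(vecs[:, n], vecs[:, n].conj()) for n in range(2))
assert np.allclose(identity, np.eye(2))

# Expansion of an arbitrary ket: c_n = <a_n|alpha>, alpha = sum_n c_n |a_n>.
alpha = np.array([1.0 + 2.0j, -0.5j])
c = np.array([vecs[:, n].conj() @ alpha for n in range(2)])
reconstructed = sum(c[n] * vecs[:, n] for n in range(2))
assert np.allclose(reconstructed, alpha)
```

The first assertion is the closure relation itself; the second verifies that the coefficients ⟨a_n|α⟩ really do rebuild the original vector.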
So there is an operator that, acting on a ket, simply produces the same ket, or in other words doesn't do anything to the ket. That kind of operator is called the identity operator. So we now have an expression for the identity operator I in terms of outer products of our basis kets: I = Σ_n |a_n⟩⟨a_n|. If this condition is satisfied, we call this property the completeness relation, or closure. Now what we can do is take the expansion equation and look at an individual term in the summation, which involves the outer product |a_n⟩⟨a_n|. If you take that outer product, which defines an operator, and multiply it onto a ket |α⟩, then with simple algebra you can show that it gives you the coefficient c_n times the ket |a_n⟩. That is, it pulls out the vector's component along this particular basis vector |a_n⟩. This operator, the outer product |a_n⟩⟨a_n|, is called the projection operator. Think of the analogy with geometric vectors: how do you determine the x component of a geometric vector? You project the vector onto the x-axis, right, and that projection gives you a vector along the x-axis, the x component of that particular vector. The outer product |a_n⟩⟨a_n| does exactly the same thing in our ket space. Now, matrix representation. We will use this complete orthonormal set of vectors {|a_n⟩}, the eigenvectors of a Hermitian operator A, as a basis set to represent operators and vectors in matrix form. How do we do that? Let's consider an arbitrary operator X, and multiply it by two identity operators: the summation over m of the outer products |a_m⟩⟨a_m| gives you one identity operator, and doing the same with the outer products |a_n⟩⟨a_n|, summed over n, gives the second identity operator. So X = Σ_m Σ_n |a_m⟩⟨a_m|X|a_n⟩⟨a_n|.
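The projection-operator behavior described above can be sketched numerically; the basis and the test vector are illustrative choices:

```python
import numpy as np

# Orthonormal basis from an illustrative Hermitian matrix.
_, vecs = np.linalg.eigh(np.array([[2.0, 1.0 - 1.0j],
                                   [1.0 + 1.0j, 3.0]]))
a0 = vecs[:, 0]

# Projection operator P_0 = |a_0><a_0|.
P0 = np.outer(a0, a0.conj())

alpha = np.array([1.0 + 2.0j, -0.5j])
projected = P0 @ alpha

# P_0|alpha> = <a_0|alpha> |a_0>: it pulls out the component along |a_0>,
# just as projecting a geometric vector onto the x-axis does.
c0 = a0.conj() @ alpha
assert np.allclose(projected, c0 * a0)

# Projecting twice changes nothing further: P_0^2 = P_0.
assert np.allclose(P0 @ P0, P0)
```

The last assertion, P² = P, is the defining algebraic property of a projector, consistent with the geometric picture of projecting onto an axis.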
An identity operator, by definition, does nothing when multiplied onto an operator or a vector, so this equation holds trivially. Now what you can do is take the middle factor here: the bra ⟨a_m| multiplied by the operator X and by the ket |a_n⟩. If you take those numbers ⟨a_m|X|a_n⟩, you can arrange them in a matrix, with m labeling the row and n the column. And this is why this product is called a matrix element: it is literally an element of the matrix when you represent this operator in matrix form. So this is the matrix representation of the operator X, using the basis set {|a_n⟩}. Now, the Hermiticity of the operator X ensures the Hermiticity of the matrix representation as well. What we mean by that is this: consider a matrix element ⟨a_m|X|a_n⟩. Take the complex conjugate, switch the bra and ket vectors, and take the Hermitian adjoint of the operator; that gives ⟨a_m|X|a_n⟩* = ⟨a_n|X†|a_m⟩. But the Hermitian adjoint of X is equal to X itself, because the operator is Hermitian, and therefore this equals ⟨a_n|X|a_m⟩. So what does this mean? It means that if we switch the row and column and take a complex conjugate, the element is equal to itself, which is precisely the condition for Hermiticity of a matrix. So a Hermitian operator is naturally represented by a Hermitian matrix. And you can do the same for vectors. Take a ket vector and multiply it by the identity operator; then you can take the inner products ⟨a_n|α⟩ as the vector elements and arrange them in a column. This is the column-vector representation of a ket. We can do the same with a bra vector: multiply the bra by the identity operator from the right, take the elements ⟨α|a_n⟩, and arrange them in a row. This is the row-vector representation of the bra ⟨α|. Now, let's consider an example: a spin-1/2 system.
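A minimal sketch of forming the matrix elements ⟨a_m|X|a_n⟩ and checking that Hermiticity carries over; both X and the basis are illustrative choices:

```python
import numpy as np

# Orthonormal basis: eigenvectors of an illustrative Hermitian matrix.
_, basis = np.linalg.eigh(np.array([[2.0, 1.0 - 1.0j],
                                    [1.0 + 1.0j, 3.0]]))

# An illustrative Hermitian operator X to represent in that basis.
X = np.array([[0.0, -1.0j],
              [1.0j, 1.0]])
assert np.allclose(X, X.conj().T)

# Matrix elements X_mn = <a_m|X|a_n>, with m the row and n the column.
X_rep = np.array([[basis[:, m].conj() @ X @ basis[:, n] for n in range(2)]
                  for m in range(2)])

# Hermiticity of the operator carries over to its matrix representation.
assert np.allclose(X_rep, X_rep.conj().T)

# Column-vector representation of a ket: components <a_n|alpha>.
alpha = np.array([1.0, 1.0j]) / np.sqrt(2)
column = np.array([basis[:, n].conj() @ alpha for n in range(2)])
```

Because the basis change is unitary, the column vector keeps the ket's normalization as well as the operator's Hermiticity.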
In the Stern-Gerlach experiment, we found that the electron spin is quantized into two possible states, spin up or spin down. So we're going to write the spin-up state as the ket |+⟩ and the spin-down state as the ket |−⟩. These are the two kets representing the states of spin up and spin down along the z-direction. We're going to use them as our basis set and then try to express various vectors and operators using this basis. So let's construct the identity operator first. The identity operator is the sum of the outer products of these eigenkets over all possible states; we only have two states, so the summation contains only two terms: I = |+⟩⟨+| + |−⟩⟨−|. Then we take the spin operator S_z, the operator representing the spin component along the z-direction. The matrix elements are once again defined as before, and we arrange them so that |+⟩, the ket corresponding to spin up, labels column number one and row number one, and the spin-down state labels column number two and row number two. So you get a two-by-two matrix representation for the operator S_z. Now, we are using the kets |+⟩ and |−⟩, which are eigenkets of this operator S_z, so the eigenvalue equations apply. We haven't really talked about the magnitude of the spin; here, without proof, we will just say that when you measure the spin you get +ħ/2 for the spin-up state and −ħ/2 for the spin-down state. So the eigenvalue equations for the S_z operator are S_z|+⟩ = (ħ/2)|+⟩ for the spin-up state and S_z|−⟩ = −(ħ/2)|−⟩ for the spin-down state. Using these eigenvalue equations, we can then immediately calculate the matrix elements of S_z, and we find that the matrix is diagonal, with ħ/2 as the (1,1) element and −ħ/2 as the (2,2) element.
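The S_z construction above can be sketched directly; we work in units where ħ = 1 purely for convenience:

```python
import numpy as np

hbar = 1.0  # illustrative choice of units: hbar = 1

# Basis kets |+> and |-> as column vectors.
plus  = np.array([1.0, 0.0])
minus = np.array([0.0, 1.0])

# S_z from its outer-product (spectral) form:
# S_z = (hbar/2)|+><+| - (hbar/2)|-><-|.
Sz = (hbar / 2) * np.outer(plus, plus) - (hbar / 2) * np.outer(minus, minus)

# Diagonal matrix with +hbar/2 and -hbar/2 on the diagonal, as derived.
assert np.allclose(Sz, (hbar / 2) * np.diag([1.0, -1.0]))

# Eigenvalue equations: S_z|+> = +hbar/2 |+>,  S_z|-> = -hbar/2 |->.
assert np.allclose(Sz @ plus,  (hbar / 2) * plus)
assert np.allclose(Sz @ minus, -(hbar / 2) * minus)
```

The matrix is diagonal precisely because the basis kets are eigenkets of S_z, as the lecture notes.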
The diagonal elements turn out to be the eigenvalues for the |+⟩ state and the |−⟩ state. Now we can take the basis kets |+⟩ and |−⟩ and express them using themselves as the basis. So let's take the ket |+⟩ and multiply it by the identity operator. The element ⟨+|+⟩ is always one, because the ket is supposed to be normalized, and ⟨−|+⟩ is zero, because |+⟩ and |−⟩ are orthogonal to each other. So |+⟩ is represented by the column vector (1, 0) in the matrix representation using |+⟩ and |−⟩ as basis kets. Similarly, the ket |−⟩ is expressed as (0, 1). Now you can use these as basis kets, or basis vectors, and represent any arbitrary spin state as a linear combination of these two basis vectors.
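As a closing sketch, here is an arbitrary spin state expanded in the |+⟩, |−⟩ basis; the particular coefficients are chosen only for illustration:

```python
import numpy as np

plus  = np.array([1.0, 0.0])   # |+>  -> (1, 0)
minus = np.array([0.0, 1.0])   # |->  -> (0, 1)

# An arbitrary normalized spin state chi = c_+|+> + c_-|->.
# These coefficient values are an illustrative choice.
c_plus, c_minus = 1 / np.sqrt(2), 1j / np.sqrt(2)
chi = c_plus * plus + c_minus * minus

# The state is normalized, and each component is recovered by the
# inner product with the corresponding basis ket.
assert np.isclose(chi.conj() @ chi, 1.0)
assert np.isclose(plus.conj() @ chi, c_plus)
assert np.isclose(minus.conj() @ chi, c_minus)
```

This is the column-vector machinery from earlier in the lecture specialized to the two-dimensional spin-1/2 space.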