0:01

Hey guys.

Now we are going to talk about those mysterious beasts named eigenvectors and eigenvalues. And just to make things easy, so we don't have to write things out too much, we're going to abbreviate eigenvector as "ev" with a little vector sign, and eigenvalue as just "ev". So: eigenvectors, eigenvalues.

Okay, so what are these things? What are these mysterious beasts? Well, first of all, eigenvalues and eigenvectors are properties of a matrix. Specifically, they're properties of a square matrix, so an n by n matrix. And it will turn out that the eigenvalues are invariant to a change of basis. We saw in the change of basis video that you could pick a new coordinate frame, a new basis, and represent your matrix in that basis. Different representations will give you different groups of numbers in your matrix. However, the eigenvalues of the matrix are the same regardless of what representation your matrix is in.

Â 1:29

So, that's going to be a very useful property later on. Okay, but since we're talking about matrices, let's remind ourselves of what a matrix is, or what it does. So, what does a matrix do? Let's say we have the matrix A. What does it do? Well, a matrix takes in vectors, say v1, and maps them to other vectors. So, maybe when you multiply A by v1 you get this new vector, Av1.

Â 2:12

So, what the matrix A is doing to each vector is rotating it by a certain amount and then stretching or shrinking it. So, for v1, it rotates it by maybe 30 degrees and then doubles its length, and something similar for v2. Maybe it rotates it by 20 degrees and then scales its length by two and a half. So, what does A do? It rotates and stretches or shrinks vectors.

Â 2:46

So, one way to talk about A is to make a big list of all the vectors you can possibly think of and then say how much each one is rotated and how much each one stretches or shrinks when acted upon by A. You can do that if you have a lot of time on your hands, but it's really not a very good idea, because the list of all possible vectors is infinite. Instead, what we can do is concentrate on a very specific set of vectors, or a very special set of vectors if you like. Talking about how each one of those rotates or stretches will actually completely describe what A does to any arbitrary vector.

So, let's say we have our matrix A and we're trying to figure out what the heck is going on with it. And we know that if you have just some lame vector over here, v sub lame, when you multiply A times v sub lame,

Â 4:53

what it does is to shrink the vector. So, maybe that's A times v2 special, so all the special vectors, A just shrinks or stretches. It does not rotate them. And so what we have is that for a matrix A of dimension n x n (remember, we're dealing with square matrices now), there are at least n vectors that are only shrunk or stretched by A. When A acts upon these vectors, it only stretches or shrinks them; it does not rotate them. And so these are special vectors, and you know what? In German, the word for special is eigen, or something like that. So, these are called eigenvectors. They're special for a lot of reasons, but the first reason is that A only stretches or shrinks them.

So, let's call the set of eigenvectors e1 through en. Maybe this first special vector was just equal to e1, and the second special one was equal to e2. So, if the matrix A only stretches or shrinks eigenvectors, we can write out A times an eigenvector. Let's start with the first eigenvector: A times e1 is equal to some constant times that vector. So, the output is in the same direction as the input. And we have A times e2 is equal to maybe a different constant times that eigenvector.

Â 6:57

So, in our case e1 would have a lambda 1 greater than one, because it got stretched, and e2 would have a lambda 2 less than one, because it got shrunk. And the lambdas are called eigenvalues. The eigenvalue is the amount that the eigenvector is stretched or shrunk by when it's multiplied by A.

Â 7:30

Okay, and now I'm not going to prove this to you, but it's something you should do for yourself at some point if you are into this math stuff: if A is symmetric, so the entries below the diagonal are reflections of the entries above the diagonal, then its eigenvectors will be orthogonal. So, if A is symmetric and we drew its eigenvectors, there would always be a right angle between them. And if you were in 3D space, there would be a right angle between all of the eigenvectors. All of the eigenvectors would be mutually orthogonal to one another.
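This orthogonality is easy to check numerically. Here is a minimal sketch with NumPy, using a small symmetric matrix chosen just for illustration (not one from the lecture):

```python
import numpy as np

# A small symmetric matrix, entries chosen purely for illustration
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])

# eigh is NumPy's eigensolver specialized for symmetric matrices;
# columns of eigvecs are the (unit-length) eigenvectors
eigvals, eigvecs = np.linalg.eigh(A)

# The dot product of the two distinct eigenvectors should be ~0
dot = eigvecs[:, 0] @ eigvecs[:, 1]
print(abs(dot) < 1e-10)  # True: the eigenvectors are orthogonal
```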

So, what I claim is that in this case, knowing the eigenvectors and the eigenvalues tells you what A will do to any vector. So if we know all of the eigenvalues and the eigenvectors, how can we figure out what A times an arbitrary vector will be? Well, our eigenvectors are all orthogonal, and, I forgot to mention this, we usually normalize the eigenvectors so that they have length one.

Â 9:01

Since they're all orthogonal to one another and there are at least n of them, we can write v as a linear combination of them, or a weighted sum. So v might be v1 times e1 + v2 times e2, and so on and so forth, plus vn times en. And plugging that into our equation gives us Av = A(v1 e1 + ... + vn en). We can use the distributive property to say that that equals v1 times Ae1 + ... + vn times Aen, where I've moved the v's to the left of the A's because they're just numbers, so we can move them around without any problem. But look at all these Ae1 and Ae2. Since these are eigenvectors, we can replace those with lambda 1 times e1, lambda 2 times e2, and so on and so forth, up to lambda n times en, keeping the v prefactors out front. So Av = v1 lambda 1 e1 + ... + vn lambda n en.

Â 10:29

So, just like that, we have figured out what Av equals, just by knowing the eigenvalues and the eigenvectors.
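We can check this reconstruction numerically. A sketch, assuming a symmetric A so that the eigenvectors are orthonormal (the matrix and the vector below are made up for illustration):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])      # symmetric, so eigenvectors are orthonormal
lams, E = np.linalg.eigh(A)     # columns of E are the eigenvectors e1, e2

v = np.array([1.0, 2.0])        # an arbitrary vector
coeffs = E.T @ v                # v1, v2: components of v along e1, e2

# Av = v1*lambda1*e1 + v2*lambda2*e2
Av_from_eig = sum(c * lam * e for c, lam, e in zip(coeffs, lams, E.T))
print(np.allclose(Av_from_eig, A @ v))  # True
```

So the eigen-decomposition really does tell you what A does to any vector.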

Okay, okay, so let's say that we have some symmetric matrix and it has a couple of eigenvectors, e1 and e2. And maybe, in this basis, e1 is, let's say, (3/sqrt(10), 1/sqrt(10)). And in that case, e2 would be (-1/sqrt(10), 3/sqrt(10)). So that's an example of the actual numbers that you might put in your description of your eigenvectors.

Â 11:34

And so the actual numbers that go into your eigenvectors depend on what basis you're looking at. However, in a certain way, even though the numbers changed, they're still the same eigenvectors. e1 is going to be right there no matter what my coordinate frame is, and e2 is still right there no matter what my coordinate frame is. But most importantly, the action of A upon those vectors is going to be the same regardless of what basis I'm in. As long as I write A in the correct basis, and remember I can do that by saying A in the new basis is equal to some change of basis matrix, times A in the old basis, times the inverse of that matrix. So as long as I write my A in the correct basis, it's going to do the same thing to my eigenvectors. And what does it do? Well, it'll scale them by their respective eigenvalues. So this leads us to the conclusion that the eigenvalues, lambda 1, lambda 2, all the way to lambda n, are independent of the basis you're in. The actual numbers you put down in the eigenvectors will depend on the basis, but the amount that they are stretched or scaled by, provided you represent A in the new basis, will not change.
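This invariance is also easy to verify numerically: transform A with some invertible change-of-basis matrix and compare the eigenvalues before and after. A sketch (both matrices below are made up for illustration):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
P = np.array([[1.0, 2.0],
              [0.0, 1.0]])              # any invertible change-of-basis matrix

A_new = P @ A @ np.linalg.inv(P)        # A represented in the new basis

lams_old = np.sort(np.linalg.eigvals(A).real)
lams_new = np.sort(np.linalg.eigvals(A_new).real)
print(np.allclose(lams_old, lams_new))  # True: eigenvalues unchanged
```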

Â 13:30

And so, if you noticed, when we wrote out our eigenvectors in terms of the actual numbers up here, in our standard x1, x2 basis, that was kind of messy. I don't like square roots and everything like that, and I especially don't like negative numbers. And I especially don't like it when every element of the vector has a weird ugly number. So the last question we'll ask is: can we find a basis in which the eigenvectors will be less messy? And the answer is: of course we can.

Â 14:37

So this is the eigenbasis. And in this case, what is e1 in the eigenbasis? Well, that's easy: since it's only along one direction, the first direction, it will just be (1, 0). And what's e2 in the eigenbasis? Well, it's only along the other dimension, so that'll just be (0, 1). So this is a much cleaner representation of our eigenvectors than representing them as stuff with square roots of 10 in them.

Â 15:17

So the way to think of this is that the eigenvectors of a matrix form the most natural representation of the matrix. And why is it natural? Well, what this means is that all of the special vectors, all of the vectors that just get scaled, are those which lie along the axes you're using to represent your vector space. So that's pretty neat, wouldn't you say? And here is the icing on the cake. We showed how to represent a matrix in a new basis. So if you're representing A in the eigenbasis, that would be equal to X times the original A in your standard basis, or in your original basis, times X inverse. So these X's are change of basis matrices.

Â 16:58

And so how would we actually find this change of basis matrix? Well, our new basis is just the eigenvectors. So we would have e1 transposed, a row vector of our first eigenvector, as the first row, and e2 transposed as the second row. So, in this case, what did we say? We said that e1 is equal to (3/sqrt(10), 1/sqrt(10)), and e2 was equal to (-1/sqrt(10), 3/sqrt(10)).
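Here is a sketch of that recipe with the eigenvectors from the example: stack them as rows to get X, and check that X A X-inverse comes out diagonal. The matrix A below is not from the lecture; it is constructed (an assumption for illustration) so that its eigenvectors are exactly (3, 1)/sqrt(10) and (-1, 3)/sqrt(10), with eigenvalues 2 and 0.5:

```python
import numpy as np

s = np.sqrt(10)
e1 = np.array([3, 1]) / s            # eigenvectors from the example
e2 = np.array([-1, 3]) / s
lam1, lam2 = 2.0, 0.5                # eigenvalues chosen for illustration

# Build a symmetric A that has exactly these eigenvectors/eigenvalues
A = lam1 * np.outer(e1, e1) + lam2 * np.outer(e2, e2)

# Change-of-basis matrix: eigenvectors as rows
X = np.vstack([e1, e2])

A_eigen = X @ A @ np.linalg.inv(X)   # A represented in the eigenbasis
print(np.round(A_eigen, 10))         # diagonal: diag(2.0, 0.5)
```

In the eigenbasis, A is diagonal, with the eigenvalues on the diagonal.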

Â 17:55

So let's quickly summarize what we've come up with so far. First of all, the eigenvectors of a matrix A are the vectors that are only scaled by A, and not rotated. The scaling factors are called eigenvalues, so, ev's. We also had that knowing the eigenvectors and the eigenvalues tells you how A acts on any vector, because that vector can be written as a linear combination of the eigenvectors. And lastly, we had that the eigenvectors form the most natural basis for A, because they cause A in the eigenbasis to be diagonal.

Â 19:07

And lastly we had, and this is very important, that the eigenvalues are invariant to a change of basis. So you can pick any basis you want to represent A in, and your eigenvalues will always be the same. Because, remember, when we represent A in a new basis, what we're doing is finding the transformation of A that preserves its action. And its action can be defined by what it does to the eigenvectors, namely, multiplying them by their eigenvalues. So since the action of A is the same regardless of what basis it's in, so will the eigenvalues be.

So hopefully they're not too scary yet. If you want to check whether something is an eigenvector, that's pretty easy. So for example, let's say A = (2 3; 1 1), and we want to check if the vector v = (1, 1) is an eigenvector of A. Well, what do we say? We can write out Av = (2 3; 1 1)(1, 1), and that's equal to (5, 2). And (5, 2) is clearly not a multiple of (1, 1). So v is not an eigenvector.
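The check from the example as code: multiply, then test whether the result is parallel to v (a zero 2x2 determinant of the pair of vectors means they are parallel):

```python
import numpy as np

A = np.array([[2, 3],
              [1, 1]])
v = np.array([1, 1])

Av = A @ v
print(Av)                            # [5 2]

# v is an eigenvector iff Av is parallel to v, i.e. the 2x2
# determinant formed by Av and v is zero
is_eigvec = np.isclose(Av[0] * v[1] - Av[1] * v[0], 0)
print(is_eigvec)                     # False: (5, 2) is not a multiple of (1, 1)
```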

I'm not going to go through all of the details about how to calculate eigenvectors. These videos are more to explain the intuition behind these concepts. But the starting point is to write out your eigenvector equation, A times e1 = lambda 1 e1. This can be rearranged such that (A minus lambda 1 times the identity) times e1 = 0. And by solving this equation for lambda 1 and e1 you get your eigenvector and your eigenvalue. In fact, what you'll find is that this equation has several solutions. Specifically, it will have at least as many solutions as the dimensionality of your vector space. So if A is a 2x2 matrix and it operates on two-dimensional vectors, you will have at least two eigenvectors.
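In practice you rarely solve (A - lambda I)e = 0 by hand; a numerical solver does it for you. A sketch using NumPy on the 2x2 matrix from the earlier check, verifying that each returned pair satisfies the eigenvector equation:

```python
import numpy as np

A = np.array([[2.0, 3.0],
              [1.0, 1.0]])

# eig solves the eigenvector equation for a general square matrix;
# lams holds the eigenvalues, columns of E the eigenvectors
lams, E = np.linalg.eig(A)

# Each pair should satisfy A e = lambda e
for lam, e in zip(lams, E.T):
    print(np.allclose(A @ e, lam * e))  # True for each pair
```

As promised, a 2x2 matrix gives (at least) two eigenvalue/eigenvector pairs.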

So lastly, just some examples of why these are useful. One is that they can help you decouple a set of differential equations, which is what they're used for in lecture two this week, when we're finding the eigenvalues of the recurrent connection matrix. They can also be used to decorrelate a random vector, and that's what we did when we were doing PCA. In PCA, what we had was a bunch of random vectors drawn from some distribution.

Â 22:24

And maybe these vectors were all correlated. So here the x1 component is very correlated with the x2 component: when x1 goes up, so does x2; when x1 goes down, so does x2. And it just so happens that the eigenvectors of the covariance matrix of these random vectors are arranged such that, if these vectors came from a Gaussian distribution, they would suddenly be decorrelated in the new basis, the eigenbasis. And it's nice to work with uncorrelated random variables when you're dealing with probabilities. So that's another instance when eigenvectors are useful.
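A sketch of that decorrelation idea: draw correlated Gaussian samples, rotate them into the eigenbasis of their covariance matrix, and check that the off-diagonal covariance vanishes. The data here is synthetic, generated just for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Correlated 2D Gaussian samples: x2 tends to go up when x1 does
cov = np.array([[1.0, 0.8],
                [0.8, 1.0]])
X = rng.multivariate_normal([0, 0], cov, size=10000)

# Eigenvectors of the sample covariance matrix define the eigenbasis
C = np.cov(X.T)
_, E = np.linalg.eigh(C)

Y = X @ E                            # samples expressed in the eigenbasis
C_new = np.cov(Y.T)
print(abs(C_new[0, 1]) < 1e-8)       # True: off-diagonal ~0, decorrelated
```

This is exactly the PCA picture: the eigenbasis of the covariance matrix is the basis in which the components are uncorrelated.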

Â 23:06

Another example is in dynamical systems theory. Specifically, in a linear system, the eigenvectors tell you the directions along which the motion of the system will be in a straight line. And that could be a very useful thing to know when you're trying to figure out how a system behaves around a fixed point.
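A sketch of that straight-line property for a discrete linear system x(k+1) = A x(k): if the state starts on an eigenvector, every iterate stays on the same line through the origin (the matrix below is made up for illustration):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])           # a symmetric example system

lams, E = np.linalg.eigh(A)
x = E[:, 1].copy()                   # start the state on an eigenvector
x0 = x.copy()

on_line = True
for _ in range(5):
    x = A @ x                        # iterate the linear system
    # zero 2x2 determinant <=> x is still parallel to x0
    on_line &= np.isclose(x[0] * x0[1] - x[1] * x0[0], 0)
print(bool(on_line))                 # True: motion stays on the eigenvector line
```

Starting anywhere else, the trajectory curves; only the eigenvector directions give straight-line motion.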

So that's all I've got for eigenvectors. Again, the purpose of this was not to go through and solve a bunch of complicated mathematical problems, but rather just to give you a perspective on what eigenvectors are all about.
