0:00

[MUSIC]

Okay, so let's put all this together. Let's use our transformations knowledge and our basis knowledge in order to do something quite tricky, and see if we can't actually make our life quite simple. What I want to do here is know what a vector looks like when I reflect it in some funny plane. For example, the way this board works, when I write on the whiteboard here, if you're looking at it, all the writing would appear mirrored. But what we do to make that work is we reflect everything in post-production, left-right, and then everything comes out okay. The example we're going to do here asks what the reflection of something in a mirror, say Bear's reflection, would look like to me.

Â 0:45

Now my first challenge is going to be that I don't know the plane of the mirror very well. But I do know two vectors in the mirror, (1, 1, 1) and (2, 0, 1).

Â 0:59

And I've got a third vector, which is out of the plane of the mirror, which is at (3, 1, -1). So I've got vectors v1, v2, and v3, and these first two guys are in the plane of the mirror. We could draw it something like v1 and v2, and they're in some plane like this, and v3 is out of the plane. So I have got v3 there, v1, and v2. So first let us do the Gram-Schmidt process and find some orthonormal vectors describing this plane and its normal, using v3.

Â 1:35

So my first vector e1 is going to be just the normalised version of v1. v1 here is of length root 3: 1 squared plus 1 squared plus 1 squared, all square rooted. So e1 is going to be 1 over root 3 times (1, 1, 1); that's a normalised version of v1.
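If you want to check that normalisation yourself, here's a quick NumPy sketch. This isn't part of the lecture, and the variable names are just for illustration:

```python
import numpy as np

v1 = np.array([1.0, 1.0, 1.0])

# the length of v1 is sqrt(1^2 + 1^2 + 1^2) = sqrt(3)
e1 = v1 / np.linalg.norm(v1)
print(e1)  # each component is 1/sqrt(3), about 0.577
```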

So then we can carry on, and I can find u2 = v2 minus some number of e1s. That number is going to be the projection of v2 onto e1, times e1. So that's (2, 0, 1) minus (2, 0, 1) dotted with e1, which is 1 over root 3 (1, 1, 1), all times e1, which is again 1 over root 3 (1, 1, 1). So that's (2, 0, 1) minus something where the root 3s come outside, so I can just have them being a third. (2, 0, 1) dotted with (1, 1, 1) is 2 plus 0 plus 1, which is 3, so that goes and has a party with the third and becomes 1. And yeah, okay, I confess, I fixed the example. So it's (2, 0, 1) minus (1, 1, 1), which gives me (1, -1, 0); 1 minus 1 is 0 in the last component. So that's u2. Now if I want to normalise u2, I can say e2 is equal to the normalised version of u2; u2 has length root 2, so e2 is 1 over root 2 times (1, -1, 0).
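That project-and-subtract step is easy to verify numerically. A small NumPy sketch, again illustrative rather than part of the lecture:

```python
import numpy as np

v1 = np.array([1.0, 1.0, 1.0])
v2 = np.array([2.0, 0.0, 1.0])
e1 = v1 / np.linalg.norm(v1)

# u2 is v2 minus the projection of v2 onto e1,
# i.e. the part of v2 perpendicular to e1
u2 = v2 - (v2 @ e1) * e1
e2 = u2 / np.linalg.norm(u2)  # u2 has length sqrt(2)
print(u2)  # [ 1. -1.  0.]
```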

Â 3:37

Next I want u3, the part of v3 that's perpendicular to the plane. So that's going to be (3, 1, -1) minus (3, 1, -1) dotted with e1, which is 1 over root 3 (1, 1, 1), and that's a number, going in the direction of the unit vector e1, so times e1. Minus v3 dotted with e2, so that's (3, 1, -1) dotted with 1 over root 2 (1, -1, 0), times e2, which is 1 over root 2 (1, -1, 0).

So it's quite a complicated sum, but I've got an answer here which I can do. It's (3, 1, -1) minus the following: the 1 over root 3s come out again as a third, and (3, 1, -1) dotted with (1, 1, 1) is 3 plus 1 minus 1, which is 3, so that cancels with the third and goes. Then I've got the (1, 1, 1) there, so that term becomes just (1, 1, 1). Then minus: the 1 over root 2s come out as a half, and (3, 1, -1) dotted with (1, -1, 0) is 3 minus 1 plus 0, so that's 2, so they cancel and become one again. As I said, I fixed the example to make my life easy.

Â 5:05

So then I've got (3, 1, -1) minus (1, 1, 1) minus (1, -1, 0). So therefore, I get an answer for u3: 3 minus 1 minus 1 is 1; 1 minus 1 is 0, minus -1 is plus 1; and -1 minus 1 is -2. So u3 is (1, 1, -2).

Â 5:33

And so I can then normalise that and get e3. So e3 is just the normalised version of that, which is going to be 1 over root 6 times (1, 1, -2).
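The third Gram-Schmidt step subtracts both projections at once; a NumPy sketch of the whole thing (illustrative, not the lecturer's code):

```python
import numpy as np

v1 = np.array([1.0, 1.0, 1.0])
v2 = np.array([2.0, 0.0, 1.0])
v3 = np.array([3.0, 1.0, -1.0])

e1 = v1 / np.linalg.norm(v1)
u2 = v2 - (v2 @ e1) * e1
e2 = u2 / np.linalg.norm(u2)

# u3 is the part of v3 we can't make out of e1 and e2:
# subtract the projections onto both
u3 = v3 - (v3 @ e1) * e1 - (v3 @ e2) * e2
e3 = u3 / np.linalg.norm(u3)  # u3 = (1, 1, -2), length sqrt(6)
print(u3)  # [ 1.  1. -2.]
```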

Â 5:46

Now let's just check: (1, 1, -2) is normal to (1, -1, 0), and it is normal to (1, 1, 1). Those two are normal to each other as well, so they are an orthogonal basis set. And with the 1 over root 6, the 1 over root 2, and the 1 over root 3 factors, they are all of unit length, so they're orthonormal.

So I can write down my new transformation matrix, which I'm going to call E. It's the transformation matrix described by the basis vectors e1, e2, e3: I've got e1, e2, and e3 all written down as column vectors. And that's going to be my transformation matrix, which first contains the plane, notice, and then contains the normal to the plane.
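Stacking the basis vectors as columns gives E; a sketch (illustrative names, not from the lecture):

```python
import numpy as np

e1 = np.array([1.0, 1.0, 1.0]) / np.sqrt(3)
e2 = np.array([1.0, -1.0, 0.0]) / np.sqrt(2)
e3 = np.array([1.0, 1.0, -2.0]) / np.sqrt(6)

# columns: first the two vectors spanning the plane, then the plane's normal
E = np.column_stack([e1, e2, e3])
print(E.shape)  # (3, 3)
```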

Â 6:29

So I've redrawn everything just to get it all more compact so we can carry on. We've got our original two vectors v1 and v2, and we've defined e1 to be the normalised version of v1. And we've defined e2 to be the part of v2 perpendicular to e1, normalised to be of unit length. So those are all in a plane, and then e3 is normal to that plane. It's the bit of v3 that we can't make by projecting onto v1 and v2, then made unit length.

Â 7:01

Now say I've got a vector r over here. What I want to do is reflect r down through this plane. So I'm going to drop r down through this plane, there he is where he intersects the plane, and then out the other side to get a vector r prime. And let's say that r is some vector like, I don't know, (2, 3, 5).

Â 7:32

Now this is going to be really awkward,

Â this plane's off at some funny angle composed of these vectors.

Â And even these basis vectors adds up some funny angle, and

Â then how do I drop it down and do the perpendicular?

Â There's going to be a lot of trigonometry.

Â But the neat thing here is that I can think of r as being composed of

Â a vector that's in the plane, so some vector that's composed of e1s and e2s.

Â 8:09

And when I reflect it, the bit that's in the plane is going to be the same. But this bit that's some number of e3s, this bit here, I'm just going to make into minus this bit here. So if I wrote that down as a transformation matrix, the transformation matrix in my basis E is going to keep the e1 bit the same and keep the e2 bit the same: so that's the (1, 0, 0) row for the e1 bit and the (0, 1, 0) row for the e2 bit. And then it reflects the e3 bit from being up to being down, which is the (0, 0, -1) row. So that's a reflection matrix in e3, which means it's a reflection in the plane.
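In the plane's own basis, the reflection matrix really is that simple; a sketch (illustrative, not part of the lecture):

```python
import numpy as np

# in the E basis: keep the e1 and e2 components, flip the sign of the e3 one
T_E = np.diag([1.0, 1.0, -1.0])

# a vector with components (a, b, c) in the plane's basis goes to (a, b, -c)
print(T_E @ np.array([4.0, 7.0, 2.0]))  # [ 4.  7. -2.]
```

Note that reflecting twice gets you back where you started, so T_E times itself is the identity.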

So just by thinking about it quite carefully, I can think about what the reflection is. And that T_E is in the basis of the plane, not in my basis, so it is easy to define. And therefore, if I can get the vector r defined in the plane's basis vector set, the E basis, I can then do the reflection, and then I can put it back into my basis vector set, and then I have the complete transformation.

So a way of thinking about that is that if I've got my vector r,

Â 9:29

and I'm trying to get it through some transformation matrix to r prime. But doing that directly is going to be hard, that's going to be tricky. Instead, I can transform it into the basis of the plane: I can make an r in the basis of the plane, and I'm going to do that using E to the -1. Remember, E to the -1 is the thing that takes a vector of mine into Bear's basis.

Â 10:33

Then I can read that back into my basis by doing E, because E is the thing that takes Bear's vector and puts it back into my basis. So I can avoid the hard thing by going round, doing these three operations: E to the minus 1, E inverse, applied to r; then T in the E basis; then E. If I do those three things, I've done the complete transformation, and I get r prime. So this problem reduces to doing that matrix multiplication. We've got that, and we've got that, so we can just do the maths now, and then we'll be done.
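The three-step route, into the plane's basis, reflect, then back out, can be sketched like this (illustrative NumPy, not the lecturer's working):

```python
import numpy as np

e1 = np.array([1.0, 1.0, 1.0]) / np.sqrt(3)
e2 = np.array([1.0, -1.0, 0.0]) / np.sqrt(2)
e3 = np.array([1.0, 1.0, -2.0]) / np.sqrt(6)
E = np.column_stack([e1, e2, e3])
T_E = np.diag([1.0, 1.0, -1.0])

# T = E T_E E^{-1}: change into the plane's basis, reflect, change back
T = E @ T_E @ np.linalg.inv(E)

# 3T works out to [[2, -1, 2], [-1, 2, 2], [2, 2, -1]]
print(np.round(3 * T))
```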

Â 11:12

So I've just put the logic up there so that I can have the space down here for later, and I've put the transformation we're going to do there. A couple of things to note. One is that because we've carefully constructed E by our Gram-Schmidt process to be orthonormal, we know that E transpose is the inverse of E. So calculating the inverse here isn't going to be a pain in the neck.
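You can check that transpose-is-inverse property directly; a sketch (illustrative, not from the lecture):

```python
import numpy as np

e1 = np.array([1.0, 1.0, 1.0]) / np.sqrt(3)
e2 = np.array([1.0, -1.0, 0.0]) / np.sqrt(2)
e3 = np.array([1.0, 1.0, -2.0]) / np.sqrt(6)
E = np.column_stack([e1, e2, e3])

# because the columns are orthonormal, E^T E is the identity, so E^T = E^{-1}
print(np.allclose(E.T @ E, np.eye(3)))  # True
```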

The other thing is, compared to the situation with Bear where we were changing bases, here we're changing from our vector r to Bear's, or actually the plane's, coordinate system. Then we're doing the transformation of the reflection in the plane, and then we're coming back to our basis. So the E and the E to the -1 are flipped compared to the last video, because we're doing the logic the other way around. It's quite neat, right?

The actual multiplication of doing this isn't awfully edifying; it doesn't build you up very much, it's just doing some arithmetic. So I'm just going to write it down here, and if you want to verify it you can pause the video, and then we'll come back and comment on it.

Â 12:18

So this is T_E times the transpose of E; then I take that and multiply it by E itself, and I get E T_E E transpose, which is this guy, and that simplifies to this guy, so that's T. It all comes out quite nicely, so that's very, very nice. So then we can apply that to r. We can say that T times r, which is T times our vector (2, 3, 5), is going to give us r prime, and that gives us an answer of one third of (11, 14, 5). So r prime here is equal to one third of (11, 14, 5).
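Putting the whole pipeline together for r = (2, 3, 5); a sketch to verify the answer (illustrative, not the lecturer's code):

```python
import numpy as np

e1 = np.array([1.0, 1.0, 1.0]) / np.sqrt(3)
e2 = np.array([1.0, -1.0, 0.0]) / np.sqrt(2)
e3 = np.array([1.0, 1.0, -2.0]) / np.sqrt(6)
E = np.column_stack([e1, e2, e3])
T_E = np.diag([1.0, 1.0, -1.0])

# E^T is E's inverse here, so the full reflection is T = E T_E E^T
T = E @ T_E @ E.T

r = np.array([2.0, 3.0, 5.0])
r_prime = T @ r
# 3 r' = (11, 14, 5), i.e. r' = (11, 14, 5) / 3
print(3 * r_prime)
```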

So that's a process that would've been very, very difficult to do with trigonometry. But actually, with transformations, once we get into the plane of the mirror and the normal to the mirror, it all becomes very easy to do that reflection operation. And it's quite magical, it's really amazing. So that's really nice, right? It's really cool.

Â 13:24

So what we've done here is an example where we've put everything we've learned about matrices and vectors together to describe how to do something fun, like reflecting a point in space in a mirror. This might be useful, for instance, if you want to transform images of faces for the purpose of doing facial recognition: transform my face from being like this to being like that. And then we could use our neural networks, our machine learning, to do that facial recognition part.

Â 13:48

In summary, this week we've gone out into the world with matrices and learned about constructing orthogonal bases and changing bases, and we've related that back to vectors and projections. So it's been a lot of fun. And it sets us up for the next topic, which Sam is going to lead, on eigenvalues and eigenvectors.

[MUSIC]