
but on the other axis,

I am going to move e2 hat to (b, d).

I'm going to move it over to some place where e2 prime is equal to (b, d).

So what I've done then is I've taken my original grid here,

as well as stretching it out by a and up by d,

I've also sheared over by b.

But this area here is just the base times the perpendicular height.

This perpendicular height is d. So the area here is still ad,

so the determinant is still ad.

I've still changed the scale of the space,

and that scale factor is what the determinant is,

by a factor of ad.
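
If you want to check that shearing really doesn't change the area, here is a quick sketch in Python with NumPy (the values of a and d are just made up for illustration) showing the determinant of a sheared matrix staying at a times d for any shear b:

```python
import numpy as np

# A stretch by a along x and d along y, sheared over by b:
# the matrix [[a, b], [0, d]] has determinant a*d for any b.
a, d = 2.0, 3.0
for b in (0.0, 1.0, 5.0):
    A = np.array([[a, b],
                  [0.0, d]])
    print(round(np.linalg.det(A), 6))   # a*d every time, whatever b is
```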

But if I have a general matrix a, c, b, d,

then what that is going to do to the space,

to my original axis vectors,

is it's going to turn that into something like this.

The first one is going to be something like that, say,

and the second one is going to be something like that, say.

So my new area is going to be that.

If I want to find that area,

I have to do a little bit of maths.

So I've done a bit of maths and here it is.

What I've done is, I found the area of

this parallelogram by finding the area of the whole box here,

and taking off all the little bits around it, and I found that the area is

ad minus bc. Pause for a moment if you like and verify that that's correct.

And I'm going to denote,

if this is the matrix A,

I am going to denote that this area,

this determinant, with vertical lines here,

and the determinant of A for a two by two matrix is just

ad minus bc.
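
As a sanity check, here is a small sketch in Python with NumPy (the values of a, b, c and d are just illustrative) confirming that the ad minus bc formula agrees with a library determinant:

```python
import numpy as np

# A 2x2 matrix with columns (a, c) and (b, d); illustrative values.
a, b, c, d = 3.0, 1.0, 1.0, 2.0
A = np.array([[a, b],
              [c, d]])

# The parallelogram area from the video: ad - bc.
area = a * d - b * c
print(area)                  # 5.0
print(np.linalg.det(A))      # agrees with the formula
```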

Now, in school when you looked at matrices,

you probably saw that you could find the inverse in the following way.

If you've got a matrix a, b, c, d,

then you probably said that you could find

the inverse by multiplying by a matrix where you exchange the a and the d,

and you took the minus sign on the b and the c on the off diagonal terms.

And you also multiplied by another number.

Let's just multiply that out and see what we get.

So when we multiply that out, we're going to get,

multiply that row by that column we'll get

ad minus bc, interesting.

And we'll get minus ab plus ab, so that's zero.

Here we'll get cd minus cd, so that's zero.

And when we multiply that row by that column we'll get minus bc plus ad.

That's ad minus bc again.

Now, if I divide that through by a number, ad minus bc.

If I divide by the determinant,

I'll divide these through by ad minus bc and these will turn into one.

So now I've got the identity matrix,

and this guy divided by the determinant is in fact,

the inverse of the two by two matrix.

And that's how you would have done it at school.

And this is the determinant here and that's really what the determinant is.

It's the amount that the original matrix stretched out space.

And by dividing by the determinant,

we're normalizing the space back to its original size.

That's what that determinant bit does.
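
Here is a short sketch of that two by two inverse recipe in Python with NumPy; the matrix values are just an illustration:

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
a, b, c, d = A[0, 0], A[0, 1], A[1, 0], A[1, 1]

det = a * d - b * c                    # the scale factor, ad - bc
A_inv = np.array([[ d, -b],
                  [-c,  a]]) / det     # swap a and d, negate b and c, normalize

print(np.allclose(A @ A_inv, np.eye(2)))   # True: the inverse undoes A
```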

Now, we could spend another video looking at an extension

of the idea of elimination and back substitution,

that row-echelon idea to find out how to find determinants computationally.

But this is both tricky to show and derive, and is kind of pointless.

Knowing how to do the operations isn't a useful skill

anymore because we just type det(A) into a computer.

So I'd just type det(A), and my computer gives me the answer, done.

From a learning perspective,

it doesn't add much.

Row-echelon does, which is why we went through it.

So I'm not going to teach you how to do determinants.

If you want to know,

then look up a QR decomposition online,

or better yet, look in a linear algebra textbook.
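
For what it's worth, "just type det(A)" looks like this in Python with NumPy (the matrix here is an arbitrary example):

```python
import numpy as np

# An arbitrary 3x3 matrix; the computer does the elimination for us.
A = np.array([[1.0, 2.0, 0.0],
              [3.0, 1.0, 4.0],
              [0.0, 2.0, 1.0]])

print(np.linalg.det(A))
```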

Now, let's think about this matrix.

Think about a matrix A, which is 1, 1, 2, 2.

What this guy does is he takes space,

and he makes our first basis vector here into 1,1.

Makes it go there up to 1,1.

And makes our second basis vector go to this one here,

go to 2,2.

So what he's done is he's taken a space with areas,

and he has collapsed everything onto a line.

All of my y's here are going to collapse into this line,

all my x's are going to collapse onto this line.

So he's reduced the dimensionality of the space.

All of this space is going to end up,

every point in space is going to map on to some point on this line here.

Now, notice that the determinant of this matrix is going to be zero.

The area enclosed by the new basis vectors is zero,

and if I do ad minus bc,

I've got one times two minus one times two.

So the determinant of A is one times two minus one times two is nought.

So graphically, we can see that it's zero and we can compute that it's zero.
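
A quick numerical sketch of this collapsing matrix, in Python with NumPy:

```python
import numpy as np

# Columns (1, 1) and (2, 2): both basis vectors land on the line y = x.
A = np.array([[1.0, 2.0],
              [1.0, 2.0]])

print(np.linalg.det(A))    # ad - bc = 1*2 - 2*1, which is zero

# Every point maps onto that line: the image's x and y components are equal.
p = np.array([3.0, -5.0])
print(A @ p)               # [-7. -7.]
```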

If I had a three by three matrix describing a 3D space,

then that linear dependence of one of the new basis vectors on the other

two would mean that the new space was either a plane,

or if there was only one independent basis vector, it would be a line.

In either case, the volume enclosed will be zero so the determinant would be zero.

Now, let's turn back to our row-echelon form.

Let's take this set of simultaneous equations here.

Let's take the matrix whose columns are (1, 1, 2),

(1, 2, 3), and (3, 4, 7),

times some vector a, b, c,

which is equal to 12, 17, 29.

Now, you'll notice here that if I take the sum of the first two rows,

I get the third row.

So that is row one plus row two is equal to row three.

And also if I take the columns,

if I take two times column one plus one times column two,

then I'll get two plus one is three,

two plus two is four,

four plus three is seven.

So that's equal to column three.

So if I think of these as the new basis vectors of my matrix,

then they're not linearly independent.

This one guy is a linear combination of these two,

it's twice this one plus one times the middle one.
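
Both dependence relations, and the vanishing determinant, are easy to verify numerically; a sketch in Python with NumPy:

```python
import numpy as np

# The matrix from the example, written out row by row.
A = np.array([[1.0, 1.0, 3.0],
              [1.0, 2.0, 4.0],
              [2.0, 3.0, 7.0]])

print(np.allclose(A[0] + A[1], A[2]))                # True: row1 + row2 = row3
print(np.allclose(2 * A[:, 0] + A[:, 1], A[:, 2]))   # True: 2*col1 + col2 = col3
print(abs(np.linalg.det(A)) < 1e-9)                  # True: determinant is zero
```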

So, this is going to be a problem.

It's going to be a thing that collapses my vector space from being 3D to 2D,

it's going to collapse every point in space onto a plane.

That's going to be tricky, let's see how.

So when I try to reduce this row-echelon form,

if I take the first row: 1, 1, 3

times a, b, c equals 12.

If I take that off the second row,

then I'm going to get zero,

take the first row off the second.

I'm going to get zero there,

and I'm going to take one off of there, and I'm going to have one,

take three off of there and I'm going to have one,

take 12 off of 17 and I will have five.

And if I take row one and row two off of row three,

then I'm going to get

0, 0, 0 all on the bottom row.

And if I take 12 and 17 off 29,

I'm going to get zero as well.

So this is row-echelon form but I don't have a one here.

So I find that I'm now getting zero c equals zero.

And that's true, but it's not very useful,

I don't have a solution for c. So now I can't back substitute,

I can't solve my system of equations anymore.

I don't have enough information.
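
The elimination steps above can be sketched numerically (Python with NumPy; note the second subtraction uses the already-updated row two, so taking off twice row one plus the new row two is the same as taking off the original rows one and two):

```python
import numpy as np

# Augmented system [A | b]: coefficients plus the right-hand side.
M = np.array([[1.0, 1.0, 3.0, 12.0],
              [1.0, 2.0, 4.0, 17.0],
              [2.0, 3.0, 7.0, 29.0]])

M[1] -= M[0]              # row 2 minus row 1: (0, 1, 1, 5)
M[2] -= 2 * M[0] + M[1]   # row 3 minus (original) rows 1 and 2
print(M[2])               # all zeros: nothing left to back substitute with
```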

When I went into the shop to buy apples and bananas and carrots the third time,

my mistake was that I ordered just a copy of my first two orders.

The sum of my first two orders.

So I didn't get any new information.

I just found out that the cost of my first two orders combined was 29.

So I don't have enough data to find out

the solution for how much apples and bananas and carrots cost.

My third order wasn't linearly independent from my first two,

in sort of matrices and vectors language.

So we've shown that where

the basis vectors describing the matrix aren't linearly independent,

then the determinant is zero,

and I can't solve the system.

That is because these aren't linearly independent,

so the new basis vectors don't enclose any volume,

the space has collapsed onto a plane, so the determinant is nought,

and that means when I try my row-echelon form,

I can't solve the problem.

And that means I can't invert the matrix, which means I'm stuck.

So in fact this matrix has no inverse,

it's what's called singular.
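
If you ask a library for the inverse anyway, it will refuse; a sketch in Python with NumPy:

```python
import numpy as np

# The singular matrix from the shopping example.
A = np.array([[1.0, 1.0, 3.0],
              [1.0, 2.0, 4.0],
              [2.0, 3.0, 7.0]])

try:
    np.linalg.inv(A)
    print("inverted")
except np.linalg.LinAlgError:
    print("singular: no inverse exists")
```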

So, there are situations where I might want to do

a transformation that collapses the number of dimensions in a space.

I might want to do that sometimes,

but that will come at a cost.

Another way of looking at this is that the inverse matrix lets me undo my transformation.

It lets me get from the new vectors back to the original vectors.

But if I have dumped a dimension,

if I have scrapped a dimension by turning 2D space into a line,

or a 3D space into a plane or a line,

I can't undo that anymore.

I don't have enough information.

So I've lost some of it in doing the transformation,

I've lost that third dimension.

So in general, it's worth checking, before you propose a new basis vector set

and then use a matrix to transform

your data vectors, that this is a transformation you can undo.

And you do that by checking that your proposed new basis vectors

are linearly independent.
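
That check can be automated; here is a hypothetical helper, is_invertible_basis, sketched in Python with NumPy using a rank test (the function name is my own, not a library call):

```python
import numpy as np

def is_invertible_basis(vectors):
    """True if the proposed basis vectors are linearly independent,
    i.e. the transformation they define can be undone."""
    B = np.column_stack(vectors)
    return np.linalg.matrix_rank(B) == B.shape[1]

print(is_invertible_basis([(1, 1, 2), (1, 2, 3), (3, 4, 7)]))   # False
print(is_invertible_basis([(1, 0, 0), (0, 1, 0), (0, 0, 1)]))   # True
```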