Before we talk about algorithms for matrix multiplication, let me just make sure we're

all crystal clear on exactly what the problem is. So let's actually spell

out the result of multiplying two generic 2 x 2 matrices.

So we can parameterize two generic 2 x 2 matrices by giving the first one

entries a, b, c, and d,

where these four entries could be anything.

And then, we're multiplying by a second 2 x 2 matrix.

Let's call its entries e, f, g, and h.

Now what's the result of multiplying these?

Again, the result is going to be a 2 x 2 matrix, where each entry is just

the dot product of the relevant row of the first matrix and

the relevant column of the second matrix.

So to get the upper left entry, we take the dot product of the top row of the first

matrix and the left column of the second matrix, so

that results in ae + bg.

To get the upper right entry, we take the dot product of the top row of

the first matrix with the right column of the second matrix, so

that gives us af + bh. Filling in the other two entries in the same way,

we get ce + dg and cf + dh.
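As a quick sanity check, the 2 x 2 formula above can be sketched in code (a minimal illustration; the function name mult_2x2 is my own, not from the lecture):

```python
# Multiply two 2 x 2 matrices using the entry formulas from the lecture:
# upper left = ae + bg, upper right = af + bh,
# lower left = ce + dg, lower right = cf + dh.
def mult_2x2(X, Y):
    (a, b), (c, d) = X
    (e, f), (g, h) = Y
    return [[a * e + b * g, a * f + b * h],
            [c * e + d * g, c * f + d * h]]

# Example: each entry is the dot product of a row of X with a column of Y.
print(mult_2x2([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19, 22], [43, 50]]
```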

Okay, so that's multiplying two matrices, and

we've already discussed the definition in general.

Now, suppose you had to write a program to actually compute the result of

multiplying two n x n matrices.

One natural way to do that would just be to return to the definition,

which defines each of the n squared entries of the output matrix z as a suitable sum

of products of entries of the input matrices x and y.
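That definition-based algorithm can be sketched as follows (a minimal version, assuming the matrices are given as n x n lists of lists; the function name is my own):

```python
# Straight-from-the-definition matrix multiplication:
# z[i][j] is the dot product of row i of x with column j of y.
def naive_matmul(x, y):
    n = len(x)
    z = [[0] * n for _ in range(n)]
    for i in range(n):          # n choices of row
        for j in range(n):      # n choices of column
            for k in range(n):  # n multiply-adds per entry
                z[i][j] += x[i][k] * y[k][j]
    return z
```

Counting each scalar addition or multiplication as constant time, you can read the running time off the three nested loops, which is exactly what the quiz asks about.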

So in the next quiz I'd like you to figure out exactly what would be the running time

of that algorithm as a function of the matrix dimension n.

where, as usual, we count the addition or multiplication of two individual entries

as a constant-time operation.