0:00

In this video, we'll put the master method to use by instantiating it for six different examples. But first, let's recall what the master method says.

So the master method takes as input recurrences of a particular format, in particular recurrences that are parameterized by three different constants, a, b, and d. a refers to the number of recursive calls, or the number of subproblems that get solved. b is the factor by which the subproblem size is smaller than the original problem size. And d is the exponent in the running time of the work done outside of the recursive calls.

So the recurrence has the form: T(n), the running time on an input of size n, is at most a, the number of subproblems, times the time required to solve each subproblem, which is T(n/b), because the input size of a subproblem is n/b, plus O(n^d), the work outside of the recursive calls. There's also a base case, which I haven't written down: once the problem size drops below a particular constant, there should be no more recursion, and you can just solve the problem immediately, that is, in constant time.

Now, given a recurrence in this permitted format, the running time is given by one of three formulas, depending on the relationship between a, the number of recursive calls, and b raised to the d power. Case 1 of the master method is when these two quantities are the same, a = b^d. Then the running time is O(n^d log n), no more than that. In Case 2, the number of recursive calls a is strictly smaller than b^d. Then we get a better running time upper bound of O(n^d). And when a is bigger than b^d, we get the somewhat funky-looking running time of O(n^(log_b a)). We will understand where that formula comes from a little later.

So that's the master method. It's a little hard to interpret the first time you see it, so let's look at some concrete examples.
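The three-case rule above is mechanical enough to express in code. Here is a minimal sketch in Python (the function name and the string form of the bounds are my own, not from the lecture):

```python
import math

def master_method(a, b, d):
    """Classify a recurrence of the form T(n) <= a * T(n/b) + O(n^d)
    into one of the three master method cases.

    Returns (case number, big-O bound as a string)."""
    if a == b ** d:
        return 1, f"O(n^{d} log n)"   # Case 1: a = b^d
    if a < b ** d:
        return 2, f"O(n^{d})"         # Case 2: a < b^d
    # Case 3: a > b^d, exponent is log base b of a
    return 3, f"O(n^{math.log(a, b):.2f})"
```

Each of the examples in this video amounts to one call of this function with the right (a, b, d) triple.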

1:47

Let's begin with an algorithm whose running time we already know: merge sort. Again, what's so great about the master method is that all we have to do is identify the values of the three relevant parameters a, b, and d, and we're done; we just plug them in and we get the answer. So a, remember, is the number of recursive calls, and in merge sort, recall, we make two recursive calls.

2:09

b is the factor by which the subproblem size is smaller than that of the original. Well, we recurse on half the array, so the subproblem size is half that of the original, so b = 2. And recall that outside of the recursive calls, all merge sort does is merge, and that's a linear-time subroutine. So the exponent d is 1, a reflection of the fact that it's linear time. So remember, the key trigger which determines which of the three cases we're in is the relationship between a and b^d.

2:38

a, obviously, is 2, and b^d = 2. So this puts us in Case 1. And remember, in Case 1 we have that the running time is bounded above by O(n^d log n). In our case d = 1, so this is just O(n log n), which, of course, we already knew. But at least this is a sanity check: the master method is reconfirming facts which we've already proven by direct means.
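As a concrete reference point, here is a standard merge sort sketch in Python; the two half-size recursive calls and the linear-time merge are exactly what give a = 2, b = 2, d = 1:

```python
def merge_sort(a):
    """Sort a list: two recursive calls on half-size inputs (a=2, b=2)
    plus a linear-time merge (d=1), so Case 1 gives O(n log n)."""
    if len(a) <= 1:
        return a                       # base case: constant time
    mid = len(a) // 2
    left = merge_sort(a[:mid])         # first recursive call
    right = merge_sort(a[mid:])        # second recursive call
    # Linear-time merge of the two sorted halves.
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out + left[i:] + right[j:]
```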

So let's look at a second example: the binary search algorithm in a sorted array. Now, we haven't talked explicitly about binary search and I'm not planning to, so if you don't know what binary search is, please read about it in a textbook or just look it up on the web; it'll be easy to find descriptions. But the upshot is, this is basically how you'd look up a phone number in a phone book. Now, I realize probably the youngest viewers of this video haven't actually had the experience of using a physical telephone book. But for the rest of you, as you know, you don't actually start with the As and then go to the Bs and then go to the Cs if you're looking for a given name. You more sensibly split the telephone book roughly in the middle and, depending on whether what you're looking for is earlier or later in the alphabet, you effectively recurse on the relevant half of the telephone book. So binary search is just exactly the same algorithm when you're looking for a given element in a sorted array. You start in the middle of the array, and then you recurse on the left or the right half as appropriate, depending on whether the element you're looking for is bigger or smaller than the middle element. Now, the master method applies equally well to binary search, and it tells us what its running time is. So in the next quiz you'll go through that exercise.

4:18

So the correct answer is the first one. To see why, let's recall what a, b, and d mean. a is the number of recursive calls. Now, in binary search you only make one recursive call; this is unlike merge sort. Remember, you just compare the element you're looking for to the middle element. If it's less than the middle element, you recurse on the left half; if it's bigger than the middle element, you recurse on the right half. So in any case there's only one recursive call, so a is merely 1 in binary search. Now, in any case you recurse on half the array, so like in merge sort the value of b = 2; you recurse on a problem of half the size. And outside of the recursive call, the only thing you do is one comparison: you just determine whether the element you're looking for is bigger than or less than the middle element of the array that you recursed on. So that's constant time outside the recursive call, giving us a value for d of 0. Just like merge sort, this is again Case 1 of the master method, because we have a = b^d; both in this case are equal to 1.

5:15

So this gives us a solution to our recurrence of O(n^d log n). Since d = 0, this is simply O(log n). And again, many of you probably already know that the running time of binary search is O(log n), or you can figure that out easily. Again, this is just using the master method as a sanity check, to reconfirm that it's giving us the answers that we expect.
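For reference, here is a recursive binary search sketch in Python; note the single recursive call on half the range and the constant work per call, which is where a = 1, b = 2, d = 0 come from:

```python
def binary_search(arr, target, lo=0, hi=None):
    """Search a sorted list for target, returning its index or -1.
    One recursive call on half the range (a=1, b=2) plus one
    comparison (d=0), so Case 1 gives O(log n)."""
    if hi is None:
        hi = len(arr)
    if lo >= hi:
        return -1                      # base case: empty range
    mid = (lo + hi) // 2
    if arr[mid] == target:
        return mid
    if target < arr[mid]:
        return binary_search(arr, target, lo, mid)       # left half
    return binary_search(arr, target, mid + 1, hi)       # right half
```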

Let's now move on to some harder examples, beginning with the first recursive algorithm for integer multiplication. Remember, this is where we recurse on four different products of n/2-digit numbers and then recombine them in the obvious way, using padding by zeros and some linear-time additions. In the first integer multiplication algorithm, which does not make use of Gauss's trick and just does the four different recursive calls in the naive way, we have a, the number of recursive calls, equal to 4. Now, in each case, whenever we take a product of two smaller numbers, the numbers have n/2 digits, so that's half as many digits as we started with. So just like in the previous two examples, b = 2: the input size drops by a factor of 2 when we recurse. Now, how much work do we do outside of the recursive calls? Well, again, all the algorithm does is additions and padding by zeros, and that can be done in linear time. Linear time corresponds to a parameter value of d = 1. So next we determine which case of the master method we're in. a = 4, and b^d = 2, which in this case is less than a. So we're in Case 3, where the running time is O(n^(log_b a)).

7:02

With our parameter values, that's n^(log_2 4), also known as O(n^2). So let's compare this to the simple algorithm that we all learned back in grade school. Recall that the iterative algorithm for multiplying two integers also takes an n^2 number of operations. So this was a clever idea, to attack the problem recursively. But at least in the absence of Gauss's trick, where you just naively compute each of the four necessary products separately, you do not get any improvement over the iterative algorithm that you learned in grade school. Either way, it's an n^2 number of operations. But what if we do make use of Gauss's trick, where we do only three recursive calls instead of four? Surely the running time won't be any worse than n^2, and hopefully it's going to be better. So I'll let you work out the details on this next quiz.

7:53

So the correct answer to this quiz is the fourth option. It's not hard to see what the relevant values of a, b, and d are. Remember, the whole point of Gauss's trick is to reduce the number of recursive calls from four down to three, so the value of a is going to be 3. As usual, we're recursing on a problem size which is half that of the original, in this case n/2-digit numbers, so b remains 2. And just like in the more naive recursive algorithm, we only do linear work outside of the recursive calls; all that's needed is some additions and padding by zeros, so d is again 1. So those are the parameter values a, b, and d. Then we have to figure out which case of the master method we're in. We have a = 3 and b^d = 2. So a has dropped by 1 relative to the more naive algorithm, but we're still in Case 3 of the master method: a is still bigger than b^d. So the running time is governed by that rather exotic-looking formula, namely T(n) = O(n^(log_b a)), which in our case is O(n^(log_2 3)), with a now 3 instead of 4. So the master method tells us that the solution to this recurrence is O(n^(log_2 3)). So what is log_2 3? Well, plug it into your computer or your calculator, and you'll find that it's roughly 1.59. So we've got a running time of n^1.59, which is certainly better than n^2. It's not as fast as n log n, not as fast as the merge sort recurrence, which makes only two recursive calls. But it's quite a bit better than quadratic. So, summarizing: you did in fact learn a suboptimal algorithm for integer multiplication way back in grade school. You can beat the iterative algorithm using a combination of recursion plus Gauss's trick to save on the number of recursive calls.
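A compact Python sketch of the three-call algorithm (Karatsuba multiplication, which is the standard implementation of Gauss's trick; the split point m and the variable names are illustrative):

```python
def karatsuba(x, y):
    """Multiply nonnegative integers with three recursive calls
    instead of four (Gauss's trick), so T(n) = 3 T(n/2) + O(n),
    which is O(n^(log_2 3)) ~ O(n^1.59) by Case 3."""
    if x < 10 or y < 10:
        return x * y                         # base case: one digit
    m = max(len(str(x)), len(str(y))) // 2   # split point, in digits
    high_x, low_x = divmod(x, 10 ** m)
    high_y, low_y = divmod(y, 10 ** m)
    z0 = karatsuba(low_x, low_y)             # recursive call 1
    z2 = karatsuba(high_x, high_y)           # recursive call 2
    # Gauss's trick: get the cross terms from one product of sums.
    z1 = karatsuba(low_x + high_x, low_y + high_y) - z2 - z0
    # Recombine with padding by zeros (shifts) and additions.
    return z2 * 10 ** (2 * m) + z1 * 10 ** m + z0
```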

9:54

So recall the salient properties of Strassen's algorithm. The key idea is similar to Gauss's trick for integer multiplication. First you set up the problem recursively; one observes that the naive way to solve the problem recursively would lead to eight subproblems, but if you're clever about saving some computations, you can get it down to just seven recursive calls, seven subproblems. So a, in Strassen's algorithm, is equal to 7. As usual, each subproblem size is half that of the original, so b = 2. And the amount of work done outside of the recursive calls is linear in the matrix size, and hence quadratic in the dimension n, because there's a quadratic number of entries in terms of the dimension. So n^2 work outside the recursive calls leaves you with a value of d = 2. So, as far as which case of the master method we're in: well, it's the same as in the last couple of examples. a = 7, and b^d = 4, which is less than a. So once again we're in Case 3, and the running time of Strassen's algorithm is T(n) = O(n^(log_2 7)), which is more or less n^2.81. And again, this is a win, once we use the savings to get down to just seven recursive calls. This beats the naive iterative algorithm, which, recall, would require cubic time. So that's another win for clever divide and conquer in matrix multiplication, via Strassen's algorithm. And once again, the master method, just by plugging in parameters, tells us exactly what the right answer to this recurrence is.
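At the base of Strassen's recursion, the seven products look as follows; these are the standard published 2x2 formulas, shown here as a sketch (in the full algorithm each entry would itself be a half-size matrix block, handled by a recursive call):

```python
def strassen_2x2(A, B):
    """Multiply two 2x2 matrices using only 7 multiplications
    instead of the naive 8 -- the savings behind a = 7."""
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    m1 = (a + d) * (e + h)        # the seven Strassen products
    m2 = (c + d) * e
    m3 = a * (f - h)
    m4 = d * (g - e)
    m5 = (a + b) * h
    m6 = (c - a) * (e + f)
    m7 = (b - d) * (g + h)
    # Recombine with additions only (the O(n^2) work, d = 2).
    return [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4, m1 - m2 + m3 + m6]]
```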

11:31

So for the final example, I feel a little guilty, because I've shown you five examples and none of them have triggered Case 2. We've had two examples in Case 1 of the master method and now three in Case 3. So this will be a fictitious recurrence, just to illustrate Case 2. But there are examples of recurrences that come up where Case 2 is the relevant one.

11:49

So let's just look at the following recurrence. This recurrence is just like merge sort: we recurse twice, two recursive calls, each on half the problem size. The only difference is that in this recurrence we're working a little bit harder on the combine step. Instead of linear time outside of the recursive calls, we're doing a quadratic amount of work.

12:13

So a = 2, b = 2, and d = 2. So b^d = 4, strictly bigger than a, and that's exactly the trigger for Case 2. Now recall what the running time is in Case 2: it's simply O(n^d), where d is the exponent in the combine step. In our case d is 2, so we get a running time of O(n^2). And you might find this a little counterintuitive, right? All we've done relative to merge sort is change the combine step from linear to quadratic, and merge sort has a running time of n log n, so you might have expected the running time here to be n^2 log n. But that would be an overestimate; the master method gives us a tighter upper bound and shows that it's only quadratic work. So, put differently, the running time of the entire algorithm is governed by the work outside of the recursive calls in just the outermost call to the algorithm, just at the root of the recursion tree.
Â