We also observed that each level-j subproblem is passed

an array as input which has length n over 2 to the j.

And we know that the merge subroutine, when given an array of size

n over 2 to the j, will execute at most 6 times that many lines of code.

So to compute the total amount of work done at level j,

we just multiply the number of problems times the work done per subproblem.

And then something sort of remarkable happens.

We get this cancellation of the two 2 to the j's, and we get an upper bound

of 6n, which is independent of the level j.

So we do at most 6n operations at the root.

We do at most 6n operations at level one, at level two, and so on, okay.

It's independent of the level.
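The per-level accounting can be sketched in code. This is a minimal sketch under my own assumptions (the function name `work_per_level` and the choice of n as a power of 2 are for illustration); it charges each subproblem the 6-times-its-size bound on the merge subroutine:

```python
import math

def work_per_level(n):
    """For an input of size n (assumed a power of 2), return a list of
    (number of subproblems, work per subproblem, total work) for every
    level j of the MergeSort recursion tree, using the 6 * (subproblem
    size) upper bound on the merge subroutine."""
    levels = []
    for j in range(int(math.log2(n)) + 1):
        subproblems = 2 ** j             # doubles with each level
        size = n // 2 ** j               # halves with each level
        total = subproblems * 6 * size   # the 2^j factors cancel: always 6n
        levels.append((subproblems, 6 * size, total))
    return levels

# Every level's total is 6n, independent of j:
for subs, per, total in work_per_level(1024):
    assert total == 6 * 1024
```

Running it on any power of 2 shows the cancellation directly: the first column doubles, the second halves, and the product in the third column never moves.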

Morally, the reason this is happening is a perfect equilibrium between

two competing forces.

First of all,

the number of subproblems is doubling with each level of the recursion tree.

But secondly, the amount of work that we do per subproblem is halving with each

level of the recursion tree.

And so those two cancel out.

We get an upper bound of 6n, which is independent of the level j.

Now, here's why that's so

cool, right, we don't really care about the amount of work just at a given level.

We care about the total amount of work that MergeSort does, over all of the levels.

But if we have a bound on the amount of work at a level which is independent of

the level, then our overall bound is really easy.

What do we do?

We just take the number of levels.

And we know what that is.

It's exactly log base 2 of n + 1.

Remember, the levels are zero through log base 2 of n inclusive.

And then we have an upper bound of 6n for each of those log n plus 1 levels.

So, if we expand out this quantity, we get exactly the upper bound that was claimed

earlier, namely that the number of operations MergeSort executes

is at most 6n times log base 2 of n, plus 6n.
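Expanding that product is just arithmetic; a quick sketch (the helper name `mergesort_op_bound` is hypothetical, used only to make the expansion concrete):

```python
import math

def mergesort_op_bound(n):
    """Upper bound on MergeSort operations: at most 6n work per level,
    times (log2(n) + 1) levels.  Expanding the product gives the form
    quoted in the lecture: 6n * log2(n) + 6n."""
    levels = math.log2(n) + 1
    return 6 * n * levels  # == 6 * n * math.log2(n) + 6 * n

# For n = 1024 (log2(n) = 10): 6*1024*10 + 6*1024 operations at most.
print(mergesort_op_bound(1024))
```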

So that, my friends, is a running time analysis of the merge sort algorithm.

That's why its running time is bounded by a constant times n log n,

which, especially as n grows large, is far superior to simpler

iterative algorithms like insertion sort or selection sort.
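To tie the analysis to running code, here is a hedged sketch of MergeSort with an operation counter that charges 6m to each merge producing an m-element output, mirroring the accounting above. The counting scheme and names are my own illustration, not the lecture's exact code; the final counter stays under the 6n log2 n + 6n bound:

```python
import math

def merge_sort(a, counter):
    """Sort list a, accumulating in counter[0] an operation count that
    charges 6 * m to each merge of an m-element output (the per-merge
    upper bound used in the analysis)."""
    n = len(a)
    if n <= 1:
        return a
    mid = n // 2
    left = merge_sort(a[:mid], counter)
    right = merge_sort(a[mid:], counter)
    counter[0] += 6 * n  # upper bound on lines executed by this merge
    # Standard two-finger merge of the sorted halves.
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    out.extend(left[i:])
    out.extend(right[j:])
    return out

n = 1024
counter = [0]
result = merge_sort(list(range(n, 0, -1)), counter)
assert result == list(range(1, n + 1))
assert counter[0] <= 6 * n * math.log2(n) + 6 * n
```

The counter actually lands below the bound, since the charge at the leaf level (arrays of size 1, where no merge happens) is part of the slack in the analysis.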