0:02

In this sequence of lectures, we're going to learn Asymptotic Analysis.

This is the language by which every serious computer programmer and

computer scientist discusses the high-level performance of computer algorithms.

As such, it's a totally crucial topic.

In this video, the plan is to segue between the high level discussion that

you've already seen in the course introduction.

And the mathematical formalism which we're going to start

developing in the next video.

Before getting into that mathematical formalism,

however, I want to make sure that the topic is well motivated.

That you have solid intuition for what it's trying to accomplish, and

also that you've seen a couple of simple, intuitive examples.

Let's get started.

0:44

Asymptotic analysis provides basic vocabulary for

discussing the design and analysis of algorithms.

And while it is a mathematical concept, it is by no means math for math's sake.

You will very frequently hear serious programmers saying that such and such code

runs in O of n time, whereas such and such other code runs in O of n squared time.

It's important you know what programmers mean when they make statements like that.

1:09

The reason this vocabulary is so ubiquitous is that it identifies a sweet

spot for discussing the high level performance of algorithms.

What I mean by that is that it is, on the one hand,

coarse enough to suppress all of the details that you want to ignore.

Details that depend on the choice of architecture,

the choice of programming language, the choice of compiler, and so on.

1:34

On the other hand it's sharp enough to be useful.

In particular, to make predictive comparisons

between different high level algorithmic approaches to solving a common problem.

This is going to be especially true for large inputs.

And remember as we discussed in some sense, large inputs are the interesting

ones, those are the ones for which we need algorithmic ingenuity.

For example, asymptotic analysis will allow us to differentiate between better

and worse approaches to sorting.

Better and worse approaches to multiplying two integers and so on.

2:10

Now most serious programmers if you ask them,

what's the deal with asymptotic analysis anyways?

They'll tell you reasonably, that the main point is to suppress

both leading constant factors and lower order terms.

2:24

Now, as we'll see, there's more to asymptotic analysis than just these seven words here.

But long term,

ten years from now if you only remember seven words about asymptotic analysis,

I'll be reasonably happy if these are the seven words that you remember.

So how do we justify adopting a formalism which essentially by definition,

suppresses constant factors and lower order terms?

Well lower order terms basically,

by definition, become increasingly irrelevant as you focus on large inputs.

Which, as we've argued,

are the interesting inputs, the ones where algorithmic ingenuity is important.

As far as constant factors, these are going to be highly dependent on

the details of the environment, the compiler, the language and so on.

So if we want to ignore those details, it makes sense to have a formalism

which doesn't focus unduly on leading constant factors.

3:14

Here's an example.

Remember when we analyzed the merge sort algorithm,

we gave an upper bound on its running time that was 6 times n log n plus 6n.

Where n was the input length, the number of numbers in the input array.

So the lower order term here is the 6n.

That's growing more slowly than n log n, so we just drop that.

And then the leading constant factor is the 6 so we suppress that as well.

After those two suppressions we're left with a much simpler expression, n log n.
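Written out (my notation, matching the bound from the earlier merge sort analysis), the two suppressions look like this:

```latex
% upper bound on merge sort's running time, from the earlier analysis
6 n \log n + 6n
  \;\longrightarrow\; 6 n \log n  % drop the lower order term 6n
  \;\longrightarrow\; n \log n    % suppress the leading constant factor 6
```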

3:44

The terminology would then be to say that the running time of merge sort is big O

of n log n.

So in other words,

when you say that an algorithm's running time is big O of some function.

What you mean is that after you've dropped the lower order terms, and

suppressed the leading constant factor, you're left with the function f of n.

Intuitively, that is what the big O notation means.

So to be clear, I'm certainly not asserting that

constant factors never matter when you're designing and analyzing algorithms.

Rather, I'm just saying that when you think about high level

algorithmic approaches, when you might want to make a comparison between

fundamentally different ways of solving a problem.

Asymptotic analysis is often the right tool for

giving you guidance about which one is going to perform better.

Especially on reasonably large inputs.

Now, once you've committed to a particular algorithmic solution to

a problem.

Of course, you might want to then work harder to improve the leading constant

factor, perhaps even to improve the lower order terms.

By all means if the future of your start up depends on how efficiently you

implement some particular set of lines of code, have at it.

Make it as fast as you can.

In the rest of this video I want to go through four very simple examples.

In fact these examples are so simple, if you have any experience with big O

notation, you're probably better off just skipping the rest of this video.

And moving on to the mathematical formalism that we begin in the next video.

But if you've never seen it before,

I hope these simple examples will get you oriented.

5:37

That is, the code just checks each array entry in turn.

If it ever finds the integer t it returns TRUE.

If it falls off the end of the array without finding it, it returns FALSE.

So what do you think?

We haven't formally defined big O notation, but

I've given you an intuitive description.

What would you say is the running time of this algorithm

as a function of the length of the array, capital A?

6:12

Why is that true?

Well let's think about how many operations this piece of code is going to execute.

Actually the lines of code executed is going to depend on the input.

It depends on whether or not the target t is contained in the array A.

And if so, where in the array A it lies.

But, in the worst case this code will do an unsuccessful search.

t will not be in the array, and the code will scan through the entire array A and

return FALSE.

The amount of work outside the loop is a constant.

There is some initial setup, perhaps, and

maybe an operation to return this final boolean value.

But outside of that constant, which will get suppressed in the big O notation,

it does a constant number of operations per entry in the array.

And you can argue about what the constant is,

if it's two, three, or four operations per entry in the array.

But the point is, whatever that constant is two, three, or four,

it gets conveniently suppressed by the big O notation.

So as a result, the total number of operations would be linear in n, and

so the big O notation will just be O of n.
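The slide's code isn't reproduced in the transcript, but a minimal Python sketch of this linear search (the function name is my own) might look like:

```python
def linear_search(A, t):
    # Check each array entry in turn; constant work per entry.
    for x in A:
        if x == t:
            return True
    # Fell off the end of the array without finding t.
    return False
```

In the worst case, an unsuccessful search, the loop runs all n times, which is where the O of n bound comes from.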

7:17

So that was the first example.

In the last three examples,

I want to look at different ways that we could have two loops.

And in this example, I want to think about one loop followed by another, so

two loops in sequence.

I want to study almost the same problem as the previous one.

Where now we are just given two arrays, capital A and capital B,

let's say both are the same length.

And we want to know whether the target t is in either one of them.

Again, we'll look at the straightforward algorithm where we just search through A.

And if we fail to find t in A, we search through B.

If we don't find t in B either, then we have to return FALSE.

8:05

Well, the question was the same and in this case the answer was the same.

So this algorithm, just like the last one, has running time big O of n.

If we actually count the number of operations

of course it won't be exactly the same as last time.

It'll be roughly twice as many operations as the previous piece of code.

That's because we have to search two different arrays, each of length n.

So, whatever work we did before we now do it twice as many times.

Of course that two, being a constant

independent of the input length n,

is going to get suppressed once we pass to the big O notation.

So this, like the previous algorithm, is a linear time algorithm,

it has running time big O of n.
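A sketch of this two-loops-in-sequence version, under the same assumptions as before (the names are mine, not from the slide):

```python
def search_two_arrays(A, B, t):
    # First scan A; if t is found, we're done.
    for x in A:
        if x == t:
            return True
    # Otherwise scan B.
    for x in B:
        if x == t:
            return True
    # t is in neither array.
    return False
```

The two loops run one after the other, so there are at most 2n iterations in total, and the constant 2 disappears in the big O notation.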

Let's look at a more interesting example of two loops,

where rather than processing each loop in sequence, they're going to be nested.

In particular, let's look at the problem of deciding whether two given input

arrays, each of length n, contain a common number.

9:06

The code that we're going to look at for solving this problem is the most

straightforward one you can imagine, where we just compare all possibilities.

So for each index i into the array A and each index j into the array B,

we just check whether A[i] is the same number as B[j].

If it is we return TRUE.

If we exhaust all of the possibilities without ever finding equal elements,

then we're safe in returning FALSE.

9:45

So this time the answer has changed.

For this piece of code the running time is not big O of n, but

it is big O of n squared.

So, we might also call this a quadratic time algorithm.

Because the running time is quadratic in the input length n.

So, this is one of those kinds of algorithms where if you double

the input length,

then the running time of the algorithm will go up by a factor of four

rather than by a factor of two, like in the previous two pieces of code.

So why is this?

Why does it have quadratic running time big O of n squared?

Well again,

there's some constant setup costs which gets suppressed in the big O notation.

Again, for each fixed choice of an index i into array A and an index j into array B,

for each fixed choice of i and j, we only do a constant number of operations.

The particular constant is irrelevant, because it gets suppressed in the big O

notation.

What's different is that there's a total of n squared

iterations of this double for loop.

In the first example we only had n iterations of a single for loop.

In our second example, because one for loop completed before the second one

began, we had only 2n iterations overall.

Here, for each of the n iterations of the outer for loop,

we do n iterations of the inner for loop.

So that gives us the n times n, i.e., n squared, total iterations.

So that's going to be the running time of this piece of code.
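The nested-loop version described here might be sketched in Python as follows (again, the function name is my own):

```python
def have_common_element(A, B):
    # For each index i into A and each index j into B,
    # check whether A[i] equals B[j]: n * n iterations in total.
    for i in range(len(A)):
        for j in range(len(B)):
            if A[i] == B[j]:
                return True
    # Exhausted all pairs without finding equal elements.
    return False
```

Each of the n outer iterations triggers n inner iterations, which is the source of the n squared, i.e., quadratic, running time.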

11:21

So here's the piece of code we're going to analyze for solving this problem, for

detecting whether or not the input array A has duplicate entries.

There's only two small differences relative to the code that

we went through on the previous slide when we had two different arrays.

11:36

The first change won't surprise you at all which is instead of referencing

the array B, I change that B to an A.

All right, so

I'm just going to compare the i th entry of A with the j th entry of A.

The second change is a little bit more subtle.

Which is that I changed the inner for loop so that the index j begins at i plus one,

where i is the current value of the outer for loop's index,

rather than starting at the index one.

I could have had it start at the index one, that would still be correct, but

it would be wasteful and you should think about why.

If we started the inner for loop's index at one, then this code would actually

compare each distinct pair of elements of A to each other twice.

Which of course is silly.

You only need to compare two different elements of A to each other once,

to know whether they're equal or not.

So this is the piece of code, the question is the same as it always is.

What in terms of big O notation and

the input length n is the running time of this piece of code?

12:35

So the answer to this question, same as the last one, big O of n squared.

That is, this piece of code also has quadratic running time.

So what I hope is clear is that, whatever the running time of this piece of code is,

it's proportional to the number of iterations of this double for loop.

Like in all the examples, we do constant work per iteration.

We don't care about the constant, it gets suppressed by the Big O notation.

So all we gotta do is figure out how many iterations there

are of this double for loop.

My claim is that there's roughly n squared over two

iterations of this double for loop.

There's a couple ways to see that.

Informally, we discussed how the difference between this code and the previous one

is that instead of counting something twice, we're counting it once.

So that saves us a factor of two in the number of iterations.

Of course, this one half factor gets suppressed by the big O notation anyways.

So the big O running time doesn't change.

A different argument would just say, there's one iteration for

every distinct choice of i and j of indices between one and n.

And a simple counting argument says that there's

n choose two such choices of distinct i and j.

Where n choose two is the number n times n minus one over two.

And again, suppressing lower order terms and the constant factor,

we still get a quadratic dependence on the length of the input array A.
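The duplicate-detection variant, with the inner index starting at i plus one as described, might be sketched like this (name is my own):

```python
def has_duplicate(A):
    # The inner index j starts at i + 1, so each distinct pair of
    # entries is compared exactly once: about n^2 / 2 iterations,
    # i.e., n choose 2 = n * (n - 1) / 2 pairs.
    for i in range(len(A)):
        for j in range(i + 1, len(A)):
            if A[i] == A[j]:
                return True
    return False
```

Starting j at 1 instead would also be correct, but it would compare every pair twice, which the big O notation would hide anyway.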

13:52

So that wraps up these simple, basic examples.

I hope this gets you oriented, that you have a strong intuitive sense for what big O

notation is trying to accomplish, and for how it's going to be defined mathematically.

Let's now move on to both the mathematical development and

some more interesting algorithms.