0:00

Having slogged through the formal definition of big O notation, I want to quickly turn to a couple of examples. Now, I want to warn you up front, these are pretty basic examples. They're not really going to provide us with any insight that we don't already have, but they serve as a sanity check that big O notation is doing what its intended purpose is, namely to suppress constant factors and lower-order terms. Obviously, these simple examples will also give us some facility with the definition.

So the first example is going to be to prove formally the following claim. The claim states that if T(n) is a polynomial of degree k, namely T(n) = a_k n^k + ... + a_1 n + a_0, for any positive integer k and any coefficients a_i, positive or negative, then T(n) is O(n^k). So this claim is a mathematical statement, and something we'll be able to prove. As far as what this claim is saying, it's just saying that big O notation really does suppress constant factors and lower-order terms: if you have a polynomial, then all you have to worry about is the highest power in that polynomial, and that dominates its growth as n goes to infinity.

So, recall how one goes about showing that one function is big O of another. The whole key is to find a pair of constants, c and n_0, where c quantifies the constant multiple of the function you're trying to prove big O of, and n_0 quantifies what you mean by "for all sufficiently large n." Now, for this proof, to keep things very simple to follow, but admittedly a little mysterious, I'm just going to pull these constants, c and n_0, out of a hat. So I'm not going to tell you how I derived them, but it'll be easy to check that they work. Let's work with n_0 equal to 1, a very simple choice of n_0, and then c we're going to pick to be the sum of the absolute values of the coefficients: the absolute value of a_k, plus the absolute value of a_(k-1), and so on, down to the absolute value of a_0. Remember, I didn't assume that the original polynomial had non-negative coefficients. So I claim these constants work, in the sense that we'll be able to establish the definition of big O notation. What does that mean? Well, we need to show that for all n at least 1 (because, remember, we chose n_0 equal to 1), T(n), the polynomial above, is bounded above by c times n^k, where c is the sum of the absolute values of the coefficients.

So let's just check why this is true. For every positive integer n at least 1, what do we need to prove? We need to prove that T(n) is upper bounded by something else. So we're going to start on the left-hand side with T(n), and we need a sequence of upper bounds terminating with c times n^k, for our choice of c. So T(n) is given as equal to the polynomial a_k n^k + ... + a_1 n + a_0. Now, what happens when we replace each of the coefficients with the absolute value of that coefficient?

Well, when you take the absolute value of a number, either it stays the same as it was before, or it flips from negative to positive. Now, n here, we know, is at least 1, so every power of n is positive; if any coefficient flips from negative to positive, its term, and hence the overall number, only goes up. So if we take the absolute value of each of the coefficients, we get an only bigger number. That is, T(n) is bounded above by the new polynomial whose coefficients are the absolute values of those we had before. So why was that a useful step? Well, now we can play the same trick, but with n.

It's sort of annoying how right now we have these different powers of n. It would be much nicer if we just had a common power of n, so let's replace all of these different powers of n by n^k, the biggest power of n that shows up anywhere.

If you replace each of these lower powers of n with the higher power n^k, each term only goes up; and since the coefficients are now all non-negative, the overall number only goes up. So this is bounded above by |a_k| n^k + ... + |a_1| n^k + |a_0| n^k.

I'm using here that n is at least 1, so higher powers of n are only bigger. And now you'll notice that, by our choice of c as the sum of the absolute values of the coefficients, this is exactly equal to c times n^k. And that's what we had to prove: that T(n) is at most c times n^k, given our choice of c, for every n at least 1.
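As a quick numerical sanity check of the inequality just established (a sketch, not part of the lecture), the following Python snippet takes a sample degree-3 polynomial with mixed-sign coefficients, sets c to the sum of the absolute values of the coefficients and n_0 = 1, and verifies T(n) <= c * n^k over a range of n:

```python
# Sample polynomial T(n) = 5n^3 - 3n + 4, i.e. a_0=4, a_1=-3, a_2=0, a_3=5.
# (The specific coefficients are an arbitrary illustration.)
coeffs = [4, -3, 0, 5]
k = len(coeffs) - 1                 # degree of the polynomial
c = sum(abs(a) for a in coeffs)     # c = |a_0| + |a_1| + ... + |a_k| = 12

def T(n):
    """Evaluate the polynomial at n."""
    return sum(a * n**i for i, a in enumerate(coeffs))

# Check T(n) <= c * n^k for all n from n_0 = 1 up to 999.
assert all(T(n) <= c * n**k for n in range(1, 1000))
print("T(n) <= c * n^k holds for n = 1..999 with c =", c)
```

The same check passes for any choice of coefficients, which is exactly what the proof guarantees.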

And we just proved that, so, end of proof.

Now, there remains the question of how I knew what a workable value of c and n_0 were. If you yourself want to prove that something is big O of something else, usually what you do is reverse engineer constants that work. So you'd go through a proof like this with generic values of c and n_0, and then you'd say, "Ah, well, if only I choose c in this way, I can push the proof through," and that tells you what c you should use. If you look at the optional video on further examples of asymptotic notation, you'll see some examples where we derive the constants via this reverse engineering method.

But now let's turn to a second example, or really, I should say, a non-example: we're going to prove that something is not big O of something else. So I claim that for every k at least 1, n^k is not O(n^(k-1)). And again, this is something you would certainly hope would be true; if it were false, there'd be something wrong with our definition of big O notation. So really, this is just to get further comfort with the definition, to see how to prove that something is not big O of something else, and to verify that indeed you don't have any collapse of distinct powers of polynomials, which would be a bad thing.

So how would we prove that something is not big O of something else? The most frequently useful proof method is going to be by contradiction. Remember, in a proof by contradiction, you assume that what you're trying to establish is actually false, and from that you do a sequence of logical steps, culminating in something which is just patently false, which contradicts basic axioms of mathematics, or of arithmetic. So suppose, in fact, n^k were O(n^(k-1)); that's assuming the opposite of what we're trying to prove. What would that mean? Well, we just refer to the definition of big O notation. If, in fact, n^k were hypothetically O(n^(k-1)), then by definition there would be two constants, a winning strategy if you like, c and n_0, such that for all sufficiently large n, n^k is upper bounded by the constant multiple c times n^(k-1).

From this, we need to derive something which is patently false; that will complete the proof. And the easiest way to do that is to cancel n^(k-1) from both sides of this inequality. Remember, since n is at least 1 and k is at least 1, n^(k-1) is positive, so it's legitimate to cancel it from both sides. And when we do that, we get the assertion that n is at most some constant c, for all n at least n_0. And this is now a patently false statement: it is not the case that all sufficiently large positive integers are bounded above by a constant c. In particular, c + 1, or the first integer above c, is bigger than c. So that provides the contradiction, showing that our original assumption that n^k is O(n^(k-1)) is false. And that proves the claim: n^k is not O(n^(k-1)), for every value of k. So different powers of polynomials do not collapse. They really are distinct, with respect to big O notation.
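The contradiction can also be seen numerically (again, a sketch, not part of the lecture): whatever constant c a hypothetical big-O proof proposed, the integer just above c already violates n^k <= c * n^(k-1).

```python
def violates(c, k):
    """Return an n >= 1 with n**k > c * n**(k-1), refuting the bound for this c."""
    n = int(c) + 1                  # the first integer larger than c
    assert n**k > c * n**(k - 1)    # equivalent to n > c after cancelling n^(k-1)
    return n

# No matter how large a constant the adversary picks, a violating n exists.
for c in [1, 10, 1000, 10**6]:
    n = violates(c, k=3)
    print(f"c = {c}: n = {n} gives n^3 = {n**3} > c * n^2 = {c * n**2}")
```

Cancelling n^(k-1) reduces the claimed inequality to n <= c, which is exactly what the loop refutes for each candidate c.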
