0:35

As I mentioned in the first few lectures, the big O notation is really not adequate for this. If you say big O of log N, it doesn't give you an accurate measure of the quantity. It's an upper bound, it's only within a constant factor, and there's no way to get a precise estimate of the quantity from it.

If we take a definition like this, say H sub N for the harmonic numbers, that one's accurate, but it's not concise. It's going to take time to compute the exact value that you want.

Now, that's not too bad, but still, the spirit of what we're talking about is to try to get accurate and concise expressions like this one: natural log N + gamma + big O of 1 over N.

So that one, for large values of N, will give a numerical result that's very close to the quantity that we're interested in studying. And we'll see lots and lots of examples, but that's the basic goal: we want concise and accurate estimates of the quantities we're interested in studying.
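For a quick numerical check of that example (a sketch in Python; the function names are mine), compare the accurate-but-not-concise harmonic sum against the concise estimate ln N + gamma:

```python
import math

GAMMA = 0.5772156649015329  # Euler's constant, to double precision

def harmonic(n):
    """Accurate but not concise: sum of 1/k for k = 1..N."""
    return sum(1.0 / k for k in range(1, n + 1))

def harmonic_estimate(n):
    """Concise and accurate: ln N + gamma, with error O(1/N)."""
    return math.log(n) + GAMMA

n = 10**6
# The two values agree to within about 1/(2N).
print(harmonic(n), harmonic_estimate(n))
```

The concise form takes constant time to evaluate no matter how large N is, which is exactly the point.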

Now, we won't go crazy with defining what concise means. What I mean by it is that I've got standard functions and constants that are maybe known, and I want to be able to compute the value for large N. It's as simple as that: I want to write a program or use a calculator to compute the value.

And actually, with that kind of definition, it's easy to understand the motivation of scientists in the 18th and 19th centuries, who were learning more about mathematical models of the world and coming up with functions that describe whatever they were interested in studying. They wanted to be able to do calculations in order to compare their hypotheses with what goes on in the natural world. And without asymptotics it would be hard to do so, because without computers you definitely need to be able to compute your answer. That's always a good perspective to have in the back of our minds when we're thinking about asymptotics.

So this is just a reminder; I talked about this notation earlier on. The big O notation is for upper bounds: if you say g of N = big O of f of N, that means the absolute value of the ratio g(N)/f(N) is bounded from above as N goes to infinity. We're going to use that notation for error terms, but not for our leading terms. We also use the little o notation, which says that g of N is little o of f of N if their ratio tends to 0 as N approaches infinity. That says g(N) is asymptotically smaller than f(N), and again, we use that for error terms. In fact, often we use the so-called tilde notation, which just says that the ratio of g(N) to f(N) approaches 1 as N approaches infinity. That's the weakest nontrivial little o.

Â 3:52

So we use those notations to come up with approximations. If we say that g(N) = f(N) + big O(h(N)), it means the error will be within at most a constant factor of h(N) as N increases. Little o means that we know the error will decrease relative to f(N) as N increases, and that's good: it means that for larger N, we get a better result. And tilde, again, is the weakest nontrivial little o; as N increases we expect a better result.

4:26

Those are the basic approximations that we're going to be trying to develop.

So now, with that background, what we're looking at is developing a series of functions. If we have a series of functions g sub k with g sub k+1 = little o(g sub k), that's an asymptotically decreasing series of functions. Then if we write f(N) as a linear combination of those, we call that an asymptotic expansion of the function f, and since the functions decrease, the expansion is supposed to get more accurate as we add more and more terms. Precisely, it represents a collection of formulas: f(N) is big O of g sub 0; it's also c0 g0 plus big O of g sub 1; and so forth. And we can pick off this list of formulas the one that suits our purposes best in terms of getting an accurate estimate of the quantity we're looking at. This will become clearer when we look at specific examples. So we're using the big O notation, but in a specific technical sense: we want to be assured that we can get more accurate asymptotic estimates if needed.
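As a concrete instance of such an expansion (my own illustration in Python, not from the lecture), the leading terms of Stirling's series for ln N! form exactly this kind of decreasing scale, and each partial sum gives a strictly better estimate:

```python
import math

def log_factorial_terms(n):
    # Leading terms of the asymptotic expansion of ln N!;
    # each term is little-o of the one before it.
    return [n * math.log(n) - n,
            0.5 * math.log(2 * math.pi * n),
            1.0 / (12 * n)]

n = 100
exact = math.lgamma(n + 1)  # ln(100!)
partial = 0.0
for term in log_factorial_terms(n):
    partial += term
    print(abs(exact - partial))  # the error shrinks with each added term
```

We stop adding terms as soon as the unused ones are too small to matter for the accuracy we need.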

Â 5:54

So for the standard scale, the functions g sub k that we use are powers of N and log N, maybe log log N, and exponentials. Those are the standard functions that we want to use, and we'll see that it's very easy to express many of the functions that arise in scientific studies in terms of the standard scale, so it's not necessary to give a more detailed definition than this. Typically what happens is we only use a few terms, maybe two, three, or four (the second, third, or fourth equation from the list). When the unused terms are extremely small, that's when we stop, because we have the big O estimate that says we're within a constant factor of that unused term, and it's relatively extremely small.

Â 6:51

So we use the tilde notation if we don't want to bother carrying around the big O information; a lot of times that simplifies the calculations, and there's no reason not to. We usually check our asymptotic estimates against the actual values to make sure that what we have is giving us the accuracy that we want. And if we do want to mathematically specify information about the unused terms, we go ahead and do that using the big O notation or the little o notation. But the main point is that the methods we use should, in principle, extend to any desired precision: if we need more terms, we can get them. And that's a very big difference from the use of the big O notation in the theory of algorithms, where it's both expressing an upper bound and capturing the concept of the worst case. This is more in the spirit of science; the origins are clear in the 18th and 19th centuries, where people wanted to be able to calculate things and then compare them with the results of scientific experiments. That's what we want to do in the analysis of algorithms, and that's why we embrace all of this classical mathematics.

Â 8:45

And so, what we can do with asymptotics is take a pretty complicated theorem statement, which I'll show in just a second, and reduce it down to actually a pretty general result: the coefficient of z to the N in the ratio of two polynomials is asymptotic to a constant times beta to the N times N to the nu minus 1, where 1 over beta is the smallest-modulus root of the polynomial in the denominator and nu is its multiplicity.

So this is just picking the leading term off of the more detailed theorem. In lecture two, I gave a detailed theorem showing that for the coefficient of z to the N in that ratio, there's a term corresponding to each zero of g of z, according to its multiplicity. With asymptotics, we don't need to worry about all the smaller terms in that sum; we just pick out the largest one, in a precise technical sense.

So for example, if the roots give terms 3 to the N and 2 to the N, then we're comparing 3 to the N with 3 to the N plus 2 to the N, and you can see that even by the time N gets to 11 they're pretty close, and when N gets very high they're going to get closer and closer. So usually the pole of smallest modulus really dominates. No question, 3 to the N plus 2 to the N is asymptotic to 3 to the N; you can forget about the 2 to the N for large N. And in fact the convergence is exponentially fast: as N increases even by 1, the estimate gets closer and closer to being accurate. The ratio gets closer and closer to 1.
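You can watch that ratio converge in a couple of lines of Python (my own illustration):

```python
# (3^N + 2^N) / 3^N = 1 + (2/3)^N, which tends to 1 exponentially fast.
for n in (5, 10, 20, 40):
    ratio = (3**n + 2**n) / 3**n
    print(n, ratio)
```

Each extra unit of N multiplies the error term (2/3)^N by another factor of 2/3, which is the exponential convergence mentioned above.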

So usually that pole of smallest modulus, the place where the denominator goes to 0 closest to the origin, is the one that really matters, and I've emphasized this because this particular scenario turns out to be very important in a general context later on in analytic combinatorics.

Â 11:14

Now, there are situations, depending on what these polynomials are, where the poles are close together, or there are multiple poles very close to the one of smallest modulus. In those kinds of cases, we have to figure out how to extract the leading terms. But still, it's very important to realize that we don't need to carry around the small ones. Sure, if one of the roots is one-half and another one is 1 over 1.99999, the estimate is not going to be that close: you'd be off by about a factor of 2 if you tried to throw that term away. But it's easy to detect that situation, and it's important to know that it's easy to throw away the small ones.

So here's how the analysis would go for a recurrence, say one of the recurrences that we started out with earlier: a sub N = 5a sub N-1 - 6a sub N-2, with a sub 0 = 0 and a sub 1 = 1. We only worry about the root of g that's smallest. To solve the recurrence, we make it valid for all N, then multiply by z to the N and sum on N, and that gives us, for the generating function, a ratio of two polynomials. What we're interested in is the coefficient of z to the N in that generating function, and now we can just plug and chug with the theorem. The smallest root of the denominator is one-third, so the coefficient is going to be asymptotic to a constant times 3 to the N. Then we go ahead and calculate the constant, and if you plug the values into that formula, you just get 1.
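A quick check in Python (my sketch; I'm assuming the initial values a_0 = 0 and a_1 = 1 from the earlier lecture, which give the closed form 3^N - 2^N):

```python
def a_sequence(n_max):
    """a_N = 5 a_{N-1} - 6 a_{N-2}, assuming a_0 = 0 and a_1 = 1."""
    seq = [0, 1]
    for _ in range(2, n_max + 1):
        seq.append(5 * seq[-1] - 6 * seq[-2])
    return seq

seq = a_sequence(30)
# The closed form is 3^N - 2^N, so the ratio to 3^N tends to 1:
# the leading-term constant is exactly 1, as the theorem predicts.
print(seq[30] / 3**30)
```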

Â 13:04

And if you want to apply the same thing to, say, the recurrence for the Fibonacci numbers, the same steps are going to work. In that case the constant will be 1 over the square root of 5, and it'll be phi to the N, the golden ratio to the N. The extra term in that case involves phi hat, which is less than 1 in absolute value and totally negligible. So with asymptotics, we get a relatively simple general theorem that gives us a precise and concise result for a big family of problems.
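Numerically (a sketch in Python, names mine), phi^N over the square root of 5 is so close to the exact Fibonacci number that rounding recovers it, since the phi-hat term dies off so fast:

```python
import math

def fibonacci(n):
    """Exact Fibonacci numbers by iteration, F_0 = 0, F_1 = 1."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

phi = (1 + math.sqrt(5)) / 2
for n in (10, 20, 30):
    estimate = phi**n / math.sqrt(5)
    # The neglected phi-hat^n term is tiny, so the estimate
    # rounds to the exact value F_n.
    print(n, fibonacci(n), estimate)
```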

Â 13:53

Another source of expansions is power series and Taylor's theorem. The series are infinite, but since they converge, we can stop at any point to get results like these. In principle, we can take as many terms as we want off the infinite series, and then we have an asymptotic series in the standard scale. Again, these are immediate from Taylor's theorem, where the coefficient of x to the N over N factorial is the Nth derivative of the function, evaluated at 0.

Â 14:26

We looked at all of these when we talked about expanding generating functions; the difference now is that with asymptotics we're only interested in taking a couple of terms, for the purpose of being able to accurately compute values. So those are standard examples. Now, these are Taylor expansions as x goes to 0, but actually we're usually interested in what happens as N increases. So what we'll do is just substitute 1 over N in all of these formulas,

Â 15:00

to get asymptotic expansions.

And these are maybe more familiar in those terms. If you wanted to compute e to the 1 over N, say for N equal to a million, this tells you it's going to be pretty close to 1: it'll be 1, plus 1 over a million, plus 1 over 2 times a million squared. It's not going to be worth trying to compute it any more accurately than that; this gives very great accuracy with just a few terms. Same with log of 1 plus 1 over N: for N equal to a million, that's pretty close to 1 over a million. The next term you won't notice until 12 decimal places out, and the one after that 18 decimal places out. If you just want a few decimal places, use 1 over a million. That's the whole idea of asymptotics.
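Here's that arithmetic in Python (my own illustration):

```python
import math

n = 10**6
x = 1.0 / n

# e^(1/N): three terms already match to machine precision at N = 10^6.
exp_estimate = 1 + x + x * x / 2
print(math.exp(x) - exp_estimate)

# ln(1 + 1/N): the single term 1/N is off only by about 1/(2N^2),
# which first shows up about a dozen decimal places out.
print(math.log(1 + x) - x)
```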

And the binomial (this is for k constant) will have a similar kind of character. Now, this gets more complicated if k grows with N, and that's one of the things we'll talk about later on. And the one we use most often is the geometric series. That last line is what you get when you plug 1 over N into the straight geometric series: what it says is that 1 over (N minus 1) is really close to 1 over N.

Â 17:04

For a problem in asymptotics, we specify the function and then we specify how accurately we want to estimate it, and for these two problems it's definitely worthwhile to work for a second on that. If you go ahead and plug in from the previous slide, log of 1 plus 1 over N is 1 over N, minus 1 over 2N squared, plus big O of 1 over N cubed. And log of 1 minus 1 over N is minus 1 over N, minus 1 over 2N squared, plus big O of 1 over N cubed. So in the sum, the 1 over N terms cancel, and you're left with two minus 1 over 2N squared terms, which makes it just minus 1 over N squared plus big O of 1 over N cubed. So right away you can see that the sum of those two logarithms is a rather complicated function, but we can approximate it very accurately as just minus 1 over N squared, and that's going to be quite accurate for N in the practical ranges of interest.

If instead it's the difference, then the 1 over N squared terms cancel, and we're just left with 2 over N. So that's a simple example of asymptotic expansions: using the basic information that we get from Taylor's theorem and then combining those results using just algebra.
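Checking that combination numerically (a Python sketch):

```python
import math

n = 1000
x = 1.0 / n

# Sum: the 1/N terms cancel, leaving about -1/N^2.
s = math.log(1 + x) + math.log(1 - x)
print(s, -1 / n**2)

# Difference: the 1/N^2 terms cancel, leaving about 2/N.
d = math.log(1 + x) - math.log(1 - x)
print(d, 2 / n)
```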
