
Welcome to Calculus. I'm professor Greist.

And we're about to begin Lecture 9, Bonus Material.

In our main lesson, we considered growth rates of functions and ordered them in a

hierarchy, as x goes to infinity. This led to a language for discussing and

controlling that growth, both as x goes to infinity, and as x goes to 0.

This language is called Big-O. Before we get to that, there's one thing

that we need to reconsider a little bit. One of the things we discussed was the

factorial function or factorial growth. But what exactly do we mean by factorial

functions? What is x factorial when x is not an

integer? Well, there is a real definition for it.

It's going to look a little complicated. The definition of x factorial is the integral, as t goes from zero to infinity, of t to the x times e to the minus t, dt.

Oh, what in the world is that? Well, we're not going to explain that

now. We'll get to that later in this course,

maybe in Chapter Three. But, for the moment, one thing to know is

that this definition satisfies the property that factorials have, namely

that x factorial equals x times quantity x minus 1 factorial, at least for x

positive. And for an integer n, this definition agrees with the usual n factorial. I'm not going to prove that now; we'll talk about it later in the course.
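As a quick sanity check (my own illustration, not part of the lecture): Python's standard library exposes this integral as the gamma function, with x factorial equal to gamma of x plus 1.

```python
import math

# The integral definition of x! is, up to a shift, the gamma function:
# x! = gamma(x + 1).
def factorial_real(x):
    """x! for real x > -1, via the gamma function."""
    return math.gamma(x + 1)

# For non-negative integers it agrees with the ordinary factorial...
for n in range(8):
    assert math.isclose(factorial_real(n), math.factorial(n), rel_tol=1e-12)

# ...and it satisfies x! = x * (x - 1)! even for non-integer x.
x = 2.5
assert math.isclose(factorial_real(x), x * factorial_real(x - 1), rel_tol=1e-12)
print(factorial_real(x))  # 2.5! is about 3.323
```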

What I want to argue is that this factorial growth beats exponential growth. Consider the case where x is a positive integer. Then x factorial is the familiar product: x times x minus 1 times x minus 2, et cetera, all the way down.

If we look at e to the x, well, this is some number of copies of e multiplied together. Now, for very large values of x, consider the ratio of x factorial to e to the x: the terms in the numerator and the terms in the denominator pair up perfectly, and almost all of the terms in the numerator are much bigger than the corresponding terms in the denominator. This means that in the limit, as x goes to infinity, the ratio is infinite.

That means that x factorial grows much faster than e to the x.
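To see this numerically (a sketch of my own, assuming Python), watch the ratio of x factorial to e to the x grow:

```python
import math

# The ratio x! / e^x: each new factor in the numerator is roughly x,
# while the denominator only gains a single factor of e each time.
prev = 0.0
for x in [5, 10, 20, 40]:
    ratio = math.factorial(x) / math.exp(x)
    assert ratio > prev   # strictly increasing on this sample
    prev = ratio
    print(x, ratio)
```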

If you're curious about this unusual sort of definition, look up information about

the gamma function. We'll return to that a little bit later

in this course. What I'd like to spend some time on is a

bit of discussion about Big-O. How is this language used? Why is this language used? It is ubiquitous in applied mathematics.

And you're going to see it at various points throughout this course.

So let's take some time and review, and extend what we know.

Recall that a function f is said to be in Big-O of the function g if, in the appropriate limit, f in absolute value is at most some constant C times g in absolute value. We don't care what the constant is; we just want to know that there is some upper bound.
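For instance (my illustration, not from the lecture): f of x equals 3 x squared plus 5 x is in Big-O of x squared as x goes to infinity, since the constant C equals 4 works once x is at least 5.

```python
# f(x) = 3x^2 + 5x is in O(x^2) as x -> infinity:
# |f(x)| <= 4 * |x^2| whenever x >= 5, since 5x <= x^2 there.
def f(x):
    return 3 * x**2 + 5 * x

C = 4
assert all(abs(f(x)) <= C * x**2 for x in range(5, 10_000))
print("C =", C, "works for x >= 5")
```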

Again, this holds in the limit as x goes to wherever we're interested in looking. Maybe x is going to zero, or maybe x is

going to infinity. One of the reasons Big-O is so useful is

that it forms an efficient language. For example, saying that f is in Big-O of 1 means the same thing as saying that f is bounded in the limit, as x goes to zero or infinity or whatever. There are other applications as well, real applications. For example, in computer science, one is

often interested in specifying how much effort it takes to execute a certain

algorithm. This is the subject of computational

complexity. Consider the following.

Let's say you have two n digit numbers and you want to multiply them together.

How much work are you going to have to do?

How many individual mathematical computations is that going to take?

Well, let's consider what it takes to multiply two numbers together using the

standard algorithm that you learned in primary school.

And let's count this as a function of n. And consider what happens as n is going

to infinity. To multiply these two numbers together I

need to take each digit of the second number.

And multiply it by the entire first number.

That is n elementary mathematical operations. Well, maybe it's not exactly n; maybe we should call it 2n, because you have to multiply 6 times 3 to get 18, write the 8, carry the 1, et cetera.

We'll pick some constant times n. But we need to do that for each of the n

digits in the second number. Now, we're still not done because we need

to take all of those resulting products, and add them together.

And I'll let you try to figure out how much effort is involved in that addition.

It's, you know, some number times n, times maybe n, maybe n minus 1, I don't know. I'm not a computer scientist.

I'm a mathematician. But neither I nor the computer scientist

needs to worry about these exact constants.

If all we care about is what happens as n is going to infinity, we can say that

this algorithm is in Big-O of n squared. It's going to take a quadratic amount of

effort. So, as your numbers get bigger and

bigger, you're doing a lot more work to multiply them together.
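The digit-by-digit bookkeeping described above can be sketched in code (a rough model of my own, assuming Python; real arithmetic libraries do something cleverer), counting one elementary operation per digit-by-digit multiply:

```python
def grade_school_multiply(a_digits, b_digits):
    """Multiply two numbers given as digit lists (least significant digit
    first); return (product_digits, count_of_digit_multiplies)."""
    n, m = len(a_digits), len(b_digits)
    result = [0] * (n + m)
    ops = 0
    for i, b in enumerate(b_digits):      # each digit of the second number...
        for j, a in enumerate(a_digits):  # ...times the entire first number
            result[i + j] += a * b
            ops += 1                      # one elementary digit multiply
    for k in range(len(result) - 1):      # propagate the carries
        result[k + 1] += result[k] // 10
        result[k] %= 10
    return result, ops

def to_digits(x):
    return [int(d) for d in str(x)[::-1]]

prod, ops = grade_school_multiply(to_digits(1234), to_digits(5678))
print(int("".join(map(str, prod[::-1]))))  # 7006652
print(ops)                                 # 16 = n^2 for n = 4: O(n^2) work
```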

Is there an algorithm that is better, that is, say, n times log n?

Well, that is what Big-O is good for. Error analysis in engineering, physics,

other sciences, is another excellent application.

Let's say that you know the kinetic energy of your body is one half the mass

times the velocity squared, but let's say that your measurement of the velocity has

some error associated with it. We'll call that error epsilon, and we'll assume that epsilon is very

What is the induced error in our kinetic energy?

Well, of course, if we expand one half m times quantity v plus epsilon squared, we get one half m v squared plus m times v times epsilon plus something that is in Big-O of epsilon squared. What we really care about is the leading-order term in epsilon.

That is going to dictate what the error to the kinetic energy is, and we can see

that that is really the momentum, m times v in this case.
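Numerically (my own sketch, assuming Python; the mass and velocity are made-up values), the remainder after subtracting the momentum term really is of size epsilon squared:

```python
m, v = 70.0, 10.0                  # illustrative mass (kg) and velocity (m/s)
ke = lambda vel: 0.5 * m * vel**2  # kinetic energy

for eps in [1e-1, 1e-2, 1e-3]:
    true_error = ke(v + eps) - ke(v)   # induced error in the energy
    leading = m * v * eps              # momentum times epsilon
    remainder = true_error - leading
    # here the O(eps^2) term is exactly (1/2) m eps^2
    assert abs(remainder - 0.5 * m * eps**2) < 1e-6
    print(eps, true_error, leading)
```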

Now, other examples are relevant to what we began this lesson with.

Let's consider x factorial. We argued that its growth rate is pretty

big, that x factorial is not in Big-O of e to the x.

But we can say more. There is a wonderfully useful and powerful formula in mathematics called Stirling's Formula that says exactly how fast x factorial

grows. One form of this says that log of x

factorial equals x times log of x minus x plus some remainder term that is in Big-O

of log of x. Perhaps the stronger version of

Stirling's Formula will be a bit more transparent.

It says that x factorial equals square root of 2 pi x times quantity x over e to

the x power times 1 plus something in Big-O of 1 over x.

Now you don't need to memorize either of these.

They're wonderfully useful in probability, in combinatorics, in lots of areas.

But what I want you to see is that the appropriate language for expressing how

fast x factorial grows as x goes to infinity is Big-O.
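A quick check of the log form (my own sketch, assuming Python; the standard library's math.lgamma of x plus 1 computes log of x factorial):

```python
import math

# log(x!) = x log x - x + O(log x): the remainder, divided by log x,
# should stay bounded (in fact it tends to 1/2).
for x in [10, 100, 1000, 10_000]:
    log_fact = math.lgamma(x + 1)           # log of x!
    approx = x * math.log(x) - x
    remainder = log_fact - approx
    assert 0 < remainder / math.log(x) < 1  # bounded by a constant
    print(x, remainder / math.log(x))
```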

Pure mathematics uses Big-O as well. Define the function pi of x to be the number of prime numbers less than x. Let's restrict to the positive primes, of course, and x is going to be, let's say, integer-valued. It doesn't matter. Then, one thing we could say is that the

number of primes is Big-O of x. But that's a vacuous statement.

Not every integer is prime. We'd like some idea of how many primes

there are. Well, of course there are infinitely many

primes. But we'd like to know a bit finer

information. What kind of density do the prime numbers

have within the natural numbers? Well, we can use the language of Big-O to state the prime number theorem, which says that pi of x is, to leading order, x over log of x. That's not exactly what it is, it's just the leading-order term, but pi of x is x over log x times quantity 1 plus something in Big-O of 1 over log of x.
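We can watch the theorem at work (my own sketch, assuming Python), counting primes with a simple sieve:

```python
import math

def prime_pi(x):
    """Number of primes <= x, via the sieve of Eratosthenes."""
    sieve = [True] * (x + 1)
    sieve[0:2] = [False, False]
    for p in range(2, math.isqrt(x) + 1):
        if sieve[p]:
            sieve[p * p :: p] = [False] * len(sieve[p * p :: p])
    return sum(sieve)

# The ratio pi(x) / (x / log x) creeps toward 1 -- slowly, with a
# correction of size O(1 / log x), as the theorem predicts.
for x in [10**3, 10**4, 10**5]:
    print(x, prime_pi(x), prime_pi(x) / (x / math.log(x)))
```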

And that's one example of how this language appears in everything from pure

mathematics to applied mathematics. To the physical and computational

sciences. It'll take a little while to get used to

this language, and it's going to take a bit of practice.

Most students find it a bit unlike things that they've learned in the past, because

of this arbitrary constant. But do a few practice problems and you'll

see Big-O cropping up later in the course.