0:10

Like Odysseus after his long wanderings,

we finally come back home to where we began with Taylor series.

Having seen a lot of strange sights along the way,

we come back wiser, with more experience, all the better for

handling whatever challenges we might face here at the end.

In our last lesson, we explored power series,

that is, series of the form sum over n of a sub n x to the n.

And we considered this as an operation for

turning a sequence into a function, f(x).

Now, the question arises, can one invert this procedure and

go from the function to the sequence?

Well, of course you already know the answer to this.

It is, yes, we've been doing this all semester.

The answer is, of course, Taylor series.

Taylor expansion is really a means of turning a function

into a sequence of coefficients which,

when reconstituted into a power series, gives you the original function back.

In Taylor expansion, about, say, x = 0,

we know that these coefficients are the nth

derivatives of f at 0 divided by n factorial.
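As a quick numerical sketch of that recipe (not part of the lecture, just an illustration with f = e to the x, where every derivative at 0 equals 1):

```python
import math

# For f(x) = e^x, every derivative at 0 is 1, so the Taylor
# coefficient c_n = f^(n)(0) / n! is simply 1 / n!.
def taylor_coeff_exp(n):
    return 1.0 / math.factorial(n)

# Reconstituting the coefficients into a power series recovers the function:
x = 0.5
partial_sum = sum(taylor_coeff_exp(n) * x**n for n in range(20))
print(abs(partial_sum - math.exp(x)) < 1e-12)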

But now, we can return to the issue of convergence of Taylor series.

Recall that at the beginning of this course I said,

don't worry about convergence too much.

Just learn how to use Taylor series, and we'll get to issues of convergence later.

Well, it is now later.

And it's now time to worry about convergence of Taylor series.

But now we can say something definite.

The following theorem is crucial.

If we have a power series of the form sum over n of a sub n x to the n, and

we call that f(x), perhaps this is a Taylor expansion about zero.

Then the series converges absolutely for

values of x that are within R of zero,

where R is the radius of convergence,

the limit as n goes to infinity of a sub n over a sub n+1 in absolute value.

Now if this were a Taylor expansion about zero, we see that

what really matters is how quickly the derivatives at zero grow and get large.

The ratio of successive derivatives controls the radius of convergence.
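The ratio formula for R is easy to probe numerically. Here is a minimal sketch (my own example, with a sub n = 1 over 2 to the n, so the series is sum of (x/2) to the n and the radius should be 2):

```python
# Estimate R = lim |a_n / a_{n+1}| for a chosen coefficient sequence.
def ratio_estimate(a, n):
    return abs(a(n) / a(n + 1))

# a_n = 1/2^n: the power series sum (x/2)^n is geometric, with R = 2.
a = lambda n: 1 / 2**n
print(ratio_estimate(a, 50))  # → 2.0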

3:07

Now more importantly, when you are on this interval of convergence from -R to R,

then not only does your function converge, but it's differentiable.

And the derivative is what you obtain differentiating the series term by term.

Moreover, f is integrable.

And the integral within this domain of convergence is exactly what you would get

if you integrate the series term by term,

oh, as long as you don't forget the constant.

Now what's so important about a result like this?

Well it tells us exactly what the convergence domain is and

allows us to manipulate the series.

3:59

Let's see how that's useful.

We have claimed in the past what the Taylor series for arctangent looks like.

Knowing the derivative of arctangent,

we can express it as the integral of dx over 1 + x squared.

Using the geometric series,

we can see that within that domain of absolute convergence, what do we have?

Arctangent is really the integral of the sum over n of

quantity (-x squared) to the n.

Now, if we expand that out and then integrate term

by term, then we get the arctangent as the sum.

n goes from 0 to infinity of -1 to the n over

2n + 1, times x to the 2n+1.

There's a constant of integration, but

that's equal to zero, since arctangent of zero is zero.
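That term-by-term integration can be checked numerically; here is a short Python sketch comparing partial sums of the resulting series against the built-in arctangent:

```python
import math

# Partial sums of arctan x = sum_{n>=0} (-1)^n x^(2n+1) / (2n+1).
def arctan_series(x, terms=50):
    return sum((-1)**n * x**(2*n + 1) / (2*n + 1) for n in range(terms))

# Inside the domain of convergence, |x| < 1, the agreement is excellent:
print(abs(arctan_series(0.5) - math.atan(0.5)) < 1e-12)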

5:10

That works within the domain of convergence from -1 to 1,

based on the geometric series.

Now, what happens at the end points?

Well, when x = 1, then it would seem to suggest

that you get pi over 4 as this alternating sum,

1 - 1/3 + 1/5 - 1/7 + ...

Does that actually converge?

Well yes it does by the alternating series test.

However, this convergence is conditional right at the boundary.
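You can feel that conditional convergence numerically: the alternating-series error bound says the error after N terms is at most the next term, roughly 1 over 2N, so the partial sums creep toward pi over 4 very slowly. A sketch:

```python
import math

# Partial sums of the endpoint series 1 - 1/3 + 1/5 - 1/7 + ... = pi/4.
def leibniz(N):
    return sum((-1)**n / (2*n + 1) for n in range(N))

# Even after 1000 terms, the error is only on the order of 1/(2N):
print(abs(leibniz(1000) - math.pi / 4))
```

Compare that with the rapid convergence of the same series well inside the interval, say at x = 1/2.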

5:49

There's really no end to what one can do with this result.

Let's begin with the geometric series and differentiate it term by term.

When we do so, we obtain the sum over n of n times x

to the n-1 = 1 over (1-x) quantity squared.

Now, let's multiply through by x on both sides.

That takes us back to a simpler looking power series,

that of the sum n x to the n.

We see that that equals x over (1- x) quantity squared.

6:31

If we evaluate this at a particular x, let's say x = one-tenth, what do we get?

We get that this sum,

one-tenth + two one-hundredths + three one-thousandths,

etc., is really equal to the right-hand side evaluated at one-tenth.

That is, 10 over 81.

This has an interesting sort of decimal expansion.

It's somewhat surprising that this discrete integral has

such a simple answer, but of course we could keep going.

We could differentiate once more and then multiply through again by x,

obtaining the result, n squared, x to the n,

summed over n, yields x times quantity (1 + x), over quantity (1 - x) cubed.

If we again evaluate this at one-tenth,

then we get the not very intuitive answer that one-tenth

+ four one-hundredths + nine one-thousandths,

etc., = 110 divided by 729.

Such results are very easy to obtain.
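Both closed forms are easy to confirm numerically. A sketch, evaluating each series at x = 1/10:

```python
# Check the two identities derived above at x = 1/10:
#   sum n x^n    = x / (1 - x)^2        -> 10/81
#   sum n^2 x^n  = x (1 + x) / (1 - x)^3 -> 110/729
x = 0.1
s1 = sum(n * x**n for n in range(1, 60))
s2 = sum(n**2 * x**n for n in range(1, 60))
print(abs(s1 - 10/81) < 1e-12, abs(s2 - 110/729) < 1e-12)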

7:54

You will undoubtedly recall that Taylor series can be useful in defining new

functions.

Consider the Fresnel Integrals,

C(x) = the integral of cosine of t squared, dt, as t goes from 0 to x.

And a complementary Fresnel integral,

S(x) that is the same but with sine instead of cosine.

These functions are very useful in optics and diffraction.

But, it's not so easy to come up with an antiderivative for

sine or cosine of t squared.

That's why these functions have special names.

Now you could try to graph these functions.

You would notice that they're oscillatory, as is expected from their form.

But how do you really get your hands on them?

Well, one way to do so would be to use a Taylor series.

Expand out cosine(t squared) or sine(t squared) using the familiar formulae.

And then, integrating term by term, substituting in the values

from 0 to x, gives us power series formulae for

these functions that are extremely useful, especially for smaller values of x.
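Here is a sketch of that procedure for C(x): expand cosine of t squared as sum of (-1) to the n, t to the 4n, over (2n) factorial, integrate term by term, and cross-check against a crude midpoint-rule approximation of the integral (the quadrature routine here is my own stand-in, not a standard library function):

```python
import math

# Term-by-term integration of cos(t^2) = sum (-1)^n t^(4n) / (2n)!
# gives C(x) = sum (-1)^n x^(4n+1) / ((4n+1) (2n)!).
def fresnel_C(x, terms=20):
    return sum((-1)**n * x**(4*n + 1) / ((4*n + 1) * math.factorial(2*n))
               for n in range(terms))

# Crude midpoint-rule approximation of the integral, for comparison:
def fresnel_C_numeric(x, steps=100000):
    h = x / steps
    return sum(math.cos(((k + 0.5) * h)**2) * h for k in range(steps))

print(abs(fresnel_C(1.0) - fresnel_C_numeric(1.0)) < 1e-6)
```

As the lecture notes, the series converges especially quickly for smaller values of x.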

9:17

We know enough now to answer the question of when a Taylor series converges,

but there's one question we have not answered.

How do we know that what the Taylor series converges to

is actually the function you began with?

Let's say that we've expanded f(x) about x equals a.

Then how do we know that that Taylor expansion converges to f?

Well, it certainly must be the case that f is smooth,

that is, all of the derivatives at a exist.

Secondly, x must be within the domain of convergence, that is, within R of a. And when the Taylor series does converge back to f, we call f real-analytic.

10:33

Some functions are not real-analytic.

Consider the following example.

Let f(x) be defined as e to the -1 over x for

x strictly positive and 0 otherwise.

Now, your first clue that something is funny is,

if we write out the series definition for

e to the -1 over x, we don't get a power series.

We get a power series in 1 over x.

That's gonna give us problems when x is near zero.

So there's no way that we're going to be able to compute the derivatives of this

function by looking at the coefficients of the series.

So, let's do it the old fashioned way.

Let's compute the derivative of f at 0 by taking the limit as

h goes to 0 of f(h) - f(0), over h.

If we do this limit from the right, then we can evaluate f(h) as

e to the -1 over h.

Now, f(0) is, of course, 0.

And so we're left with the limit as h goes to 0 from the right

of e to the -1 over h divided by h.

If we change variables and let t be 1 over h, then this is the same thing

as the limit as t goes to infinity of t times e to the -t.

Now, that limit we know, because exponential beats polynomial.

This is 0.

So I claim that with a little bit more work using a similar approach,

you can show that all of the derivatives of f at 0 are exactly 0.

That means that if you take the Taylor expansion of this function,

it exists, and it is precisely 0.

Every single coefficient in the Taylor series is 0.

However, the function itself is positive for

values of x that are strictly bigger than 0.

This is a smooth function, but it is not real-analytic.
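A small sketch makes the failure vivid: the difference quotient f(h)/h collapses to 0 as h goes to 0 from the right, consistent with f'(0) = 0, and yet the function itself is strictly positive just to the right of 0:

```python
import math

# The smooth-but-not-analytic example: f(x) = e^(-1/x) for x > 0, else 0.
def f(x):
    return math.exp(-1.0 / x) if x > 0 else 0.0

# The difference quotient f(h)/h -> 0 as h -> 0+, matching f'(0) = 0:
print([f(h) / h for h in (0.1, 0.01, 0.001)])  # rapidly shrinking

# Yet the function is strictly positive to the right of 0:
print(f(0.1) > 0)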

This leads us to an image of the universe of functions,

beginning with the simplest functions.

At the very core of this universe lie the polynomials.

These are themselves divided or graded into different realms,

beginning with the constants and then the first order polynomials,

the quadratics, the cubics, etc.

Each filling out, degree by degree, larger and

larger subspaces of simple functions.

But this is not all there is,

since polynomials have power series that eventually terminate.

Beyond the space of polynomials lie those functions whose

Taylor series exist and converge to the functions.

That is, the real-analytic functions, like e to the x, sine of x, cosine of x,

all those beautiful functions we've been working with all term.

However, beyond these still lie

other functions, more mysterious functions,

which Taylor expansion does not suffice to describe.

One normally doesn't run into such things.

That's why we haven't focused too much attention on them.

But you should know that they lie out beyond the real-analytics.

This leads us to our last image of what Taylor expansion really is.

Taylor expansion can be thought of as projection to

the space of polynomials, where computing a Taylor

polynomial is a projection to one of these finite subspaces.

Throwing away the higher-order terms really is a form of projection.