0:10

Like Odysseus after his long wanderings, we finally come back home to where we began, with Taylor series. Having seen a lot of strange sights along the way, we come back wiser, with more experience, all the better for handling whatever challenges we might face here at the end.

In our last lesson, we explored power series, that is, series of the form sum over n of a sub n times x to the n. We considered this as an operation for turning a sequence into a function, f(x). Now the question arises: can one invert this procedure and go from the function to the sequence? Well, of course, you already know the answer to this. It is yes; we've been doing this all semester. The answer is, of course, Taylor series. Taylor expansion is really a means of turning a function into a sequence of coefficients which, when reconstituted into a power series, gives you the original function back. In a Taylor expansion about, say, x = 0, we know that these coefficients are the nth derivatives of f at 0 divided by n factorial.
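As a small sketch of that back-and-forth (the helper names here are my own, and I am assuming f(x) = e to the x, whose derivatives at 0 are all 1):

```python
import math

# Sketch: for f(x) = e^x, every derivative at 0 equals 1,
# so the Taylor coefficients about 0 are a_n = f^(n)(0) / n! = 1 / n!.
def taylor_coeff_exp(n):
    return 1.0 / math.factorial(n)

# Reconstituting the coefficients into a power series sum a_n x^n
# should give the original function back.
def reconstruct(x, terms=20):
    return sum(taylor_coeff_exp(n) * x**n for n in range(terms))

print(reconstruct(1.0), math.e)  # the partial sum closely matches e
```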

But now we can return to the issue of convergence of Taylor series. Recall that at the beginning of this course I said: don't worry about convergence too much; just learn how to use Taylor series, and we'll get to issues of convergence later. Well, it is now later, and it's time to worry about convergence of Taylor series. Now, though, we can say something definite.

The following theorem is crucial. If we have a power series of the form sum over n of a sub n times x to the n, and we call that f(x) (perhaps this is a Taylor expansion about zero), then f converges absolutely for values of x that are within R of zero, where R, the radius of convergence, is the limit as n goes to infinity of a sub n over a sub n+1 in absolute value. If this were a Taylor expansion about zero, we see that what really matters is how quickly the derivatives at zero grow and get large. The ratio of subsequent derivatives controls the convergence radius.
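Here is a quick numerical sketch of that ratio formula (the function names are mine; I use two series whose coefficients we know in closed form):

```python
import math

# Estimate R = lim |a_n / a_(n+1)| by evaluating the ratio at a large fixed n.
def radius_estimate(a, n=50):
    return abs(a(n) / a(n + 1))

# Geometric series: a_n = 1 for all n, so the ratio is exactly 1 and R = 1.
print(radius_estimate(lambda n: 1.0))
# Exponential series: a_n = 1/n!, so the ratio is n + 1; it grows without
# bound, reflecting an infinite radius of convergence.
print(radius_estimate(lambda n: 1.0 / math.factorial(n)))
```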

Â 3:07

Now, more importantly, when you are on this interval of convergence from -R to R, not only does your function converge, but it is differentiable, and the derivative is what you obtain by differentiating the series term by term. Moreover, f is integrable, and the integral within this domain of convergence is exactly what you would get if you integrate the series term by term, as long as you don't forget the constant of integration. What's so important about a result like this? Well, it tells us exactly what the convergence domain is, and it allows us to manipulate the series.

Â 3:59

Let's see how that's useful. We have claimed in the past what the Taylor series for arctangent looks like. Knowing the derivative of arctangent, we can express it as the integral of dx over 1 + x squared. Using the geometric series, we see that within the domain of absolute convergence, arctangent is really the integral of the sum over n of quantity (-x squared) to the n. Now, if we expand that out and then integrate term by term, we get arctangent as the sum, n going from 0 to infinity, of -1 to the n over 2n + 1, times x to the 2n + 1. There's a constant of integration, but it is equal to zero, since arctangent of zero is zero.

Â 5:10

That works within the domain of convergence from -1 to 1, based on the geometric series.
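A small sanity check, comparing partial sums of this series against the built-in arctangent (the helper name is mine):

```python
import math

# Partial sums of sum_{n>=0} (-1)^n x^(2n+1) / (2n+1), valid for |x| < 1.
def arctan_series(x, terms=100):
    return sum((-1)**n * x**(2*n + 1) / (2*n + 1) for n in range(terms))

# Inside the interval of convergence, the match is excellent.
print(arctan_series(0.5), math.atan(0.5))
```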

Now, what happens at the endpoints? Well, when x = 1, the series would seem to suggest that you get pi over 4 as the alternating sum 1 minus one-third plus one-fifth minus one-seventh, and so on. Does that actually converge? Well, yes it does, by the alternating series test. However, right at the boundary this convergence is only conditional.
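One can watch that slow, conditional convergence numerically (a sketch; the helper name is mine):

```python
import math

# Partial sums of 1 - 1/3 + 1/5 - 1/7 + ..., which converge to pi/4
# only conditionally: the error after N terms is roughly of size 1/N.
def leibniz_partial(n_terms):
    return sum((-1)**n / (2*n + 1) for n in range(n_terms))

# The error shrinks, but painfully slowly compared to |x| < 1.
for n in (10, 100, 1000):
    print(n, abs(leibniz_partial(n) - math.pi / 4))
```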

Â 5:49

There's really no end to what one can do with this result. Let's begin with the geometric series and differentiate it term by term. When we do so, we obtain the sum over n of n times x to the n-1 equals 1 over (1 - x) quantity squared. Now, let's multiply through by x on both sides. That takes us back to a simpler-looking power series, the sum over n of n times x to the n, and we see that that equals x over (1 - x) quantity squared.

Â 6:31

If we evaluate this at a particular x, let's say x = one-tenth, what do we get? We get that this sum, one-tenth plus two one-hundredths plus three one-thousandths, etc., is really equal to the right-hand side evaluated at one-tenth, that is, 10 over 81. This has an interesting sort of decimal expansion. It's somewhat surprising that this discrete integral has such a simple answer, but of course we could keep going. We could differentiate once more and then multiply through again by x, obtaining the result that n squared times x to the n, summed over n, yields x times (1 + x) over (1 - x) cubed. If we again evaluate this at one-tenth, then we get the not-very-intuitive answer that one-tenth plus four one-hundredths plus nine one-thousandths, etc., equals 110 divided by 729. Such results are very easy to obtain.
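Both closed forms are easy to check with exact rational arithmetic (a sketch; the variable names are my own):

```python
from fractions import Fraction

x = Fraction(1, 10)

# sum n x^n = x / (1 - x)^2, evaluated at x = 1/10, should give 10/81;
# sum n^2 x^n = x (1 + x) / (1 - x)^3 should give 110/729.
s1 = sum(n * x**n for n in range(1, 200))
s2 = sum(n**2 * x**n for n in range(1, 200))

print(float(Fraction(10, 81)))  # 0.123456790..., the curious decimal
print(abs(s1 - Fraction(10, 81)) < Fraction(1, 10**100))     # True
print(abs(s2 - Fraction(110, 729)) < Fraction(1, 10**100))   # True
```

The partial sums agree with the closed forms to better than a hundred decimal places, since the neglected tail terms are of size roughly n over 10 to the n.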

Â 7:54

You will undoubtedly recall that Taylor series can be useful in defining new functions. Consider the Fresnel integrals: C(x), the integral of cosine of t squared as t goes from 0 to x, and the complementary Fresnel integral S(x), which is the same but with sine instead of cosine. These functions are very useful in optics and diffraction. But it's not so easy to come up with an antiderivative for sine or cosine of t squared; that's why these functions have special names. Now, you could try to graph these functions. You would notice that they're oscillatory, as is expected from their form. But how do you really get your hands on them? Well, one way to do so would be to use a Taylor series. Expand out cosine of t squared or sine of t squared using the familiar formulae, and then, integrating term by term and substituting in the values from 0 to x, we get power series formulae for these functions that are extremely useful, especially for smaller values of x.
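Here is a sketch of that approach for C(x) (the names and the crude numerical cross-check are my own). Since cosine of u is the sum of (-1) to the n times u to the 2n over (2n) factorial, substituting u = t squared and integrating term by term gives C(x) as the sum of (-1) to the n times x to the 4n+1 over (4n+1) times (2n) factorial:

```python
import math

# Term-by-term integration of cos(t^2) = sum (-1)^n t^(4n) / (2n)!.
def fresnel_C(x, terms=30):
    return sum((-1)**n * x**(4*n + 1) / ((4*n + 1) * math.factorial(2*n))
               for n in range(terms))

# Crude midpoint-rule integral of cos(t^2) from 0 to x, for comparison.
def fresnel_C_numeric(x, steps=100_000):
    h = x / steps
    return sum(math.cos(((k + 0.5) * h)**2) * h for k in range(steps))

print(fresnel_C(1.0), fresnel_C_numeric(1.0))  # the two values agree closely
```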

Â 9:17

We know enough now to answer the question of when a Taylor series converges, but there's one question we have not answered: how do we know that what the Taylor series converges to is actually the function you began with? Let's say that we've expanded f(x) about x = a. How do we know that that Taylor expansion converges to f? Well, it certainly must be the case that f is smooth, that is, all of the derivatives at a exist. Secondly, x must be within the domain of convergence, that is, within R of a. Functions whose Taylor series converge to the function itself are called real-analytic.

Â 10:33

Some functions are not real-analytic. Consider the following example. Let f(x) be defined as e to the -1 over x for x strictly positive, and 0 otherwise. Now, your first clue that something is funny: if we write out the series definition for e to the -1 over x, we don't get a power series; we get a power series in 1 over x. That's going to give us problems when x is near zero. So there's no way that we're going to be able to compute the derivatives of this function by looking at the coefficients of the series. So, let's do it the old-fashioned way. Let's compute the derivative of f at 0 by taking the limit as

Â 11:33

h goes to 0 of f(h) - f(0), over h. If we take this limit from the right, then we can evaluate f(h) as e to the -1 over h. Now, f(0) is, of course, 0, and so we're left with the limit as h goes to 0 from the right of e to the -1 over h, divided by h. If we change variables and let t be 1 over h, then this is the same thing as the limit as t goes to infinity of t times e to the -t. That limit we know, because exponential beats polynomial: it is 0. I claim that with a little bit more work, using a similar approach, you can show that all of the derivatives of f at 0 are exactly 0.
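You can watch the first of these limits collapse numerically (a sketch; the function name is mine):

```python
import math

# For h > 0, the difference quotient (f(h) - f(0)) / h equals e^(-1/h) / h.
def diff_quotient(h):
    return math.exp(-1.0 / h) / h

# As h shrinks to 0 from the right, the quotient rushes to 0,
# even though f(h) itself stays strictly positive.
for h in (0.5, 0.1, 0.01):
    print(h, diff_quotient(h))
```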

That means that if you take the Taylor expansion of this function, it exists, and it is precisely 0: every single coefficient in the Taylor series is 0. However, the function itself is positive for values of x strictly bigger than 0. This is a smooth function, but it is not real-analytic.

This leads us to an image of the universe of functions, beginning with the simplest functions. At the very core of this universe lie the polynomials. These are themselves divided, or graded, into different realms, beginning with the constants, then the first-order polynomials, the quadratics, the cubics, etc., each filling out, degree by degree, larger and larger subspaces of simple functions. But this is not all there is, since polynomials have power series that eventually terminate. Beyond the space of polynomials lie those functions whose Taylor series exist and converge to the functions: the real-analytic functions, like e to the x, sine of x, cosine of x, all those beautiful functions we've been working with all term. However, beyond these still lie other, more mysterious functions, for which Taylor expansion is not sufficient to describe them. One normally doesn't run into such things; that's why we haven't focused too much attention on them. But you should know that they lie out beyond the real-analytics.

This leads us to our last image of what Taylor expansion really is. Taylor expansion can be thought of as projection to the space of polynomials, where computing a Taylor polynomial is a projection to one of these finite subspaces. Throwing away the higher-order terms really is a form of projection.