0:00

Welcome to calculus.

I'm Professor Ghrist, and we're about to begin Lecture 27 on improper integrals.

The fundamental theorem of integral calculus is great, but it's not without its limitations.

In this lesson, we'll consider what happens when we encounter a difficulty with limits in a definite integral.

The fundamental theorem of integral calculus is great, but it does have its limitations.

There are a few things that you must be careful about.

0:49

Within this statement lie two dangers.

The first is that of continuity.

A discontinuous integrand can cause problems.

Here's an example.

Consider the integral as x goes from -1 to 1 of (1/(x squared)) dx.

If we simply apply the fundamental theorem, take the antiderivative, negative 1 over x, and evaluate that at 1, what do we get?

-1.

Then we subtract what we get when we evaluate at -1.

-1 - 1 = -2.

Perfect.

Except for the fact that our integrand is 1 over x squared.

All of its values are positive, and if we go to the definition of the definite integral as a Riemann sum, adding up the values, there's no way that we can add up positive values to get a negative integral.

The problem is this integrand is not continuous.

It's not even defined at x equals zero.
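To see the contradiction concretely, here's a quick numeric sketch (my own illustration, not from the lecture; the mesh sizes are arbitrary): a midpoint Riemann sum for 1/x squared on [-1, 1] is always positive, and refining the mesh only makes it larger.

```python
def riemann_sum(f, a, b, n):
    """Midpoint Riemann sum of f on [a, b] with n subintervals."""
    dx = (b - a) / n
    return sum(f(a + (i + 0.5) * dx) for i in range(n)) * dx

# Integrand 1/x^2: every sampled value is positive, so no Riemann sum
# can ever approach the naive antiderivative answer of -2.
f = lambda x: 1.0 / x**2

coarse = riemann_sum(f, -1.0, 1.0, 100)
fine = riemann_sum(f, -1.0, 1.0, 10_000)
print(coarse, fine)  # both positive; the finer sum is far larger
```

Both sums are positive, and the sums blow up rather than settle near -2 as the mesh shrinks past the singularity; that's the divergence hiding behind the bogus computation.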

The second hypothesis is that of having an interval from a to b, a finite interval.

Consider the integral as x goes from negative infinity to positive infinity of (2x/(1+x squared)) dx.

Clearly, the antiderivative is log of (1+x squared).

What happens when we evaluate this at the limits?

You might think we get infinity minus infinity, which is not defined.

Or you might think, well, this is an odd integrand over a symmetric domain, therefore it must be zero.

Which is correct?

These improper integrals are dangerous.

3:04

In all cases, we're going to use the technique of taking a limit to make sense of these integrals.

There are two cases.

The first we might call a blow-up.

This is what happens when you have an integrand that is not well defined at some point, let's say at one of the endpoints, a.

In this case, the way to evaluate the integral is to integrate from some constant, t, to the other endpoint, and then take a limit as t goes to the singular input.
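As a sketch of this limiting procedure (my own example; the integrand 1/sqrt(x) is not the one from the lecture), take f(x) = 1/sqrt(x) on (0, 1], which blows up at the left endpoint. Its antiderivative is 2 sqrt(x), so the truncated integral from t to 1 is 2 - 2 sqrt(t), and the limit as t goes to 0 from the right is 2.

```python
import math

def truncated_integral(t):
    """Integral of 1/sqrt(x) from t to 1, via the antiderivative 2*sqrt(x)."""
    return 2.0 * math.sqrt(1.0) - 2.0 * math.sqrt(t)

# Send t toward the singular endpoint 0 from the right:
for t in (0.1, 0.001, 1e-8):
    print(t, truncated_integral(t))  # values approach 2
```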

4:14

The following class of examples is crucial to this subject.

These are the p-integrals: the integral of ((1/x) to the p) dx.

Let's consider first the example of a tail singularity, where we integrate, let's say, from x equals one to infinity.

Of course the value is going to depend on p, but let's do all of them at once.

If we integrate (x to the -p) dx, that is easy enough.

That's going to equal, as long as p is not equal to 1, (x to the (1-p))/(1-p).

We need to evaluate this as x goes from one to t, and then take the limit as t goes to infinity.

That is, we're taking the limit as t goes to infinity of (t to the (1-p))/(1-p) - (1/(1-p)).

Now this limit is going to be infinite sometimes.

But when p is bigger than 1, then the first term goes to 0 and we're left with 1/(p-1).

If p is less than one, then that first term dominates and goes to infinity, and we say the integral diverges.

And when p equals one, the antiderivative is log of x, and log of t goes to infinity as t does, so that case diverges as well.
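A quick numeric check of the tail case (an illustrative sketch, not part of the lecture): by the antiderivative, the integral of x^(-p) from 1 to T is (T^(1-p) - 1)/(1-p) for p not equal to 1, and watching it as T grows shows the dichotomy.

```python
def tail_integral(p, T):
    """Integral of x**(-p) from 1 to T, for p != 1."""
    return (T**(1.0 - p) - 1.0) / (1.0 - p)

# p = 2 > 1: values settle toward 1/(p-1) = 1 as T grows.
print(tail_integral(2.0, 1e3), tail_integral(2.0, 1e6))
# p = 1/2 < 1: values keep growing without bound.
print(tail_integral(0.5, 1e3), tail_integral(0.5, 1e6))
```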

6:19

Let's do it the other way and look at a blow-up instead of a tail singularity.

Consider the same integrand, x to the -p, but now integrated as x goes from zero to one.

Well, doing the integral yields the same antiderivative; we just need to evaluate our limits differently.

So, evaluating as x goes from t to one, then taking a limit as t goes to zero from the right.

This will break up into two cases: when p is not equal to one, or when p is equal to one.

6:59

In these cases, again, we get sometimes convergence, sometimes divergence.

But note what is happening to the p's.

When p is bigger than one, we get a divergent integral.

When p is less than one, then the (t to the (1-p))/(1-p) term drops out and we're left with an answer of 1/(1-p).

Again, in the case where p equals one, log of t is not going to be convergent as t goes to zero.
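The blow-up case can be checked the same way (again my own sketch): the integral of x^(-p) from t to 1 is (1 - t^(1-p))/(1-p) for p not equal to 1, and sending t to 0 from the right reverses the roles of p.

```python
def blowup_integral(p, t):
    """Integral of x**(-p) from t to 1, for p != 1."""
    return (1.0 - t**(1.0 - p)) / (1.0 - p)

# p = 1/2 < 1: values settle toward 1/(1-p) = 2 as t -> 0+.
print(blowup_integral(0.5, 1e-4), blowup_integral(0.5, 1e-8))
# p = 2 > 1: values blow up as t -> 0+.
print(blowup_integral(2.0, 1e-4), blowup_integral(2.0, 1e-8))
```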

7:39

Let's summarize what we've found with these p-integrals.

If we integrate along a tail going to infinity, then the p-integral converges when p is strictly bigger than one.

It diverges when p is less than or equal to one.

For a blow-up singularity at zero, these are reversed.

And we get a convergent integral when p is less than one, and a divergent integral when p is bigger than or equal to one.

Note that at the particular value of one, it's always divergent, no matter whether you're going from zero to one or from one to infinity.

Now, you don't need to remember the actual values of the convergent p-integrals.

But you do need to remember this chart, this listing of when the integral converges and when it diverges.

And the reason you need to remember this is because it will help you determine convergence or divergence of other integrals: integrals whose antiderivatives may not be so easy to compute.

Let's look at an example.

Consider the integral of dx/(square root of (x squared + x)) as x goes from zero to one.

This is a finite domain; however, there is a singularity, or a blow-up, at x equals zero.

So how shall we proceed?

I don't know the antiderivative of this.

It doesn't look like it's going to be terribly easy.

So, let us consider what the leading-order behavior of this integrand is.

We're going to think in terms of Taylor series.

Now, this integrand doesn't have a well-defined Taylor series at x equals zero, since the function is not even defined there, and it blows up.

But notice that if we factor out a square root of x from the denominator, then we're left with 1/(square root of (x+1)) as a factor.

Now let's rewrite that, thinking that we are going to be looking at what happens as x is near zero.

I can write this as (x to the -1/2) times quantity (1+x) to the -1/2.

And this then is helpful, why?

Because the binomial series says that whenever you have (1+x) to the alpha, as x is small, then this is of the form one plus something in big O of x.

Now if we apply that to the (1+x) to the -1/2 term, then we get the integral as x goes from zero to one of (x to the -1/2) times quantity (1 + something in O(x)).

And we see that the leading-order term in this integrand is x to the -1/2.

So I'm going to split this up into two integrals.

The first is a p-integral with p equals 1/2.

The second is an integral whose precise form I haven't written down, but it's something that is in O(square root of x), and indeed, is bounded.

And I'm integrating it over the domain from zero to one.

The integral on the right definitely converges.

What about the integral on the left?

Well, remember, I said you had to memorize some of these.

Let's go back and recall that when p equals one half, a blow-up p-integral converges.

Therefore, we know that this entire integral converges.

We may not know the value, but we know it converges.
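We can corroborate this numerically (a rough sketch with arbitrary mesh sizes, not a proof): truncate the integral at t and watch the values stabilize as t shrinks toward the singularity.

```python
import math

def riemann(f, a, b, n):
    """Midpoint Riemann sum of f on [a, b] with n subintervals."""
    dx = (b - a) / n
    return sum(f(a + (i + 0.5) * dx) for i in range(n)) * dx

f = lambda x: 1.0 / math.sqrt(x * x + x)

# Integrate from t to 1 for shrinking t: successive values change
# less and less, consistent with a convergent blow-up integral.
vals = [riemann(f, t, 1.0, 200_000) for t in (1e-3, 1e-5, 1e-7)]
print(vals)
```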

12:10

Now, what happens if we take that same integrand and integrate as x is going to infinity?

For very large x, we need to consider that x squared term in the denominator as the lead.

Therefore, factoring out, we get (1/(square root of x squared)) times (1/(square root of 1+(1/x))).

To simplify that a little bit, we see that we have again something to which the binomial series applies.

Namely, (1+(1/x)) to the -1/2.

Expanding that out gives (1 + something in O(1/x)).

And now, splitting this up into two integrals, we see that the leading-order term is a p-integral with p equals one.

And I don't think I need to remind you that when p equals one, you always have divergence.

Therefore, because one piece of this integral diverges, the entire integral diverges.

Same integrand, different behavior as you go to infinity.
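And the tail version of the same comparison (the same rough numeric sketch as before): truncate at larger and larger T and watch the values climb roughly like log T instead of leveling off.

```python
import math

def riemann(f, a, b, n):
    """Midpoint Riemann sum of f on [a, b] with n subintervals."""
    dx = (b - a) / n
    return sum(f(a + (i + 0.5) * dx) for i in range(n)) * dx

f = lambda x: 1.0 / math.sqrt(x * x + x)

# Integrate from 1 to T for growing T: the values keep climbing,
# consistent with a divergent tail integral (leading term 1/x).
v1 = riemann(f, 1.0, 1e2, 200_000)
v2 = riemann(f, 1.0, 1e4, 200_000)
v3 = riemann(f, 1.0, 1e6, 200_000)
print(v1, v2, v3)
```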

13:52

Let's see what happens in this case, the integral of (2x/(1+x squared)) as x goes from negative infinity to infinity.

We already know that the antiderivative is log of quantity (1 + x squared).

And if we evaluate that at negative T and T, well, we get something that cancels to zero.

And so it would seem as though our limit is zero.

That is not correct, and we have failed to be careful.

What do we need to do?

We need to take two independent limits.

One, as, say, s goes to negative infinity, of the integral from s to zero.

The other, as T goes to positive infinity, of the integral from zero to T.

When we have two tails, we need two limits.

And I don't think it's too hard to see what the behavior of each of these is going to be.

If we consider what happens when x gets very large, then the leading-order term is 2/x.

And what is left over is 1/(1 + (1/(x squared))).

That means that, using, say, the geometric series, we see that the leading-order term is a p-integral with p equals one.

That connotes divergence.

And this means that each of these limiting integrals diverges.

And hence the net integral does not converge to zero; it diverges.
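Here's the cancellation trap made explicit (notation mine): with the antiderivative F(x) = log(1 + x squared), the symmetric truncation F(T) - F(-T) is identically zero, yet each one-sided limit diverges, and an asymmetric truncation can be steered toward other values entirely.

```python
import math

def F(x):
    """Antiderivative of 2x/(1+x^2)."""
    return math.log(1.0 + x * x)

# Symmetric truncation always cancels exactly to zero...
print(F(1e3) - F(-1e3))  # 0.0
# ...but each tail on its own grows without bound:
for T in (1e2, 1e4, 1e6):
    print(T, F(T) - F(0.0))
# ...and an asymmetric truncation, from -T to 2T, limits to log 4:
print(F(2e6) - F(-1e6))  # close to log(4)
```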
