In this lesson, we'll turn our attention from the indefinite integral, a class of functions, to the definite integral, a numerical quantity. No doubt you've seen definite integrals before. But do you remember how they're defined and what they really mean? This is one of those concepts that takes a few readings to really sink in. In this lesson, we'll give you a fresh look at the definite integral.

This lesson is all about adding larger and larger numbers of smaller and smaller local amounts into some global sum. That's not too unusual a thing to do, so let's do so in the context of a simple classical example.

Compute the sum, i goes from 1 to n, of i. That is, 1 plus 2 plus 3, all the way up to n. Now, we could think about this a bit more globally or geometrically by representing each i as a column of i squares, each with side length 1. The net sum then looks something like a triangle with base n and height n, but discretized into these squares.

What is the sum, represented as an area? Well, the area of the triangle would be one half n times n. But this ignores a few small leftover triangles, each with area one half. How many are there? Well, of course, there are n such leftover triangles. That yields a net area of one half n times quantity n plus 1.
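As a quick sanity check, here is a small Python sketch (my own illustration, not part of the lecture) comparing the direct sum against the closed-form area formula:

```python
# Compare 1 + 2 + ... + n against the triangle-count formula n(n + 1)/2.
def direct_sum(n):
    return sum(range(1, n + 1))

def closed_form(n):
    return n * (n + 1) // 2

for n in [1, 5, 10, 100]:
    assert direct_sum(n) == closed_form(n)

print(closed_form(100))  # 5050
```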

Now, if we think about what it would take to add up 1 plus 2 plus 3, all the way up to n, these are local additions, or local computations. On the other hand, this general formula, as a function of n, is something more global. And that's really the key intuition behind what we're about to build: the definite integral. The definite integral is a generalization of this kind of reasoning to more difficult or non-linear sums.

The definition of the definite integral is a little bit involved, so stick with me and review again as necessary. We write the integral of f of x dx, as x goes from a to b, as a certain limit. But what is that limit? How do we set it up? Well, first we restrict to the interval from a to b, and then we build a partition. That is, we split this interval up into sub-intervals, p sub 1, p sub 2, all the way up to p sub n, that fill up the domain from left to right. Now, each sub-interval has a width associated to it, called delta x sub i. Within each partition element we choose a point x sub i that is within p sub i. This is called a sampling. Now, it doesn't matter which point you choose; just pick one, one per sub-interval.

Then we first define the Riemann sum to be the sum, as i goes from 1 to n, of f evaluated at the sampling point x sub i, times the width delta x sub i of the partition element. This Riemann sum is often visualized in terms of columns or rectangles sitting over top of the partition. Then, with this in mind, the definite integral is defined to be a limit of Riemann sums, where it's an unusual sort of limit: we take the limit as the partitions get smaller and smaller, that is, as the widths of the partition elements go to zero. Now, you can see that as those widths get smaller and smaller, the dependence on the sampling seems to be less and less important. And indeed, that intuition does hold true.
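To make the definition concrete, here is a small Python sketch (my own illustration, not from the lecture) that computes a Riemann sum over a uniform partition, sampling each sub-interval at its right-hand endpoint:

```python
# Riemann sum of f over [a, b] with n equal sub-intervals,
# sampling each sub-interval at its right-hand endpoint.
def riemann_sum(f, a, b, n):
    dx = (b - a) / n               # width delta x of each partition element
    total = 0.0
    for i in range(1, n + 1):
        x_i = a + i * dx           # sampling point in the i-th sub-interval
        total += f(x_i) * dx       # f at the sample, times the width
    return total

# As n grows, the sums approach the definite integral.
print(riemann_sum(lambda x: x * x, 0.0, 1.0, 10))      # 0.385, a rough estimate
print(riemann_sum(lambda x: x * x, 0.0, 1.0, 10000))   # close to 1/3
```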

Now, there's a little bit of notation that goes into this. First of all, you should notice that that integral sign is really a form of the English letter S, in the same way that the summation sign is a form of the Greek letter sigma. Both connote a sum. So a definite integral is really a sum, and all of the notation associated with it matches the corresponding notation in the Riemann sum, where dx is something like the limit of delta x as delta x goes to zero.

Now, the second thing to note is the limits of integration. One often writes the integral from a to b of f of x dx. I prefer to write the integral, as x goes from a to b, of f of x dx. This tells you exactly which variable you're talking about in terms of the limits. I'm not always going to use that notation, but I will sometimes, and I suggest you do likewise. Sometimes we'll be sloppy and just write the integral from a to b.

Lastly, the variable with which you do the integration is not so important. The integral of f of x dx, as x goes from a to b, is the same as the integral of f of t dt, as t goes from a to b. One could use other symbols; still, what matters is the value of the integral, not the name of the variable with which you integrate. Sometimes we'll just write the integral of f from a to b, if it's clear which variable we mean.

Â 8:02

Now, we need to choose a sampling point, 1 x sub in each p sub i.

For simplicity, let's just choose the right-hand endpoint, i over n. Then the width delta x sub i is a constant 1 over n, because we have a uniform partition; therefore, the definite integral can be expressed as a limit of Riemann sums as n goes to infinity, that is, as the widths go to 0.

What does the Riemann sum look like? It looks like the sum, i goes from 1 to n, of the sampling point x sub i, that is, i over n, times the width 1 over n. Now, what's this limit going to look like? Well, we're summing over i, and n is a constant with respect to that sum; therefore we can factor a 1 over n squared out of the sum, and we're left with the sum, as i goes from 1 to n, of i. And now comes the hard part. Fortunately, we've seen that sum before. What's the sum, i goes from 1 to n, of i? Well, that's really one half n times quantity n plus 1. And now we see that, dividing by n squared, the leading order term in this Riemann sum is one half. Everything else is of higher order in 1 over n, and hence goes to 0 as n goes to infinity. The answer to this definite integral is one half. Indeed, as it must be.

Do notice that the difficult part of this computation was that sum of i, as i goes from 1 to n.
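Numerically, we can watch these Riemann sums approach one half. Here's a quick Python sketch of my own, using the same right-endpoint sampling:

```python
# Right-endpoint Riemann sums for f(x) = x on [0, 1]:
# sum of (i/n) * (1/n) = (1/n^2) * sum of i = (1/n^2) * n(n+1)/2 = (n+1)/(2n).
def riemann_sum_x(n):
    return sum((i / n) * (1 / n) for i in range(1, n + 1))

for n in [10, 100, 1000]:
    print(n, riemann_sum_x(n))
# The sums equal (n + 1) / (2n), which tends to 1/2 as n grows.
```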

Â 9:55

Note also that the definite integral satisfies certain properties. For example, linearity. If you have the integral of the sum of two functions, f and g, then it's really the sum of the integrals. Otherwise said, if you add your two integrands together and then integrate, you get the same thing as if you integrate the pieces and then add them together. This is true at the level of an individual Riemann sum element, and so it's true in the limit. Likewise, if you multiply an integrand f by a scalar c, then the integral is equal to that constant c times the integral of f. Again, otherwise said, you can multiply by a constant and then integrate, or integrate and then multiply by a constant. It doesn't matter; you get to the same place whichever path you take. Again, the reason why this is true is that it's true at the level of Riemann sums, and hence in the limit.

Another important property is that of additivity, which states that if you take the integral of f from a to b and add to it the integral of f from b to c, because those limits match up, you get the integral of f from a to c.

This certainly makes sense at the level of a Riemann sum: you can concatenate these intervals together. We're going to think of it in terms of adding paths together, a perspective that makes sense in the context of orientation. That is, the integral of f from a to b is minus the integral of f from b to a. Now, why does this happen? Well, let's think of it in the following terms: if we were to move the integral from b to a over to the left-hand side of the equation, we would get that the integral from a to b plus the integral from b to a equals 0. Why would that have to be true? Well, from additivity, the limits match up and give us the integral from a to a, which clearly must be 0. That's one way to make sense of this orientation property. Another way to think about it is that we are adding directed paths together, and when you add the same path from a to b with the orientation reversed, it's as if the paths cancel, and you wind up getting the integral over a point, which is 0.
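These properties are easy to spot-check numerically. Here is a small Python sketch of my own (not from the lecture), approximating integrals with right-endpoint Riemann sums and verifying linearity, additivity, and orientation within a small tolerance:

```python
def integral(f, a, b, n=100000):
    # Right-endpoint Riemann sum approximation of the integral of f from a to b.
    # With b < a, dx is negative, so the orientation property falls out automatically.
    dx = (b - a) / n
    return sum(f(a + i * dx) * dx for i in range(1, n + 1))

f = lambda x: x
g = lambda x: x * x
close = lambda u, v: abs(u - v) < 1e-3

# Linearity: the integral of (f + g) equals the integral of f plus the integral of g.
assert close(integral(lambda x: f(x) + g(x), 0, 1),
             integral(f, 0, 1) + integral(g, 0, 1))

# Additivity: integral from 0 to 1, plus integral from 1 to 2, equals integral from 0 to 2.
assert close(integral(f, 0, 1) + integral(f, 1, 2), integral(f, 0, 2))

# Orientation: the integral from a to b is minus the integral from b to a.
assert close(integral(f, 0, 1), -integral(f, 1, 0))

print("linearity, additivity, and orientation all check out")
```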

The last property we'll discuss is that of dominance. It states that if f is a non-negative function, then the integral of f over an interval is also non-negative. From that follows a slightly less obvious result: namely, if you have a function g which is bigger than f, then g minus f is non-negative, which means that the integral of g minus f is non-negative, which by linearity means that if g is bigger than f, then the integral of g is bigger than the integral of f.

So much for the good news.
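Again as a sanity check (my own sketch, reusing the same right-endpoint Riemann sum idea), dominance is easy to observe numerically:

```python
def integral(f, a, b, n=100000):
    # Right-endpoint Riemann sum approximation of the integral of f from a to b.
    dx = (b - a) / n
    return sum(f(a + i * dx) * dx for i in range(1, n + 1))

f = lambda x: x          # f(x) = x
g = lambda x: x + 1      # g(x) = x + 1, so g >= f everywhere

# Since g - f is non-negative, its integral is non-negative...
assert integral(lambda x: g(x) - f(x), 0, 1) >= 0
# ...and therefore, by linearity, the integral of g dominates the integral of f.
assert integral(g, 0, 1) >= integral(f, 0, 1)
print("dominance holds on [0, 1]")
```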

The bad news is we can hardly compute anything with this definition. There are two definite integrals we can compute. We can compute the integral of a constant by, let's say, choosing a uniform partition and then taking the appropriate limit. You can see that you get the constant times the width of the interval. The other integral that we can do is the one that we've done already: the integral of x dx. If we do that over a general interval from a to b, then I'll leave it to you to set up the uniform partition, reduce it to a limit, and then get the answer, which is, as it must be, one half times quantity b squared minus a squared.
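As a check on that formula (a small Python sketch of my own, not from the lecture), a right-endpoint Riemann sum over a uniform partition of the interval from a to b does converge to one half of b squared minus a squared:

```python
# Riemann sum for f(x) = x over [a, b] with a uniform partition,
# sampled at right-hand endpoints.
def riemann_x(a, b, n):
    dx = (b - a) / n
    return sum((a + i * dx) * dx for i in range(1, n + 1))

a, b = 2.0, 5.0
exact = 0.5 * (b * b - a * a)   # 10.5
print(riemann_x(a, b, 100000), exact)
```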

That's about it. There's a little bit more that we can do. For example, suppose we tried to integrate sine of x or cosine of x, not over an arbitrary interval, but over a symmetric interval from negative L to L. Then there are a few things we would observe. For sine, there's a symmetry about the origin, which implies that every time you have a partition element on the right, with, say, a positive value, you get a corresponding partition element on the left with the opposite value. These two will cancel and will give you an integral of 0, because sine of negative x is minus sine of x. For cosine, we can't quite do the same thing, but we have a symmetry about the y axis, which means that every time you have a partition element on the right, it is balanced by a symmetric partition element with the same value of cosine. Therefore, we get a doubling: because cosine of negative x equals cosine of x, we can reduce this integral to one from 0 to L and double it.

This simple example fits a more general pattern. We say that sine is an odd function and cosine is an even function. An odd function is one that has this symmetry about the origin, or a function for which f of minus x is minus f of x. For such a function, the definite integral over a symmetric domain from negative L to L is always 0. Likewise, for an even function, one for which f of minus x is f of x, the integral from negative L to L is twice the integral from 0 to L. Another way to think about odd and even functions is that the odd ones have an odd Taylor series and the even ones have an even Taylor series, all about 0.
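The odd and even symmetry rules are easy to see numerically. Here's a quick Python sketch of my own, again using right-endpoint Riemann sums over a symmetric interval:

```python
import math

def integral(f, a, b, n=200000):
    # Right-endpoint Riemann sum approximation of the integral of f from a to b.
    dx = (b - a) / n
    return sum(f(a + i * dx) * dx for i in range(1, n + 1))

L = 2.0
# Odd function: the integral of sin over [-L, L] is (approximately) 0.
print(integral(math.sin, -L, L))
# Even function: the integral of cos over [-L, L] is twice the integral over [0, L].
print(integral(math.cos, -L, L), 2 * integral(math.cos, 0, L))
```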

Now, in general, you're going to have to be careful. Definite and indefinite integrals are not the same type of object, even though they have similar notation. A definite integral is a number, a limit of sums. The indefinite integral is an anti-derivative, a class of functions. We'll soon see what they have in common.