The expression stretched to the limit is apt in calculus. We push functions all the way to the boundaries of possibility. The concept of a limit captures precisely the notion of accessing values that at first sight appear to be forbidden or out of bounds. In this video, we look at a range of phenomena that involve getting closer and closer to some kind of ideal point or value. We also make explicit some examples and notation involving infinities, which encapsulate the idea of being free to go on and on and on. How many times have we said ad infinitum in earlier videos? Slopes of tangent lines to curves are a prototype for this idea of an idealized value. Imagine you inhabit a typical smooth curve, it's your universe, and you're only able to access points on the curve to create secants. You might never be able to jump off the curve onto the tangent line. But the way the secants behave near the point of interest may be enough to tell you what's happening out there on the tangent line. In the last video, we demonstrated by pure thought and manipulation of symbols that the slopes of secants to the parabola y equals x squared approach the slope of the tangent line, which happens to be two times x at the point with input x, an amazing achievement. We realized that the underlying method works for any smooth curve, leading to the notion of a limit in this notation. Lim is an abbreviation for limit and refers to some limiting value or behavior that is being approached. Let's discuss two similar-looking but contrasting examples: f of x equals x squared minus one divided by x minus one, and g of x equals x squared minus two divided by x minus one. These are both examples of rational functions, ratios of polynomials. In both cases, x equals one creates a problem in the denominator, because trying to evaluate them with x equals one means dividing by zero, which is forbidden. Let's do some exploration for x close to one. f of 1.1, you can check, evaluates to 2.1.
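If you'd like to check these evaluations yourself, here is a minimal sketch of the exploration for f near x equals one; the function name f is simply our own label for the rule from the lecture.

```python
# Numerical exploration of f(x) = (x^2 - 1)/(x - 1) for inputs near x = 1.
# Note: x = 1 itself is a forbidden input (division by zero).

def f(x):
    return (x**2 - 1) / (x - 1)

# Inputs creeping towards 1 from above and below.
for x in [1.1, 1.01, 0.9999]:
    print(f"f({x}) = {f(x)}")
```

The printed values hover ever closer to 2 as the inputs close in on 1, matching the exploration in the lecture.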
f of 1.01 evaluates to 2.01, and f of 0.9999, taking an input very close to, but just a tiny bit less than, one, evaluates to 1.9999. We seem to be getting closer and closer to two by taking our inputs closer and closer to one. Now, let's try the same thing using the rule for g, which at first sight appears to be only slightly different. g of 1.1, you can check, evaluates to negative 7.9, g of 1.01 evaluates to nearly negative 98, a largish negative number, and g of 0.9999 evaluates to over 10,000, a very large positive number. The behavior of g for inputs near x equals one looks wild and unpredictable, compared with the apparently stable behavior of f. Let's try to understand why the values of f appear to be converging to two. In the rule for f, the numerator factorizes as x plus one times x minus one, and we get cancellation, and the entire fraction simplifies to x plus one. Throughout, we're assuming that x doesn't equal one. So, the rule y equals f of x is almost exactly y equals x plus one, the rule of a straight line. The only thing missing is a tiny hole, because the rule for f prohibits an input of x equals one. Observe that as x approaches one from either side, we slide up and down the line approaching the hole, while at the same time, the values of y approach two on the y-axis. We use limit notation and say the limit of f of x as x approaches one is two. This confirms what we anticipated from our exploration. So, what about this wild function g? Let's try to simplify the fraction g of x by doing a long division of polynomials, dividing x minus one into x squared minus two. In a few short steps, we have the quotient x plus one and a remainder of minus one. Hence, the fraction g of x can be rewritten as the quotient x plus one plus the remainder minus one divided by x minus one. Notice that the first part, x plus one, is the rule for the line that we saw before in analyzing f. We're also taking away a piece, one over x minus one, which is the rule for a hyperbola.
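The same kind of numerical check works for g, and we can also confirm the long-division rewrite by comparing the two forms of the rule directly; the names g and g_rewritten are our own labels for the original fraction and the quotient-plus-remainder form.

```python
# g(x) = (x^2 - 2)/(x - 1), and its long-division rewrite:
# g(x) = (x + 1) - 1/(x - 1), valid whenever x != 1.

def g(x):
    return (x**2 - 2) / (x - 1)

def g_rewritten(x):
    return (x + 1) - 1 / (x - 1)

# Near x = 1 the values swing wildly, yet the two forms always agree.
for x in [1.1, 1.01, 0.9999]:
    print(x, g(x), g_rewritten(x))
```

The wild swings, from about negative 7.9 down to nearly negative 98, then up past 10,000, are exactly the unpredictable behavior noted above, while the agreement of the two columns confirms the division.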
So, the rule for g involves a line and a hyperbola. Let's focus on one over x minus one, which is closely related to the simple hyperbola y equals one over x. To get the graph of y equals one over x minus one, you translate the hyperbola for one over x to the right by one unit, and note that the line x equals one becomes the vertical asymptote. As x approaches one from above, using this arrow notation with a little plus sign as a superscript, we see one over x minus one shoot off towards infinity, by which we mean getting arbitrarily large and positive. As x approaches one from below, this time with a little minus sign as a superscript, we see one over x minus one shoot off towards minus infinity, by which we mean getting arbitrarily large and negative. So, we have these two contrasting behaviors of one over x minus one expressed using limit notation and the infinity and minus infinity symbols, which are notation for getting arbitrarily large and positive or negative. Let's combine these with the overall rule for g. As x approaches one from above, x plus one approaches one plus one equals two, which is straightforward, and one over x minus one becomes arbitrarily large and positive, so that when we take this away from something close to two, we get something large and negative. We capture this using limit notation. By contrast, as x approaches one from below, again, x plus one approaches two. But this time, one over x minus one becomes arbitrarily large and negative, so that now, when we take this away, we get something large and positive, and we capture this using limit notation. So, we have these two concise descriptions of the behavior of g of x as x approaches one either from above or below. But there's more information available from the hyperbola. As x gets large, positively or negatively, one over x minus one approaches zero. We can see the overall effect on the rule for g of x.
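The two one-sided behaviors can be seen numerically as well; here is a small sketch, with h as our own label for the rule one over x minus one.

```python
# One-sided behavior of h(x) = 1/(x - 1) near the vertical asymptote x = 1.

def h(x):
    return 1 / (x - 1)

# Approaching 1 from above: h shoots off towards +infinity.
for x in [1.1, 1.01, 1.001]:
    print(x, h(x))

# Approaching 1 from below: h shoots off towards -infinity.
for x in [0.9, 0.99, 0.999]:
    print(x, h(x))
```

Each step closer to 1 multiplies the output by roughly ten, positively from above and negatively from below, which is the contrasting pair of behaviors the one-sided limit notation captures.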
For large positive or negative x, g of x becomes approximately x plus one, because the piece, minus one over x minus one, is close to zero. The rule for g becomes more and more identical to the rule for the line y equals x plus one, the further you are away from the origin. Geometrically, that line becomes an oblique asymptote for the curve. We can visualize what happens. Here are the axes, the vertical line x equals one, and the oblique line y equals x plus one, dotted in. All of the asymptotic behavior is captured by this green curve in two pieces or branches, which is the graph of y equals g of x. The line x equals one is the vertical asymptote, and the line y equals x plus one is the oblique asymptote, which becomes a better and better approximation to the curve, the further you are away from the origin. You might be curious how we knew to draw it exactly like this, and why, for example, there are no wriggly bits or strange things happening in between the asymptotic behavior. In the next module, you'll be armed with techniques of curve sketching to determine very precisely the nature of curves like this one, by exploiting the derivative and the second derivative. So, let's return to both our examples f and g. The graph of f, we saw, was in fact a line, but with one point missing. As x approaches one from either side, f of x approaches two, captured by this limit statement, and this confirmed our exploration. By contrast, now that we know about the graph of g, we can understand the wild fluctuations. Two statements using limits describe what happens as we approach one from either side, explaining the large positive and negative values that we saw. Then we have this other interesting phenomenon, that as x gets large, either positively or negatively, the curve is approximated by the straight line y equals x plus one, which forms an oblique asymptote. There are a number of important limits involving infinity symbols.
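The oblique asymptote can also be checked numerically: the gap between the curve and the line y equals x plus one should shrink towards zero as we move away from the origin. A quick sketch, with g again our own label for the rule:

```python
# The gap g(x) - (x + 1) equals -1/(x - 1), which shrinks to 0 as |x| grows,
# so the line y = x + 1 is an oblique asymptote for the curve.

def g(x):
    return (x**2 - 2) / (x - 1)

for x in [10, 100, 1000, -10, -100, -1000]:
    print(x, g(x) - (x + 1))
```

At x equal to plus or minus 1000, the curve sits within about a thousandth of the line, and the gap keeps shrinking the further out you go.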
If you take the simplest hyperbola, the graph of y equals one on x that we used earlier, then one on x approaches zero as x gets arbitrarily large, positively or negatively, captured by these two statements in limit notation, and the x-axis becomes a horizontal asymptote. By contrast, if x approaches zero from above, one on x gets arbitrarily large and positive, or if x approaches zero from below, one on x gets arbitrarily large and negative, captured by these two statements in limit notation, and the y-axis becomes the vertical asymptote. If we look at the natural exponential function, we see that e to the x gets arbitrarily large and positive as x does, captured by this statement in limit notation. And notice that we're using the infinity symbol as a value for the limit even though it's not a number. It's just notation, and says that the values become arbitrarily large without bound. From the curve, you can see that as x gets arbitrarily large and negative, the value of e to the x gets closer and closer to zero, captured by this limit statement, and the x-axis becomes a horizontal asymptote. The contrasting behaviors of e to the x as we move away from the origin in each direction along the x-axis are captured concisely by the limit notation. If we reflect this curve in the y-axis, we get the curve y equals e to the minus x, and we get the corresponding limit statements, which are mirror images of the first two. If we reflect the curve y equals e to the x in the line y equals x, then we invert the function and get the natural logarithm. As x gets large and positive, so does ln of x, but, in fact, very slowly. This is called logarithmic growth, in stark contrast to exponential growth. Instead of a rapid explosion, the natural logarithm grows at a snail's pace, and even then, a very, very, almost unimaginably slow snail.
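These contrasting behaviors of the exponential and the logarithm are easy to taste numerically; here is a small sketch using Python's standard math module.

```python
import math

# e^x explodes as x grows, and sinks towards 0 as x gets large and negative.
print(math.exp(50))    # enormous: already beyond 10 to the 21
print(math.exp(-50))   # minuscule: well below 10 to the -21

# ln x grows, but at a snail's pace: even at a million it is still small,
# and near 0 from above it plunges towards minus infinity.
print(math.log(10**6))
print(math.log(1e-9))
```

Notice the asymmetry: pushing x from 50 to a million barely nudges the logarithm, while the exponential at 50 is already astronomically large, which is the stark contrast between exponential and logarithmic growth described above.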
Finally, we note that as x approaches zero from above, ln of x gets arbitrarily large and negative, expressed using this one-sided limit, and the negative half of the y-axis becomes a vertical asymptote. We have introduced so many different ideas, all unified by the concept of a limit, which often captures or summarizes some kind of ideal value or behavior in the faraway distance, or when getting arbitrarily close to some prohibited input. All of these can be expressed concisely using limit notation, and various tricks using the infinity and minus infinity symbols to indicate getting arbitrarily large positively or negatively, and plus and minus symbols used as superscripts to indicate whether we're approaching something from the right or left, from above or below. We saw examples of rational functions, where you can have, for example, a complicated-looking rule reduce simply to a straight line with just one point missing. By contrast, another function with almost the same rule had much more complicated behavior, involving an oblique asymptote. Please read the notes, and when you're ready, please attempt the exercises. Thank you very much for watching, even if you're also stretched to the limit by all of this new material. I look forward to seeing you again soon.