Okay, we are now at the cusp of what I hope will be our final result in this analysis of the finite element method, best titled "Convergence of the Finite Element Solution." Before I state and prove the result, there is one fact that I need to recall from a few segments ago and generalize. Recall that when we started talking about the analysis of the method, I stated that the H1 norm is equivalent to the energy norm. The way we stated that result was the following: for some function v, c1 times the H1 norm of v is less than or equal to the energy norm of v (in that segment we actually wrote the statement with the squares of the norms, but the two forms are equivalent up to the constants), and the energy norm itself can be bounded from above by some other constant multiplying the H1 norm of v. I believe there was also a question at the time about what exactly we meant by this equivalence and why it might hold, and I said that it rests crucially on the fact that these norms are real, finite quantities, and on the fact that our domains are bounded; using those facts, we can actually prove the result. Now, it emerges that this equivalence between the H1 norm and the energy norm extends, in fact, to the general Hn norm as well: c1 times the Hn norm of v is bounded from above by the energy norm of v, which in turn is bounded from above by some other constant multiplying the Hn norm of v. The reasons this holds are pretty much the same as for the H1 norm. This is the result we are going to use.
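Written out, the norm equivalence we will use reads as follows (c1 and c2 are generic positive constants, exactly as in the earlier segment; the subscript E denotes the energy norm):

\[
c_1 \, \|v\|_{H^n} \;\le\; \|v\|_{E} \;\le\; c_2 \, \|v\|_{H^n}.
\]

The H1 statement from a few segments ago is the special case n = 1.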
All right, so here is our statement, which I'll again present as a theorem. The theorem concerns the Hn norm of the finite element error, and it states that this norm can be bounded from above by some constant, which we denote c bar, times h_e raised to the power alpha, where alpha is the same exponent that came up in our interpolation error estimate, times the Hr norm of u. So this looks very similar to the interpolation error estimate, except that now we are indeed talking about the finite element error, and we remember that this error is e = u^h − u. Now the proof. To start it, let us use the result I just stated at the beginning of the segment, the one just above the theorem. Write c1 times the Hn norm of the finite element error; by that result, this is bounded by the energy norm of the error. Let me reproduce this last statement on the new slide: c1 times the Hn norm of the finite element error is less than or equal to the energy norm of the error. Now, the energy norm of the error itself, as you recall from a couple of segments ago, is bounded by another quantity: the energy norm of ũ^h − u. We called this the best approximation property. Here I want to say something about notation when one is working with inequalities written across multiple lines. The way to read the first line of such a chain is simply that the energy norm of e is greater than or equal to c1 times the Hn norm of e.
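In symbols, the theorem is the following (with u^h the finite element solution, h_e the element size, and k the polynomial order of the basis functions, all as in the preceding segments):

\[
\|u^h - u\|_{H^n} \;\le\; \bar{c}\, h_e^{\alpha}\, \|u\|_{H^r},
\qquad \alpha = \min(k+1-n,\; r-n).
\]

The exponent alpha is exactly the one from the interpolation error estimate; only the constant changes.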
When we come down to the second line, what we are saying is that the quantity we wrote on the first line is itself less than or equal to what we wrote on the second line; it is just like taking what we have on the second line and moving it up next to the first. That is the convention when one writes an inequality chain over multiple lines. What this second step is stating is, in fact, the best approximation property, because there we did indeed prove that the energy norm of the error is less than or equal to the energy norm of this particular quantity. But now note the functions over which that property holds: they were just functions living in S^h and satisfying the Dirichlet boundary condition, and included among them is the interpolant ũ^h. Since ũ^h also lives in S^h and is, in fact, equal to the exact solution at the nodes, it does satisfy those conditions, so it is indeed one of those members. So the reason we get this step is the best approximation property together with the fact that ũ^h belongs to S^h. All right, but now one can again invoke the result stated before we started this theorem, on the equivalence of the energy norm and the Hn norm: the energy norm we have on the right is itself less than or equal to some other constant c2 times the Hn norm of ũ^h − u. This step follows just from our statement on the equivalence of the Hn and energy norms. Continuing, we recall our interpolation error estimate. That estimate is the one that applies to ũ^h − u, because that difference is precisely the interpolation error.
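To keep track, the chain of inequalities assembled so far is:

\[
c_1\,\|u^h - u\|_{H^n}
\;\le\; \|u^h - u\|_{E}
\;\le\; \|\tilde{u}^h - u\|_{E}
\;\le\; c_2\,\|\tilde{u}^h - u\|_{H^n},
\]

where the first step is the equivalence of norms, the second is the best approximation property (using that ũ^h belongs to S^h), and the third is the equivalence of norms again. The interpolation error estimate will bound the rightmost term.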
And that estimate said that the Hn norm of ũ^h − u is less than or equal to some constant C times h_e to the power alpha, times the Hr norm of our exact solution. That is the interpolation error estimate. All right, but now look at where we've got. We started out with c1 times the Hn norm of our finite element error, and through this process of repeatedly invoking the various results we have accumulated, we have arrived at the fact that it is bounded from above by the term on the right hand side. Let me now collect these results: the Hn norm of the finite element error is less than or equal to c2, which is a constant, times C, which is a constant, divided by c1, which is yet another constant, times h_e to the power alpha, times the Hr norm of u. This combination of constants is our new constant c bar. So our final result is that the finite element error in the Hn norm is less than or equal to c bar times h_e to the power alpha times the Hr norm of u. That's our result. Now let's spend a couple of minutes examining it. Consider cases where the exact solution is very smooth. Recall that alpha is defined as the minimum of two quantities: k + 1 − n and r − n; since we are working with the Hn norm here, both exponents lose n. For u sufficiently smooth, that is, for r large enough, we see that alpha is indeed equal to k + 1 − n, where k is the polynomial order of our basis functions and n is the number of derivatives we take in calculating the norm of the error. All right, to get a little more insight, let's consider special cases. Consider n = 1.
So we are considering the H1 norm. What do we mean by the H1 norm of the error? We are looking at the error in the finite element solution, square-integrating it, and also square-integrating the derivative of the error. So we are trying to gain control not only over the error but also over its derivative; we are demanding that the error be somewhat smooth, because we want even its derivative not to get very big. So not only should the error not be too large, it should also be smooth. In this case, our result becomes: the H1 norm of the error is less than or equal to our constant c bar times h_e to the power k + 1 − 1, that is, to the power k, times the Hr norm of u. So if we are looking at the H1 norm of the error, which controls the error itself and its derivative, it converges at the rate k, our polynomial order. For linear basis functions, k = 1, and the error in the H1 norm is bounded by c bar times h_e (I should not say proportional; it is bounded). So indeed, as we refine our mesh and h_e tends to zero, the H1 norm of the error will also tend to zero. But note that if we were to go to higher-order polynomials, the same H1 norm of the error would converge at a higher rate. This is the main result we wanted to establish: as we go to higher and higher order polynomials, our error converges more rapidly. Now, if we want to bound the error in the L2 norm, which is equivalent to the H0 norm, there is an additional technique that needs to be invoked, called the Aubin–Nitsche method.
The Aubin–Nitsche method we won't go through, because it takes a lot more work, but in this setting the result is essentially the following. Consider the L2 error, and let's make explicit that we are talking about the L2 error: now we are just square-integrating the error, with no derivatives. The form of the bound is essentially the same, but in this case we get h_e to the power k + 1: with no derivatives we don't lose any powers, so instead of k + 1 − n we simply have k + 1, times the Hr norm of the exact solution. Note that, of course, we are assuming the exact solution is sufficiently smooth, so that the exponent alpha always reduces to k + 1. So this is the result for the L2 error. What this means is that for k = 1, the L2 error of our finite element solution converges quadratically, at h_e to the power 2; for k = 2, it converges at the rate h_e to the power 3; and so on. So, essentially, this summarizes the key result used in conventional finite element error analysis. It says that as we refine the mesh, our solution converges, and depending on the measure of error we want to use, whether just the L2 norm of the error, which means square-integrating the error and making sure that quantity decreases, or a norm that also includes derivatives of the error, we can address all of those questions. In all of this, at least in the way I have tried to bring out what is happening and give you greater insight, we are assuming that the exact solution itself is smooth enough. And note that this is a real requirement: if you set out to solve a problem whose exact solution is very irregular, you cannot hope to converge to it as easily.
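As a quick numerical illustration of these rates (not part of the lecture's analysis), here is a minimal sketch: linear elements (k = 1) applied to the model problem −u″ = π² sin(πx) on (0, 1) with u(0) = u(1) = 0, whose exact solution is u(x) = sin(πx). Halving the element size should roughly quarter the L2 error, so the observed rate log2(e_h / e_{h/2}) should be close to k + 1 = 2. The particular model problem, the lumped nodal quadrature of the load, and the sampled L2 norm are my own illustrative choices, not the lecture's.

```python
import numpy as np

def solve_poisson_1d(n):
    """Linear (k = 1) FEM for -u'' = pi^2 sin(pi*x) on (0, 1),
    u(0) = u(1) = 0; the exact solution is u(x) = sin(pi*x)."""
    h = 1.0 / n                        # uniform element size h_e
    x = np.linspace(0.0, 1.0, n + 1)
    # Assembled stiffness matrix for the interior nodes (tridiagonal)
    K = (2.0 * np.eye(n - 1)
         - np.eye(n - 1, k=1)
         - np.eye(n - 1, k=-1)) / h
    # Load vector via nodal (lumped) quadrature -- a choice of this sketch
    b = h * np.pi**2 * np.sin(np.pi * x[1:-1])
    u = np.zeros(n + 1)                # homogeneous Dirichlet BCs at both ends
    u[1:-1] = np.linalg.solve(K, b)
    return x, u

def l2_error(n, n_samp=4001):
    """Discrete L2 norm of u^h - u, sampled on a fine uniform grid."""
    x, u = solve_poisson_1d(n)
    xs = np.linspace(0.0, 1.0, n_samp)
    err = np.interp(xs, x, u) - np.sin(np.pi * xs)  # piecewise-linear u^h
    return np.sqrt(np.mean(err**2))

e_coarse, e_fine = l2_error(16), l2_error(32)
rate = np.log2(e_coarse / e_fine)      # observed order; expect about k + 1 = 2
print(f"L2 errors: {e_coarse:.2e}, {e_fine:.2e}; observed rate: {rate:.2f}")
```

Repeating the experiment with quadratic elements (k = 2) would, by the same theorem, push the observed L2 rate toward 3, while measuring the error in the H1 norm instead would drop the rate by one power of h_e.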
It just becomes much more difficult. There are methods that can be used to address such problems, and to carry out the error analysis of those methods. The other thing I should note is that our interest in always looking at errors that involve an integral over the domain is precisely to make sure that we are controlling the error over the entire domain. This is why we always take the error itself, square it, and integrate, or first take a derivative of it and then square and integrate that: the whole reason for integrating is to make sure we have control over the error over the whole domain. This is a good place to end the segment.