[MUSIC] Welcome again to this third week of the course, Simulation and Modeling of Natural Processes. In this module we will talk about the error we make when we approximate a numerical solution of the differential equation we just talked about. So first, let us define the concept of error. On the right of this slide you see again the same picture I used in the previous module, where we see different numerical solutions, the blue, green, and red curves, which approximate the analytical solution, the black curve, better and better. To compute the error, what we do, as shown in the left part of the slide, is basically compute the distance between the black curve and the numerical solution. This distance mainly depends on delta t. For a formal definition of the error, here is one way of computing it. There can be very different ways of computing errors, but this is the one we use here. Basically, we take the square root of the average of the squared distance between the analytical solution s(t_i) and the numerical solution s_Δt(t_i): E = sqrt( (1/N) · Σ_i (s(t_i) − s_Δt(t_i))² ). Just to fix the definitions, each point is t_i = t_0 + i·Δt, and there are a total of N points, where N is just the time at the end of our numerical evaluation minus the initial time, divided by delta t. If we don't know an exact analytical solution to compute the error, what we can also do is use a very fine delta t and take that as a reference solution, to see how the error evolves with respect to delta t. So, when we evaluate the error, we can either compute it numerically, or try to get a flavor, an analytical flavor, of how this error evolves. To do so, let's consider the first point s_1, which is at t_0 + delta t. As we saw in the previous module, this value is given by the Taylor expansion around s(t_0).
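As a sketch of this error definition, here is how one might compute E for the explicit Euler scheme applied to a simple growth equation ds/dt = a·s, whose analytical solution is s(t) = s_0·exp(a·t). The coefficient, initial value, and time interval below are illustrative assumptions, not values from the lecture:

```python
import numpy as np

# Model problem ds/dt = a*s with analytical solution s(t) = s0*exp(a*t).
# The values of a, s0, t0, t_end are illustrative choices.
a, s0, t0, t_end = 1.0, 1.0, 0.0, 2.0

def euler(dt):
    """Explicit Euler approximation of s on [t0, t_end] with step dt."""
    n = int(round((t_end - t0) / dt))      # N = (t_end - t0) / dt
    t = t0 + dt * np.arange(1, n + 1)      # points t_i = t0 + i*dt
    s = np.empty(n + 1)
    s[0] = s0
    for i in range(n):
        s[i + 1] = s[i] + dt * a * s[i]    # s_{i+1} = s_i + dt * f(s_i)
    return t, s[1:]

def rms_error(dt):
    """E = sqrt( (1/N) * sum_i (s(t_i) - s_dt(t_i))^2 )."""
    t, s_num = euler(dt)
    s_exact = s0 * np.exp(a * t)
    return np.sqrt(np.mean((s_exact - s_num) ** 2))
```

Calling `rms_error` with smaller and smaller `dt` gives the numerically measured error curve discussed in this module.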
So s_1 = s_0 + Δt·f plus a term which contains an error of order Δt squared. It is a term proportional to Δt², but we don't really know the exact value of the proportionality constant. If we push this expansion to the second step, t_0 + 2Δt, we have again, with a Taylor expansion, that s_2 = s_1 + Δt·f plus another order-Δt² term. So basically, if you substitute the first equation into the second, what you see is that s_2 equals some expression depending on s_0 and t_0, plus two order-Δt² terms. So s_2 contains twice the error. Remember that this is not a really formal way of deriving the error; it is just to give you an idea of how this error evolves. So what we see is that each time we increment the time by one Δt, we add one order-Δt² error term. So, now let's go back to the error evaluation we did before. We define E_i, which is basically one of the terms of the error we just computed: E_i = s(t_i) − s_Δt(t_i), which by the argument above is proportional to i·Δt². Replacing this definition and the result of the previous slide in the definition of the error, we see in the bottom equation that E will basically be the square root of a term proportional to Δt⁴ times the sum over i from 1 to N of i², divided by N. The sum can be evaluated analytically, and its leading-order term is proportional to N cubed. So we can cancel one of these N's, and we are left with the square root of a term proportional to Δt⁴ times N². Now, if we remember that N is nothing else than the difference between the final and initial times of our simulation divided by Δt, we can further simplify this expression: sqrt(Δt⁴·N²) = Δt²·N, which is proportional to Δt. So for the explicit Euler scheme, we are left with an error which is proportional to Δt. So what does that mean?
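The accumulation argument above, a local error of order Δt² per step combined with N ∝ 1/Δt steps giving a global error of order Δt, can be illustrated with a minimal sketch. The test equation ds/dt = s with s(0) = 1 and exact solution exp(t) is assumed here for illustration:

```python
import numpy as np

# Test problem ds/dt = s, s(0) = 1, exact solution exp(t) (illustrative).
def global_error(dt, t_end=1.0):
    """Absolute error of explicit Euler at t_end, for step size dt."""
    n = int(round(t_end / dt))       # N = t_end / dt steps
    s = 1.0
    for _ in range(n):
        s += dt * s                  # one Euler step; local error O(dt^2)
    return abs(np.exp(t_end) - s)    # N steps accumulate to O(dt) globally

for dt in (0.1, 0.05, 0.025):
    print(dt, global_error(dt))
```

Each halving of `dt` roughly halves the error, consistent with a first-order scheme.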
It means that when you decrease the value of delta t, your error will decrease too. For instance, if you decrease delta t by a factor of 2, your error will also decrease by a factor of 2. This can be seen in the picture on this slide, where we take the example of the growth of a population, for which we know an analytical solution, perform several simulations with different resolutions, and plot the error. The dots are the measured points of our error; the line is just a 1/N curve. Here we are on a logarithmic scale, so this is simply a straight line with slope −1. To finish this module, I would like to add a definition for the order of a numerical scheme. A scheme is said to be of order k when its error is proportional to delta t to the power k. Okay, so this ends the module on the error of an approximation. In the next module, we'll talk about another class of numerical integration schemes. Thank you for your attention. [MUSIC]
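The slope of −1 on the log-log plot can be checked numerically by fitting log(error) against log(delta t). Here is a sketch, again assuming the illustrative test problem ds/dt = s with exact solution exp(t):

```python
import numpy as np

# Empirically estimate the order k of explicit Euler by fitting the slope
# of log(error) versus log(dt). Problem and step sizes are illustrative.
def error_at(dt, t_end=1.0):
    n = int(round(t_end / dt))
    s = 1.0
    for _ in range(n):
        s += dt * s                     # explicit Euler for ds/dt = s
    return abs(np.exp(t_end) - s)

dts = np.array([0.1, 0.05, 0.025, 0.0125])
errs = np.array([error_at(dt) for dt in dts])
k, _ = np.polyfit(np.log(dts), np.log(errs), 1)
print(f"measured order k = {k:.2f}")    # close to 1 for a first-order scheme
```

In a log-log plot, an error proportional to Δt^k appears as a straight line of slope k, which is exactly what the fitted coefficient measures.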