Welcome back. We are moving steadily ahead with our analysis of the Euler family of time integration algorithms for our parabolic PDE. What we managed to accomplish in the last segment was an understanding of their stability properties, and I'm going to start this next segment by very quickly summarizing those results and moving on from there. Okay. So, the result that we had was for stability. If we look at the stability results in terms of the parameter alpha, what we find is that for alpha greater than or equal to one half, we have algorithms that are unconditionally stable. What this means is that it does not matter how big our time step is: any delta t greater than 0 will give us stability. For the other cases, alpha less than one half, and remember alpha has to lie between 0 and 1, so it can't get smaller than 0, what we have are methods that are conditionally stable. The condition that we obtain is one on delta t. We find in particular that delta t must be less than or equal to 2 over (1 minus 2 alpha) times lambda^h. Now, I want to say something more about this result. Observe that lambda^h, the discrete eigenvalue corresponding to the particular mode that we are looking at, can take on different values for different modes. So when we are looking at the full matrix-vector problem, what we need to ensure is that this conditional stability is satisfied for all modes; this result must hold for every mode. Consequently, what we want to impose is this condition for the maximum eigenvalue.
This implies that the condition we must really work with for our matrix-vector problem is that delta t is less than or equal to 2 over (1 minus 2 alpha) times lambda^h_max, the maximum eigenvalue over all modes. And that is something we obtain from an eigenvalue decomposition of the problem, which we have also looked at. The final result we noted is that lambda^h, and therefore lambda^h_max as well, varies as the element size to the minus 2. What this means for our methods is that the spatial discretization does indeed affect our time integration; in particular, it affects our choice of time step. If one is following the conditionally stable branch of these algorithms, delta t_max goes as h squared. Consequently, as we refine the spatial mesh, we are constrained to using smaller and smaller maximum time steps. Okay, so that is a summary of our stability results. What I would like to do to move on is revisit our amplification factor. So let's recall the amplification factor, which we denoted A: A equals (1 minus (1 minus alpha) delta t lambda^h) divided by (1 plus alpha delta t lambda^h). This is the amplification factor, and we recall also how it applies to our algorithmic problem: we have d_{n+1} equals A d_n. Now, what I want to do is use this to look at the effect the amplification factor has on the behavior of high order modes. First of all, what do high order modes mean? What is a mode of high order? What characterizes the order of a mode?
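To make the h-squared scaling concrete, here is a small sketch of my own (not from the lecture): I use a 1D finite-difference Laplacian on the unit interval as a stand-in for our discrete eigenproblem, since its largest eigenvalue shows the same h^(-2) growth, and then evaluate the conditional-stability bound delta t_max = 2 / ((1 - 2 alpha) lambda^h_max). The function names and mesh sizes are illustrative choices.

```python
import numpy as np

# Stand-in for the discrete operator: the 1D finite-difference Laplacian
# (tridiagonal -1, 2, -1, scaled by 1/h^2) on a uniform mesh of the unit
# interval. Its largest eigenvalue grows like h^-2, just as stated for
# lambda^h_max in the lecture.
def lambda_max(n_elements):
    h = 1.0 / n_elements
    n = n_elements - 1                      # number of interior nodes
    A = (np.diag(2.0 * np.ones(n))
         - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h**2
    return np.linalg.eigvalsh(A).max()

# Conditional-stability bound for alpha < 1/2:
# delta t <= 2 / ((1 - 2 alpha) * lambda_max)
def dt_max(alpha, lam_max):
    return 2.0 / ((1.0 - 2.0 * alpha) * lam_max)

lam_coarse = lambda_max(10)    # h = 1/10
lam_fine = lambda_max(20)      # h = 1/20
print(lam_fine / lam_coarse)   # ~ 4: halving h roughly quadruples lambda_max
# ... so the forward-Euler (alpha = 0) critical time step shrinks like h^2:
print(dt_max(0.0, lam_fine) / dt_max(0.0, lam_coarse))   # ~ 1/4
```

Refining the mesh by a factor of 2 roughly quadruples the largest eigenvalue, which is exactly why the maximum stable time step falls off as h squared.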
It's not simply the number we ascribe to it, the m or the l that we've been using. We know that m runs from 1 to n_dof, and where m appears is in labeling the mode, psi_m. Just having a large number m does not make it a high order mode; that is simply an arbitrary choice we have made in labeling the modes. So what does make a mode high order? It is the value of lambda^h. Essentially, large values of lambda^h are the higher eigenvalues, and in any linear system those are the higher order modes; for our system, it turns out that those are the modes with higher spatial frequencies. So it's lambda^h, but once we know we have a time step, it is convenient to work in terms of lambda^h delta t, because this is how it shows up in our problem. All right, so let's look at the effect on our amplification factor. To do that, I am going to plot the amplification factor on the vertical axis, and on the horizontal axis I'm going to plot the quantity lambda^h delta t. The idea is that we are looking at the effect of higher order modes, but given the way we are plotting it, we could get to the same behavior either by lambda^h being large or just by having a very large delta t. So here I'm going to plot the amplification factor, the formula for which we have on the previous slide. It is useful, by the way, also to look at what the amplification factor is for our exact equation, the time-exact equation, and I'm going to write that up here.
All right, for the time-exact single degree of freedom model equation, we know that d(t) equals d(0) times the exponential of minus lambda^h t. So effectively we have here a sort of time-continuous amplification factor. If we apply again the idea that we want to look at how the solution varies from one time step to the next, from t_n to t_{n+1}, what you will see is that the amplification factor for the exact problem is exp(minus lambda^h delta t). The idea is that you could take this exact equation, simply write it out as a mapping from d_n to d_{n+1}, and then you will see that the amplification factor you get is exactly exp(minus lambda^h delta t). That is just an exponentially decaying function, so let's draw that one first. We see that this amplification factor has a maximum value of 1, as written here, for the exact problem. I'm going to try to draw an exponentially decaying function, and hopefully I can draw it reasonably smoothly. This is what I'm going to call A_exact, and in parentheses here I'm going to write the limit that A_exact tends to as lambda^h delta t tends to infinity. Clearly, as lambda^h delta t tends to infinity, A_exact goes to 0. Okay, now we're going to return to our actual, algorithmic amplification factor, and look at what value it takes in this limit for the different members of our family. Since I have all this room here, I'm going to make use of it and write the amplification factor here. Recall that it is (1 minus (1 minus alpha) delta t lambda^h) divided by (1 plus alpha delta t lambda^h). Because we want to look at it in the limit as lambda^h delta t tends to infinity, I'm going to simply divide through by that quantity.
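A quick numerical sanity check of my own on the claim just made: for the exact solution d(t) = d(0) exp(-lambda^h t), the ratio between consecutive time levels is always exp(-lambda^h delta t), independent of which step you examine. The values of lambda, delta t, and d(0) below are made up for illustration.

```python
import math

# Illustrative values (not from the lecture)
lam, dt, d0 = 3.0, 0.1, 2.5

def d_exact(t):
    # Exact solution of the single-DOF model equation d' = -lam * d
    return d0 * math.exp(-lam * t)

# Time-continuous amplification factor: the mapping d_n -> d_{n+1}
A_exact = math.exp(-lam * dt)

# The step-to-step ratio equals A_exact at every step n
for n in range(5):
    tn = n * dt
    ratio = d_exact(tn + dt) / d_exact(tn)
    assert abs(ratio - A_exact) < 1e-12
print(A_exact)    # exp(-0.3), about 0.74
```

The point is simply that the exact evolution over one step is multiplication by exp(-lambda^h delta t), which is why it plays the role of the "exact amplification factor" against which the algorithmic ones are compared.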
So I get (1 over lambda^h delta t, minus (1 minus alpha)) divided by (1 over lambda^h delta t, plus alpha). Now, for various values of alpha, I'm going to look at what happens in the limit as lambda^h delta t tends to infinity. Let's start at the top of the range of alpha, say alpha equals 1. For alpha equals 1, when lambda^h delta t tends to infinity, if you look at that limit, you basically get 0. So let me plot this function now, using black here; it goes something like this. I should admit that I'm not paying very careful attention to how it behaves relative to the exact exponential at the left end of the plot; you can check that, so don't worry about the behavior on the left. What I'm interested in is the behavior for high order modes, which is lambda^h delta t tending to infinity. So what we have here is A, and instead of writing "exact", I'm going to write the value of alpha as a subscript, and of course I'm using the colors. So A_1, the amplification factor for the backward Euler algorithm, alpha equals 1, also goes to 0 as lambda^h delta t tends to infinity. Let me change colors here again, go back to red, and write the limits for the other members, the ones that we really care about here. Let me write alpha equals one half, which is the Crank-Nicolson method. Substituting that value of one half in there, we see that the amplification factor goes to minus 1. And for alpha equals 0, which is the forward Euler method, what do we have? We see that the limit doesn't actually exist.
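These three limits can be checked numerically. Here is a minimal sketch of my own, evaluating the algorithmic amplification factor at a very large value of lambda^h delta t; the specific value 1e8 is an arbitrary stand-in for "tending to infinity".

```python
# Algorithmic amplification factor of the Euler family,
# A = (1 - (1 - alpha) x) / (1 + alpha x), with x = lambda^h * delta t
def A(alpha, x):
    return (1.0 - (1.0 - alpha) * x) / (1.0 + alpha * x)

big = 1.0e8          # stand-in for lambda^h * delta t -> infinity
print(A(1.0, big))   # backward Euler: ~ 0
print(A(0.5, big))   # Crank-Nicolson: ~ -1
print(A(0.0, big))   # forward Euler: 1 - x, unbounded (here about -1e8)
```

The general pattern, visible by dividing through as in the lecture, is that A tends to -(1 - alpha)/alpha for alpha > 0, which gives 0 at alpha = 1, -1 at alpha = 1/2, and blows up as alpha approaches 0.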
Because when we simply set alpha equal to 0, that fraction becomes unbounded. But allow me to write it loosely: it goes to infinity. Strictly speaking, infinity is not a limit, but you know what I mean. All right, so this is the situation we have. Let's look at how these things appear. If that is 1, then this is minus 1. Okay, I'm going to show you what Crank-Nicolson looks like on this plot, in green. Crank-Nicolson essentially tends in the limit to minus 1. So A_{1/2}, or the midpoint rule, as I used to call it, tends to minus 1. And finally, for our forward Euler algorithm, since we've already plotted backward Euler, what we see is that it is a straight line with a negative slope; it falls off and leaves the plot. So this is A_0, the forward Euler algorithm, which actually goes to minus infinity. So that's how they behave, and you can plot some of the others in between here as well. So, what does this mean? If we stare at this plot, what you see is that when it comes to the high order modes, it is backward Euler which has the behavior of the exact equation. Alternately, we may state this by saying that backward Euler tends to dissipate the high order modes. This is numerical dissipation, in addition to the physical dissipation that exists with heat conduction or mass diffusion.
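This numerical dissipation can be seen directly by marching a single high-order mode with d_{n+1} = A d_n. The sketch below is my own illustration with made-up numbers: lambda^h delta t = 10 stands in for a high-order mode or a large time step.

```python
# Algorithmic amplification factor, A = (1 - (1 - alpha) x) / (1 + alpha x),
# with x = lambda^h * delta t (x = 10 here, an illustrative "high mode" value)
def A(alpha, x):
    return (1.0 - (1.0 - alpha) * x) / (1.0 + alpha * x)

x, steps = 10.0, 4
final = {}
for name, alpha in [("backward Euler", 1.0), ("Crank-Nicolson", 0.5)]:
    d, history = 1.0, []
    for _ in range(steps):
        d = A(alpha, x) * d        # one step: d_{n+1} = A d_n
        history.append(round(d, 4))
    final[name] = d
    print(name, history)
# backward Euler: A = 1/11, so the mode is wiped out within a few steps
# (numerical dissipation, mimicking the exact exponential decay);
# Crank-Nicolson: A = -2/3, so the mode decays slowly while flipping
# sign every step, a spurious oscillation with no exact counterpart.
```

This is the practical content of the plot: backward Euler damps the high-order modes the way the exact equation does, while Crank-Nicolson lets them linger as slowly decaying, sign-alternating oscillations.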