Welcome back. In the previous segment we derived the matrix-vector form of the problem of linear elastodynamics in 3D. Now we proceed with the time discretization, and with understanding a little about the methods used to solve this particular problem, which is now a system of ODEs. So let's start with the matrix-vector problem. What we derived at the end of the last segment was the following: M d̈ + K d = F. This is second order in time, and we have initial conditions: d at time t = 0 is d_0, and ḋ at time t = 0 is the vector v_0. We could proceed from here, but it is useful to include one extra element. That extra element is something of a throwback to the times, really even before finite element methods became very popular in structural mechanics, when it was common to write out matrix equations of this sort for structures. The notion of using nodes and degrees of freedom had already been established, especially in the context of structures like trusses and frames. In that setting of structural mechanics, it was common to include, in addition to the mass and stiffness matrices that we see here, a damping matrix. Here is how this was done, to include the effect of structural damping: we introduce a damping matrix C; I'm just following the standard notation that tended to be followed in this business. C would often be modeled using what is called Rayleigh damping, in a very simple, very empirical manner: take some constant a multiplying the matrix M, plus some other constant b multiplying the matrix K. Just like that. Empirical, but it was found to work.
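As a minimal sketch of this Rayleigh construction (not from the lecture; the matrices M, K and the constants a, b below are made up purely for illustration):

```python
import numpy as np

# Toy 2-DOF system; M, K, and the constants a, b are invented for illustration.
M = np.diag([2.0, 1.0])               # mass matrix
K = np.array([[ 3.0, -1.0],
              [-1.0,  2.0]])          # stiffness matrix

a, b = 0.1, 0.02                      # empirical Rayleigh constants
C = a * M + b * K                     # Rayleigh damping matrix
```

Because C is just a linear combination of M and K, it inherits their symmetry, which is one reason the model is convenient in practice.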
There are other reasons why this works, but we don't need to go into them. This is the model for what is called Rayleigh damping: C = a M + b K, where a and b are constants. And that was it; there was no attempt to derive this damping matrix from any more fundamental partial differential equation. That can be done, but it was not necessarily done; the matrix was just written out explicitly in this form. The form in which the damping would be included was the following. We now get the equations of elastodynamics with structural damping: M d̈ + C ḋ + K d = F. The idea was that this models essentially some sort of viscous damping, and you see the effect of viscosity, if you are familiar with that physical phenomenon, in the fact that the extra term carries a single time derivative, a first time derivative, on the d vector. Plus initial conditions, as usual. This is essentially the model. From here, one can go on and write the time-discretized form, just as before. For the time discretization we do exactly what we did before: we write our interval [0, T] as the union of subintervals [t_0, t_1], and so on, all the way up to [t_{N-1}, t_N], where, in the way I've set up the time interval, t_0 = 0 and t_N = T. We have everything just as we knew it from before. Then we say again that d_n is the time-discrete approximation (write "approx" for short) to d(t_n).
Just as for the parabolic problem, we then get our time-discrete matrix-vector equation. Let me go back for a second to have you look at the equation: M d̈ + C ḋ + K d = F. Because d is of course the displacement, d̈ is essentially the acceleration; likewise ḋ is indeed the velocity vector at the global degrees of freedom, and d is the displacement vector. With this in mind, the time-discrete matrix-vector equation is often written as M a_{n+1} + C v_{n+1} + K d_{n+1} = F_{n+1}, where a_{n+1} is the approximation to the acceleration at time t_{n+1} and v is the velocity, with initial conditions now being that d_0 and v_0 are known. Now, the family of algorithms commonly used to solve this time-discrete form of the equation is what is called the Newmark family of algorithms for second-order ODEs; they are second order because they are second order in time. The way this family works is the following. We need some parameters for this family. Just as the Euler family of algorithms for first-order equations has the parameter α, here, because these are second-order ODEs, it turns out we need two parameters. One of them I'm going to denote γ, and γ belongs to the closed interval [0, 1], just as α did. The other I'm going to write as 2β, where 2β belongs to [0, 1]; alternately, β belongs to the closed interval [0, 1/2]. Now, with this in place, here's how the Newmark family works.
It says that d_{n+1} = d_n + Δt v_n + (Δt²/2) [(1 − 2β) a_n + 2β a_{n+1}], where a_n is the approximation to the acceleration at time t_n and a_{n+1} the approximation at time t_{n+1}. And because this is an algorithm for a second-order equation, we need something for v_{n+1} as well: v_{n+1} = v_n + Δt [(1 − γ) a_n + γ a_{n+1}]. Those equations, together with our time-discrete matrix-vector equation written above, and of course the initial conditions, which here are just that d_0 and v_0 are known, summarize our family of algorithms for linear elastodynamics. Now, let's talk about solution techniques. I'm going to talk about a single approach, not two approaches as we did for the parabolic problem: what is called the a-method, the "a" being for acceleration. Here is how it works. We again define predictors and correctors. We say that d̃_{n+1} = d_n + Δt v_n + (Δt²/2)(1 − 2β) a_n; that's the predictor for d. The predictor for v is ṽ_{n+1} = v_n + Δt (1 − γ) a_n; nothing extra is needed here, it's just Δt. Just as we did before for the parabolic problem, we've looked at the update formulas for d and v and simply extracted those parts of the formulas that come from everything known at time t_n. So those are our predictors. The correctors are obtained by simply writing d_{n+1} = predictor + corrector: the corrector for d_{n+1} is Δt² β a_{n+1}, and the corrector for v_{n+1} is Δt γ a_{n+1}.
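The predictor and corrector formulas above can be sketched as plain functions (an illustration only; the function names are mine, not the lecture's, and the check that predictor plus corrector reproduces the full Newmark update is the point of the sketch):

```python
def newmark_predictors(d_n, v_n, a_n, dt, beta, gamma):
    """Predictor step: the parts of the Newmark updates known at time t_n."""
    d_tilde = d_n + dt * v_n + 0.5 * dt**2 * (1.0 - 2.0 * beta) * a_n
    v_tilde = v_n + dt * (1.0 - gamma) * a_n
    return d_tilde, v_tilde

def newmark_correctors(d_tilde, v_tilde, a_np1, dt, beta, gamma):
    """Corrector step: add the parts that depend on a_{n+1}."""
    d_np1 = d_tilde + dt**2 * beta * a_np1
    v_np1 = v_tilde + dt * gamma * a_np1
    return d_np1, v_np1
```

Adding the two pieces together recovers exactly the Newmark update formulas for d_{n+1} and v_{n+1}.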
These are the correctors; what I've written now is the corrector step for both quantities. Just like the methods in the case of the parabolic problem, the a-method is obtained by substituting these corrector steps into the original equation. On substituting, we get M a_{n+1} + C v_{n+1} + K d_{n+1} = F_{n+1}, where C multiplies v_{n+1} written as predictor plus corrector, and K multiplies d_{n+1}, again predictor plus corrector; that is the entire left-hand side. This is the so-called a-method: by using those predictors and correctors, what we've done is to rewrite the equation entirely in terms of a_{n+1} and the predictors for d and v. This lets us rewrite it as (M + Δt γ C + Δt² β K) a_{n+1} = F_{n+1} − C ṽ_{n+1} − K d̃_{n+1}, where the terms multiplying the predictors have been moved over to the right-hand side. And why can we do this? Because the predictors are known: they depend only upon the solution at step n, which we always assume we know when we construct these time-stepping algorithms. That's our method. We can now go ahead and invert this and solve for a_{n+1}; once we have a_{n+1}, our corrector steps give us back d_{n+1} and v_{n+1}.
The only thing we need in order to start up this algorithm is a at 0. To get a_0, we just use the equation at time 0, and by the equation here I mean the time-discrete equation: M a_0 = F_0 − C v_0 − K d_0. This works because v_0 and d_0 are known; they are just the initial conditions. So there we have it: that is our standard solution approach for this problem. We can end this segment here. When we return, we will start our analysis, and that analysis also is going to be based upon our approach of modal decompositions.
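Putting the segment together, here is one possible sketch of the full a-method loop in NumPy, including the start-up solve for a_0. This is illustrative only: the function name and interface are mine, and the defaults β = 1/4, γ = 1/2 are just one common choice of Newmark parameters, not something mandated by the lecture.

```python
import numpy as np

def a_method(M, C, K, F, d0, v0, dt, nsteps, beta=0.25, gamma=0.5):
    """Sketch of the a-method for M a + C v + K d = F.

    F is a callable returning the load vector at step n. The defaults
    beta = 1/4, gamma = 1/2 are an assumed common choice.
    """
    d = np.asarray(d0, float).copy()
    v = np.asarray(v0, float).copy()
    # Start-up: the time-discrete equation at step 0 gives a_0,
    # since d_0 and v_0 are the known initial conditions.
    a = np.linalg.solve(M, F(0) - C @ v - K @ d)
    # The matrix multiplying a_{n+1} is constant in time,
    # so in practice it would be factored once and reused.
    A = M + dt * gamma * C + dt**2 * beta * K
    for n in range(nsteps):
        # Predictors: everything known at time t_n.
        d_tilde = d + dt * v + 0.5 * dt**2 * (1.0 - 2.0 * beta) * a
        v_tilde = v + dt * (1.0 - gamma) * a
        # Solve (M + dt*gamma*C + dt^2*beta*K) a_{n+1} = F_{n+1} - C v~ - K d~.
        a = np.linalg.solve(A, F(n + 1) - C @ v_tilde - K @ d_tilde)
        # Correctors: recover d_{n+1} and v_{n+1}.
        d = d_tilde + dt**2 * beta * a
        v = v_tilde + dt * gamma * a
    return d, v, a
```

As a quick sanity check, running this on an undamped scalar oscillator (M = K = 1, C = 0, F = 0, d_0 = 1, v_0 = 0) with the default parameters keeps d² + v² essentially constant and tracks cos(t) closely.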