So, we will continue. What we observed at the end of the last segment is that the only contribution we really need to worry about, the only one we should expect to be any different, is the new one: the time-dependent term

∫_Ω w^h ρ ∂u^h/∂t dV.

This is the one we need to consider. It should be completely straightforward, because we already have our expansions for w^h and u^h. In fact, let's just make one observation first. In order to handle this term we need the partial derivative of u^h with respect to time, and furthermore we need it restricted to a particular element. That partial time derivative over an element Ω^e is nothing other than

∂u^h/∂t restricted to Ω^e = Σ_A N_A ḋ_A^e,

where the dot denotes the time derivative, and we know all about those basis functions N_A. The way this works out, then, is that

∫_Ω w^h ρ ∂u^h/∂t dV = Σ_e ∫_{Ω^e} (Σ_A N_A c_A^e) ρ (Σ_B N_B ḋ_B^e) dV,

where the first parenthesis is what gives us the w^h term. I'm just going to carry along the sum over elements e, because we've done this sort of thing so often in the past that we know how it all works out. As before, I now pull out the sums over A and B:

Σ_e Σ_A Σ_B c_A^e (∫_{Ω^e} N_A ρ N_B dV) ḋ_B^e.

If you stare at the integral I've put in parentheses, it should be clear that for a given element, when you carry out that integral for a combination of basis functions on nodes A and B, you get a scalar in every case. The resulting terms are of a form we have not encountered before. If you think about the term that gives rise to our conductivity matrix, or our diffusivity matrix for this sort of problem, it involves spatial derivatives of N_A and N_B; we have no spatial derivatives in this term, so it is a different type of term. I am going to denote this integral M_AB^e. These M_AB^e terms can be further assembled into matrix-vector form, and we know how that happens. We use c^e = [c_1^e, …, c_{n_ne}^e]^T, where n_ne is the number of nodes in the element, and likewise d^e = [d_1^e, …, d_{n_ne}^e]^T. However, in the particular form we are dealing with here, what appears is ḋ^e, because we get the time derivative of each one of those degrees of freedom.
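To make that element-level integral concrete, here is a minimal Python sketch of how M_AB^e = ∫_{Ω^e} N_A ρ N_B dV might be computed for a single 1D linear element with constant ρ, using two-point Gauss quadrature on the parent domain. The function name, the 1D linear-element setting, and the constant-ρ assumption are illustrative choices for this example, not something fixed by the lecture.

```python
import numpy as np

def element_mass_matrix(h_e, rho=1.0):
    """Consistent element mass matrix M_AB^e = ∫_{Ω^e} N_A ρ N_B dV for a
    1D linear element of length h_e (illustrative helper, not course code)."""
    # Two-point Gauss rule on the parent domain ξ ∈ [-1, 1]; it integrates
    # the quadratic integrand N_A * N_B exactly.
    xi = np.array([-1.0, 1.0]) / np.sqrt(3.0)
    w = np.array([1.0, 1.0])
    M = np.zeros((2, 2))
    for q in range(2):
        # Linear basis functions N_1, N_2 evaluated at the quadrature point
        N = np.array([0.5 * (1.0 - xi[q]), 0.5 * (1.0 + xi[q])])
        # dV = (h_e / 2) dξ is the Jacobian of the map from the parent domain
        M += w[q] * rho * np.outer(N, N) * (h_e / 2.0)
    return M

# For rho = 1 and h_e = 1 this reproduces the familiar (h_e/6) * [[2, 1], [1, 2]]
print(element_mass_matrix(1.0))
```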
So, when we put all of that together, what we observe is that this integral we started out with, this mystery new integral ∫_Ω w^h ρ ∂u^h/∂t dV, essentially reduces to

Σ_e (c^e)^T M̄^e ḋ̄^e,

where ḋ̄^e is the time derivative of d̄^e. By defining these element-level vectors we get rid of the explicit summation over nodes A and B that we had on the previous slide; we are masters of this as well. What that means is that our scalar M_AB^e terms slot themselves nicely into a matrix, which I am going to call the M̄^e matrix, and it multiplies the vector ḋ̄^e. You also know why I am using bars here rather than the final forms: because of the issue of Dirichlet conditions. After we apply the Dirichlet conditions, the final matrices are going to be reduced in dimension, and so is the final d, or rather ḋ, vector, through exactly the process we've observed before for handling Dirichlet conditions. It is in preparation for, and in anticipation of, that step that I am calling this matrix M̄^e and that vector d̄^e. That's all. Now, this matrix is what is often called the mass matrix. In the context of finite element methods, any matrix obtained by directly multiplying the basis functions, with no spatial derivatives on them, and integrating over the domain, perhaps weighted by ρ (in some cases ρ could be 1, so that case is also covered), is called a mass matrix; in this case it is, of course, the element mass matrix. It is a new matrix that we have not previously encountered. Now, I should make a remark here. If we have a general element that does not have one of its surfaces coinciding with the Dirichlet boundary, then of course we know that the c^e vector for that element is going to be full: it will have as many entries as the number of nodes in the element, and likewise the d̄^e vector. For that case, what sort of a matrix is M̄^e? So, for a general element Ω^e such that ∂Ω^e ∩ (Dirichlet boundary) = ∅, that is, an element that does not have one of its surfaces coinciding with the Dirichlet boundary, we observe that dim(c^e) = n_ne. What does that imply? For such an element, what can you tell me about the M̄^e matrix?
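Before answering that, here is a rough Python sketch of how the scalar M_AB^e entries slot into element matrices and then into the global, not-yet-reduced matrix M̄, for an assumed uniform 1D mesh of linear elements. The closed-form element matrix, the function name, and the node-numbering convention are all assumptions made for the illustration.

```python
import numpy as np

def assemble_global_mass(n_el, h_e, rho=1.0):
    """Assemble the global mass matrix M_bar for a uniform 1D mesh of linear
    elements, before any reduction for Dirichlet conditions (hence the bar)."""
    # Closed-form consistent element mass matrix for a 1D linear element
    Me = rho * h_e / 6.0 * np.array([[2.0, 1.0], [1.0, 2.0]])
    n_nodes = n_el + 1
    M_bar = np.zeros((n_nodes, n_nodes))
    for e in range(n_el):
        glob = [e, e + 1]                      # local-to-global node map
        for A in range(2):
            for B in range(2):
                M_bar[glob[A], glob[B]] += Me[A, B]
    return M_bar

print(assemble_global_mass(n_el=4, h_e=0.25))
```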
For such an element, M̄^e is going to have this form:

M̄^e = [ M_11^e        …  M_{1 n_ne}^e
          ⋮            ⋱      ⋮
        M_{n_ne 1}^e   …  M_{n_ne n_ne}^e ].

It should be obvious that this is a square matrix of dimension n_ne × n_ne. What more can we say about it? Take a general term M_AB^e and the term M_BA^e sitting in the transposed position in the matrix: what can we say about them? That's right, they are equal. It follows directly from the definition of those components; each of them is just the integral over the element of N_A times N_B multiplied by ρ. So M̄^e is a symmetric matrix. One can also show, using the properties of our basis functions, that the M̄^e matrix is positive definite (this is true of the assembled M̄ as well). Recall what that means: if we take any vector c^e, which can be completely arbitrary, and form the product (c^e)^T M̄^e c^e, it is greater than or equal to 0 for every c^e, and it is equal to 0 only if the vector c^e itself is 0. That is what we mean by a positive definite matrix, and indeed M̄^e is positive definite. All right, so that is one remark about the properties of the matrix. There is another remark I am going to make, which is that M̄^e, as computed here, is what we call the consistent mass matrix. One can also do a lumping. Lumping involves computing the consistent mass matrix, summing up all the entries along each row, and putting that sum on the diagonal. It is an approximation, which is why it is not called a consistent matrix. So the lumped element mass matrix, say M̃^e, is obtained as follows: on the first diagonal entry put the row sum Σ_B M_{1B}^e, likewise for all the other diagonal entries, and make every other entry 0. In components,

M̃_AB^e = Σ_C M_AC^e if A = B, and 0 otherwise,

where the sum needs a new index C, since A and B are already in use. In other words, if A equals B, which makes it a diagonal entry, you simply sum up the entries in that row and put the sum on the diagonal; otherwise the entry is zero. This is just a quick and dirty way of diagonalizing the matrix; it is not a proper diagonalization through defining orthogonal basis functions or anything like that. It is simply a way to lump the matrix and define a diagonal matrix. And you also see why it is called a lumped element mass matrix: because you are lumping all of the mass at the nodes. All right.
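The symmetry, positive definiteness, and row-sum lumping described above can all be checked numerically. A small sketch, again assuming the 1D linear element with ρ = 1 and element length 1 as a stand-in for a general M̄^e:

```python
import numpy as np

# Consistent element mass matrix for a 1D linear element (rho = 1, h_e = 1)
Me = np.array([[2.0, 1.0], [1.0, 2.0]]) / 6.0

# Symmetry: M_AB^e = M_BA^e
print(np.allclose(Me, Me.T))                    # True

# Positive definiteness: all eigenvalues of the symmetric matrix are > 0
print(np.all(np.linalg.eigvalsh(Me) > 0))       # True

# Row-sum lumping: put each row sum on the diagonal, zeros elsewhere
Me_lumped = np.diag(Me.sum(axis=1))
print(Me_lumped)                                # diag(0.5, 0.5)
```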