Okay, now you may recall how we did it in the 1D problem, and take that as a cue. Can you think of how to do it? What we need to do is recall the mapping. That mapping was written as follows: we said that x, the position vector of any point in the physical sub-domain, can be reparameterized in terms of its position in the parent sub-domain. And the way we constructed that parametrization was by using the same basis functions to interpolate, if you want to use that term, the physical coordinates of the nodes:

x(ξ) = Σ_{A=1}^{number of element nodes} N_A(ξ) x^e_A,

where A now follows the local numbering. This is our map, written in direct notation. We can also use so-called coordinate notation: each component x_i is parametrized by the full vector ξ = (ξ_1, ξ_2, ξ_3), so

x_i = Σ_A N_A(ξ_1, ξ_2, ξ_3) x^e_{Ai},

for element e and component i of the x vector. I've really written the same thing in both equations, except that the first is in direct notation and the second is in coordinate notation.

From here, just as we did in the 1D problem, we can go ahead and compute the partial derivative of x_i with respect to ξ_I:

∂x_i/∂ξ_I = Σ_A N_{A,I} x^e_{Ai}.

You note that there is a proliferation of indices here; there are superscripts and subscripts all over the place. The element label e is just coming along for the ride; it's really not doing much for us right now. So this is how we compute this derivative.

Now, of course, this doesn't help us immediately, because if you recall the form that the chain rule takes, as shown here, what we actually need is the inverse of that gradient: we need the derivative of ξ_I with respect to x_i, the thing marked with a question mark. So how do we go about that?

In order to do that, we need to observe the following about the mapping we have here, and I'm going to draw it. We have our physical element, our element in the physical domain; this is element Ω^e, which we've obtained from this nice bi-unit parent domain with coordinates ξ_1, ξ_2, ξ_3. What we've done, essentially, is to observe that given an arbitrary point ξ in the parent sub-domain, we actually have a map, and that map is x(ξ). That is a vector map. In the context of looking at configurations, especially if you have a background in continuum mechanics or some other field where you study configurations and their mappings, this is what we call a point-to-point map: a point-to-point vector map. It's almost superfluous to say "vector" there, because our representation of points is indeed as vectors; we're using position vectors.

So what this tells us is that we can compute the gradient of that map. In the context of mapping configurations, the gradient of the map is also often called the tangent map. I'm going to denote the tangent map as a tensor: because x is a vector and ξ is a vector, the gradient of x with respect to ξ is a tensor.
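As a concrete illustration of the two formulas above, the interpolated map and its gradient ∂x_i/∂ξ_I = Σ_A N_{A,I} x^e_{Ai}, here is a minimal sketch in Python. The choice of a trilinear 8-node hexahedron on the bi-unit parent cube, the node ordering, and the helper names are my own assumptions for illustration, not something fixed by the lecture.

```python
import numpy as np

# Parent-domain corner coordinates (xi_1, xi_2, xi_3) of the bi-unit hexahedron,
# one row per node A in an assumed local numbering.
XI_NODES = np.array([[-1, -1, -1], [ 1, -1, -1], [ 1,  1, -1], [-1,  1, -1],
                     [-1, -1,  1], [ 1, -1,  1], [ 1,  1,  1], [-1,  1,  1]], dtype=float)

def shape_functions(xi):
    """N_A(xi) = (1/8) * prod_I (1 + xi_I * xi^A_I), the trilinear basis on the parent cube."""
    return 0.125 * np.prod(1.0 + XI_NODES * xi, axis=1)          # shape (8,)

def shape_function_gradients(xi):
    """dN[A, I] = dN_A/dxi_I, differentiating one factor of the product at a time."""
    dN = np.empty((8, 3))
    for A, xiA in enumerate(XI_NODES):
        factors = 1.0 + xiA * xi                                  # the three (1 + xi_I * xi^A_I)
        for I in range(3):
            dN[A, I] = 0.125 * xiA[I] * np.prod(np.delete(factors, I))
    return dN

def jacobian(x_e, xi):
    """J[i, I] = dx_i/dxi_I = sum_A dN_A/dxi_I * x^e_{A i}.
    x_e has shape (8, 3): the physical nodal coordinates in local numbering."""
    return x_e.T @ shape_function_gradients(xi)                   # (3, 8) @ (8, 3) -> (3, 3)
```

In a typical finite element code the parent-domain gradients N_{A,I} would be tabulated once at the quadrature points and reused for every element, since only the nodal coordinates x^e change from element to element.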
So J is this derivative:

J = ∂x/∂ξ.

This is what sometimes gets called the Jacobian of the map. Now, that is direct notation for the tangent map; we can also use coordinate notation. In coordinate notation, the tangent map is

J_{iI} = ∂x_i/∂ξ_I,

the derivative of the coordinate x_i with respect to ξ_I. If you've studied continuum mechanics, you will recognize this to be essentially the deformation gradient from the kinematics of continuum mechanics. Anyhow, we are not going to use that nomenclature; we will just call it the Jacobian of the map, which is what it is mathematically.

Now again, it is a formal, or rigorous, detail to observe that you can truly represent a tensor only if you have a basis, and if you have a basis you can then represent tensors as square matrices. That is actually a carefully constructed argument, but we don't need to go into it here. So we can represent J as a matrix. Truly, the fact that we can represent it as a matrix comes from the fact that we have basis vectors in the physical domain as well as in the parent domain, but we won't get into that detail. The matrix J is simply

J = [ ∂x_1/∂ξ_1   ∂x_1/∂ξ_2   ∂x_1/∂ξ_3 ]
    [ ∂x_2/∂ξ_1   ∂x_2/∂ξ_2   ∂x_2/∂ξ_3 ]
    [ ∂x_3/∂ξ_1   ∂x_3/∂ξ_2   ∂x_3/∂ξ_3 ],

and you see this is just writing out explicitly, using coordinate notation, what I had on the previous slide.

Now, why should I bother with this? Because note that the map we have is continuous and smooth; it is actually what we call a C-infinity map. We are able to take an infinite number of derivatives of it. So the map x(ξ) from Ω^ξ to Ω^e is C-infinity; it is a very smooth map. If it's a very smooth map and J = ∂x/∂ξ, it turns out that, rigorously, its inverse exists. So there exists J^{-1} = ∂ξ/∂x; that is what J^{-1} is by definition. And that's easy to compute now: we have J in front of us, and we can compute it explicitly, because we do indeed have an explicit representation of each of the x_i's in terms of each of the ξ_I's. Therefore it's easy to compute J^{-1}. So J^{-1} represents ∂ξ/∂x, and its matrix representation is

J^{-1} = [ ∂ξ_1/∂x_1   ∂ξ_1/∂x_2   ∂ξ_1/∂x_3 ]
         [ ∂ξ_2/∂x_1   ∂ξ_2/∂x_2   ∂ξ_2/∂x_3 ]
         [ ∂ξ_3/∂x_1   ∂ξ_3/∂x_2   ∂ξ_3/∂x_3 ].

Since J is just a 3-by-3 matrix, it is not difficult to invert; it can actually be inverted exactly if we care to do that, and we do indeed care to do that. But then note what we've done: we have J^{-1}, and if we look at its components, (J^{-1})_{Ii}, these components are indeed the terms, or the factors, we need in the application of the chain rule.

So the key here is that because we are explicitly constructing this map, the point-to-point vector map x as a function of ξ, we can compute the tangent map. It's just 3-by-3, easy enough to handle, and it can be explicitly inverted in closed form; the components of that inverted matrix are indeed the ones we need for our chain rule.
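Continuing the sketch from above (same assumed helpers and node ordering), inverting the 3-by-3 matrix J supplies the components (J^{-1})_{Ii} = ∂ξ_I/∂x_i, which are exactly the factors the chain rule needs to turn parent-domain gradients of the basis functions into physical-domain gradients. This is an illustrative sketch only; the unit-cube check at the end is a hypothetical example.

```python
def physical_gradients(x_e, xi):
    """Chain rule: dN_A/dx_i = sum_I dN_A/dxi_I * dxi_I/dx_i, with dxi_I/dx_i = (J^{-1})_{I i}."""
    dN_dxi = shape_function_gradients(xi)       # (8, 3), derivatives with respect to xi_I
    J_inv = np.linalg.inv(jacobian(x_e, xi))    # (3, 3), J_inv[I, i] = dxi_I/dx_i
    return dN_dxi @ J_inv                       # (8, 3), dN_A/dx_i

# Hypothetical check: if the physical element is the unit cube, then x = (xi + 1)/2,
# so J = I/2, J^{-1} = 2I, and the physical gradients are twice the parent-domain ones.
x_e = 0.5 * (XI_NODES + 1.0)
print(physical_gradients(x_e, np.zeros(3)))     # gradients evaluated at the element centroid
```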
Okay, this is actually an excellent place to stop this segment.