[MUSIC] We looked at iterative methods for solving linear systems, and we discussed three methods based on slightly different splittings of the matrix into diagonal and off-diagonal parts. If this is your lower triangular part, then there is a diagonal part and an upper triangular part of the matrix: these are L, D, and U. Jacobi iteration separates out the diagonal part. Seidel's iteration separates the diagonal plus lower triangular part of the matrix. And successive over-relaxation adds to Seidel's iteration an arbitrary parameter omega, and then everything is omega dependent. Obviously, these are not the only ways of splitting my matrix. Very clearly, I can invent an infinite number of ad hoc splittings of the matrix, and then I'll have a whole zoo of algorithms and will need to analyze them one by one, with no general guiding principle for which splitting is good or bad in a given situation. So this is really ad hoc math. To bring some order, let's discuss a canonical way of writing these iterative methods. We confine the discussion to single-step methods, where the current iterate only depends on the previous step — that's why they are called single-step methods. I will write the canonical form, P (x_{k+1} − x_k) / τ_k + A x_k = b, using two parameters: a scalar τ, which can be step dependent, and an arbitrary matrix P, which again can be step dependent. Why am I writing it in this form? Well, primarily because I can. And one nice thing about this canonical form is that just by looking at the equation, it's sort of clear that if these iterations converge — if the distance between the current iterate and the previous one goes to zero — then they converge to the solution of the original system. Let me fix some definitions. First, both my tau parameter and my P matrix can be step dependent. If the P matrix and tau are constant, that is, step independent, then the method is called stationary; otherwise it's non-stationary.
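The L, D, U splitting described above is easy to make concrete. Here is a minimal NumPy sketch (the matrix values are illustrative, not from the lecture):

```python
import numpy as np

# An illustrative test matrix; the splitting A = L + D + U is the one
# described above: L strictly lower triangular, D diagonal, U strictly upper.
A = np.array([[4.0, -1.0,  0.0],
              [-1.0, 4.0, -1.0],
              [0.0, -1.0,  4.0]])

L = np.tril(A, k=-1)       # strictly lower triangular part
D = np.diag(np.diag(A))    # diagonal part
U = np.triu(A, k=1)        # strictly upper triangular part

# The three parts recombine into the original matrix.
assert np.allclose(L + D + U, A)
```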
If P is the unit matrix, then the method is explicit; otherwise it's called implicit. For an explicit method the iteration is reasonably simple: the current iterate is the previous one plus tau times the residual — the difference between the right-hand side and the left-hand side of the linear system, r_k = b − A x_k. So that's reasonably straightforward. For an implicit method, on the other hand, at each step I need to solve an auxiliary system of equations defined by this P matrix. And an obvious requirement is that this auxiliary system must be, in some sense, simpler to solve than the original one. Now, how does it work in practice? In practice, you first compute the residual — if you have the current iterate, that's trivial, you just compute it. Then you solve the auxiliary system for an auxiliary quantity, with the residual as the right-hand side and P as the matrix on the left-hand side. And once you've done this, you have the next iterate. So this works, and of course the method hinges on the form of the P matrix: the auxiliary system must be, in some sense, simple. Right. Let's make contact with what we had previously, and rewrite the Jacobi, Seidel, and successive over-relaxation algorithms in this canonical form. For Jacobi it's reasonably simple. We have the Jacobi iteration, where the diagonal part talks to the current iterate and the off-diagonal part only talks to the previous iterate. We add and subtract the part which is necessary to complete the matrix A. Then we have D + L + U multiplied by the previous iterate — that's A — and we essentially have the second term on the left-hand side of the canonical form, A times the previous iterate. What's left is D times the difference between the current iterate and the previous one. So at the end of the day, we have the P matrix being the diagonal part of A, and tau is just 1. For Seidel's iteration, things are similar.
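The recipe just described — compute the residual, solve the auxiliary system with matrix P, update — can be sketched in a few lines of NumPy. This is a minimal illustration, with an assumed small test system (for a dense P we just call a direct solver; in practice P is chosen so this step is cheap):

```python
import numpy as np

def canonical_step(A, b, x, P, tau=1.0):
    """One step of the canonical single-step iteration:
    P (x_new - x) / tau + A x = b,
    i.e. solve P z = r for the auxiliary quantity z, then x_new = x + tau*z."""
    r = b - A @ x              # residual of the current iterate
    z = np.linalg.solve(P, r)  # auxiliary system with matrix P
    return x + tau * z

# Illustrative system (not from the lecture).
A = np.array([[4.0, -1.0], [-1.0, 4.0]])
b = np.array([3.0, 3.0])
P = np.diag(np.diag(A))        # P = D gives the Jacobi iteration

x = np.zeros(2)
for _ in range(50):
    x = canonical_step(A, b, x, P)
print(x)                       # approaches the exact solution [1., 1.]
```

With P equal to the identity the same function performs the explicit iteration; swapping in a different P changes the method without touching the driver loop, which is exactly the appeal of the canonical form.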
Again, we add and subtract the part needed to complete the matrix. What we find is that the P matrix is the sum of the diagonal and lower triangular parts of A, and tau is again one. For SOR, we play the same game, and what we get at the end of it is that P is the diagonal part plus omega times L — so everything is omega dependent — and tau equals omega. So that's an example where tau is not one. Note that all three of these methods are stationary: so far we haven't used the freedom of choosing the tau parameter in a step-dependent way. We can, but we haven't so far. Right. So we have this canonical form, and we understand how the ad hoc splittings can be phrased in it. Let's briefly discuss convergence properties. That's one of the nice things about a general framework: you can prove things once, or discuss convergence just once. The first thing is, as I already mentioned, that if iterations in this canonical form converge, then they converge to the solution of the original system. To analyze the convergence further, it's useful to define the distance to the solution — the error, or deviation. Then the iterations can be written in terms of a transition matrix S, which relates the current deviation to the previous one. This transition matrix is expressed in terms of the P matrix of the iteration and the original matrix A: S = I − τ P⁻¹ A. So everything — well, not everything, but convergence — depends on the properties of S, or more specifically on the spectral properties of S. Again, I will only state the theorem; I will not prove it. It states that if the iteration is stationary, so that the transition matrix is iteration independent, then the algorithm converges if and only if the spectral radius of this S is less than one. You can prove this; it's tedious, even if reasonably straightforward.
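The convergence criterion can be checked numerically: for each method we form the transition matrix S = I − τ P⁻¹A with the (P, τ) pairs derived above and look at its spectral radius. A minimal sketch, with an assumed illustrative matrix and an assumed value of omega:

```python
import numpy as np

# Illustrative symmetric, diagonally dominant matrix (not from the lecture).
A = np.array([[4.0, -1.0,  0.0],
              [-1.0, 4.0, -1.0],
              [0.0, -1.0,  4.0]])
L = np.tril(A, k=-1)
D = np.diag(np.diag(A))

def spectral_radius(S):
    return max(abs(np.linalg.eigvals(S)))

# Transition matrix S = I - tau * P^{-1} A for each choice of (P, tau).
omega = 1.1
radii = {}
for name, P, tau in [("Jacobi", D, 1.0),
                     ("Seidel", D + L, 1.0),
                     ("SOR",    D + omega * L, omega)]:
    S = np.eye(3) - tau * np.linalg.solve(P, A)
    radii[name] = spectral_radius(S)

print(radii)   # all radii are below 1 here, so each iteration converges
```

For this particular matrix Seidel's radius comes out smaller than Jacobi's, i.e. Seidel converges faster — consistent with the general expectation, though not a universal law.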
But then of course the question is: what is this S, and how do I choose it to optimize convergence? What does it mean to optimize convergence? I want the convergence to be fast, so I want to reach a predefined accuracy in the minimum number of steps. To engineer a method which converges in the best way, I can tune my tau, and I can select my P matrix in certain ways. I will only sketch things here; that will just give you a flavor of how you can utilize this canonical form of iteration. [MUSIC]
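As one flavor of tuning tau: for the explicit method (P = I) with a symmetric positive definite A, a standard result (for the stationary Richardson iteration; it is not derived in this lecture) is that the stationary choice τ = 2/(λ_min + λ_max) minimizes the spectral radius of S = I − τA. A minimal sketch, with an assumed illustrative system:

```python
import numpy as np

# Illustrative symmetric positive definite system (not from the lecture).
A = np.array([[4.0, -1.0,  0.0],
              [-1.0, 4.0, -1.0],
              [0.0, -1.0,  4.0]])
b = np.array([1.0, 2.0, 1.0])

# Optimal stationary tau for the explicit iteration with SPD A:
# tau = 2 / (lambda_min + lambda_max), a standard Richardson-iteration result.
lams = np.linalg.eigvalsh(A)
tau = 2.0 / (lams[0] + lams[-1])

x = np.zeros(3)
for _ in range(100):
    x = x + tau * (b - A @ x)   # explicit canonical step, P = I

print(np.allclose(A @ x, b))    # the iterates have converged to the solution
```

With this choice the spectral radius becomes (κ − 1)/(κ + 1), where κ = λ_max/λ_min is the condition number of A — so well-conditioned systems converge quickly even with this simplest explicit scheme.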