In the last lesson, you learned that we could simplify the general weighted total least squares problem to a specific case, where the standard deviations of the errors on the x variable and the y variable are assumed to be proportional to each other. When we did this, it led to a solution that could be found using a very simple quadratic equation. Here, I have reproduced that equation for you, for convenient reference. In order to implement this method of total-capacity estimation, we simply keep track of the recursive quantities C1, C2, and C3, and then update the solution using this quadratic equation whenever a new data pair becomes available. Of course, because we're thinking about a recursive method, we also need a way to initialize the recursive parameter values. The good news is that we can do so using exactly the same method that we looked at for the weighted least squares solution: we create a synthetic initial measurement x0 equal to one, as if we were performing an imaginary experiment in which we completely discharged the battery cell from full capacity down to zero, and we set y0 equal to the nominal capacity of the cell provided by the manufacturer, which would be the number of ampere hours discharged during that process. Then we set the variance of y equal to a value that represents the uncertainty of the true total capacity of a particular cell versus the nominal capacity specified by the manufacturer at the beginning of life. That is, we set C1 equal to one divided by the variance of y, we set C2 equal to the nominal capacity divided by the variance of y, and we set C3 equal to the nominal capacity squared divided by the variance of y. So now we can initialize the algorithm, and we can execute the algorithm step by step. 
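Since the lesson's own slides aren't reproduced here, the sketch below is one plausible rendering of the initialization and update just described. The quadratic used is the standard total-least-squares solution for a line through the origin when the x and y uncertainties are proportional, written in terms of the running sums C1 = Σx²/σ², C2 = Σxy/σ², C3 = Σy²/σ²; the function names are illustrative, not from the course.

```python
import math

def init_tls(q_nom, var_y):
    # Synthetic initial measurement: x0 = 1 (a full discharge from 100% to
    # 0% state of charge) and y0 = q_nom (the manufacturer's nominal
    # capacity, in ampere hours), with variance var_y on y0.
    c1 = 1.0 / var_y           # x0^2 / var_y
    c2 = q_nom / var_y         # x0 * y0 / var_y
    c3 = q_nom ** 2 / var_y    # y0^2 / var_y
    return c1, c2, c3

def update_tls(c1, c2, c3, x, y, var):
    # Fold the new data pair (x = change in state of charge,
    # y = accumulated ampere hours) into the three running summations.
    c1 += x * x / var
    c2 += x * y / var
    c3 += y * y / var
    # Solve the quadratic C2*Q^2 + (C1 - C3)*Q - C2 = 0 for the total
    # capacity, keeping the positive root.
    q_hat = (-(c1 - c3) + math.sqrt((c1 - c3) ** 2 + 4.0 * c2 ** 2)) / (2.0 * c2)
    return c1, c2, c3, q_hat
```

With noise-free data lying exactly on y = Q·x, each update returns Q exactly, which is a quick sanity check that the quadratic root is the right one.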
Remember, last week we also talked about computing confidence intervals for our estimates, and to do this, we needed something called the Hessian of the cost function, which is the second derivative of the cost function with respect to the optimization variable q hat. Here I share with you the equation of the Hessian for this particular cost function. Notice that it is written in terms of the recursive parameter values that we have already computed, and so it's actually very easy to compute this Hessian on a microcontroller. After we know the Hessian, we can compute the one-sigma bounds as the square root of two divided by the Hessian, where the Hessian is evaluated at the estimate of the total capacity. Last week we also discussed a fading-memory concept, where we place more emphasis on recent measurements and less emphasis on distant past measurements, in order to allow our estimate of total capacity to adapt more readily over time as the cell ages. We can add fading memory to the total least squares cost function in exactly the same way that you saw for the weighted least squares and weighted total least squares methods last week. When we go through all the math and find the solution, it turns out to be this quadratic equation here, using this new set of recursive parameters. The major difference between the fading-memory solution and the standard solution is the insertion of the fading-memory factor gamma in each one of the recursive calculations, where remember that gamma is a number greater than zero and less than or equal to one, and usually quite close to one. We initialize the fading-memory solution in exactly the same way that we initialize the standard recursive solution, and we can compute the Hessian of the fading-memory solution in the same way as before as well. 
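The transcript refers to the Hessian equation without reproducing it, so the expression below is a reconstruction, not the course's slide: it is the second derivative of the proportional-uncertainty cost J(Q) = (C3 − 2Q·C2 + Q²·C1)/(1 + Q²), evaluated at the optimum q hat, where the first-order term vanishes. Treat it as a sketch under that assumed cost; the one-sigma bound follows the sqrt(2/Hessian) rule stated above.

```python
import math

def tls_hessian(c1, c2, c3, q_hat):
    # Second derivative of J(Q) = (c3 - 2*Q*c2 + Q^2*c1)/(1 + Q^2),
    # evaluated at the optimizing Q = q_hat (the first-derivative term
    # drops out there), written in the recursive parameters c1, c2, c3.
    return 2.0 * (2.0 * q_hat * c2 + c1 - c3) / (1.0 + q_hat ** 2) ** 2

def one_sigma_bound(c1, c2, c3, q_hat):
    # One-sigma confidence bound on the capacity estimate:
    # sqrt(2 / Hessian), with the Hessian evaluated at q_hat.
    return math.sqrt(2.0 / tls_hessian(c1, c2, c3, q_hat))
```

Because the Hessian is positive at a minimum of the cost, the square root is well defined, and the bound shrinks as more informative data pairs accumulate into the sums.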
So, to summarize this very quick lesson: the total least squares cost function that we've seen so far this week shares the nice properties of the weighted least squares solution. First, it gives us a closed-form solution for q hat, which we desperately desire. This means that we don't require iteration every timestep, such as a Newton-Raphson search, in order to find the solution. Second, the method gives us a result that can be computed very easily in a recursive manner. We simply need to update three running summations, C1, C2, and C3, every time a new data pair becomes available. That is, whenever we get a new data point, comprising a difference in state of charge, x, and accumulated ampere hours, y, we update the running summations and we compute an updated total-capacity estimate. Third, fading memory is easily added to the solution, and this fading memory is also recursive and can be computed efficiently on an embedded processor, such as a battery management system, using a finite, fixed amount of memory. The memory does not need to grow the way it did for the weighted total least squares solution. So we have a fantastic method, but it is maybe a little bit too restrictive. It does not allow the uncertainties of the x and y variables to be arbitrary; it fixes those uncertainties to be proportional to each other, as you've already seen. So, our next task is to look for an approximate total least squares solution that retains the nice computational properties of the total least squares solution but allows a more arbitrary relationship between the uncertainties of x and y. That's what we'll proceed to do in the next lessons.
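As a sketch of the fading-memory variant summarized above: the only change from the standard recursion is that each running sum is scaled by gamma before the new data pair is added, so the state remains just three numbers regardless of how much data has been seen. The gamma default of 0.995 is illustrative only; the lesson says merely that gamma is usually close to one.

```python
import math

def update_tls_fading(c1, c2, c3, x, y, var, gamma=0.995):
    # Fading memory: down-weight the accumulated sums by gamma
    # (0 < gamma <= 1, typically close to 1) before adding the newest
    # data pair, so older measurements gradually lose influence.
    c1 = gamma * c1 + x * x / var
    c2 = gamma * c2 + x * y / var
    c3 = gamma * c3 + y * y / var
    # Same quadratic as the standard solution, with the faded sums.
    q_hat = (-(c1 - c3) + math.sqrt((c1 - c3) ** 2 + 4.0 * c2 ** 2)) / (2.0 * c2)
    return c1, c2, c3, q_hat
```

Note that only three scalars are carried between timesteps, which is the finite, fixed memory footprint that makes the method attractive for a battery management system.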