The primary result of the topics you learned about last week was the derivation of the linear Kalman filter. You learned how each one of the steps was derived from first principles, and you also gained some insight into what each one of these steps was doing. This week we continue with the linear Kalman filter, and our primary objective is to gain more experience with this filter, to learn how it operates, and to be able to implement the filter in Octave and evaluate results. In order to get started, I review the steps of the Kalman filter in graphical form on this slide. There is really no new information in this picture, but I believe that this organization can be helpful for understanding how the Kalman filter will be implemented in code. So, first, in the top left part of the figure, we initialize the state estimate and the covariance matrix. We provide the filter with an estimate of the state at time zero and the uncertainty of that estimate, also at time zero, and we jump into step 1a. At this point, we have enough information available to compute the state prediction for time one, based on the state estimate at time zero and the known input at time zero. After doing that, we move on to step 1b, where we compute the error covariance matrix for the prediction of the state, and we go on to step 1c, where we compute an estimate of the output, including a contribution that has to do with the measured input at the present time. So, this completes all of the prediction sub-steps, and if you look at them, you find that we have all the information we need, just in time, to make all of the computations that we need to do. Then we go on to step 2a, where we compute the Kalman gain matrix. In step 2b, we use the measurement of the system output, the prediction of the state, the innovation, and the gain matrix in order to produce the state estimate. Finally, we go to step 2c, where we compute the covariance matrix of the state estimation error. 
Everything I have described has been done for time step k equals one. So now what we do is we wait. We do nothing until the next sample interval, and we increment k so that k is now equal to two. For time k equals two, we repeat all three sub-steps of step one and all three sub-steps of step two, and we increment k once again, so that k is now equal to three, and we do this over and over and over again until it's time to turn off the Kalman filter. I claim that these steps are quite simple to implement on a digital computer, especially if we happen to have some libraries or code available to us that can compute the matrix multiplications for us. But even if such a library doesn't exist, it's not that difficult, but maybe a little tedious, to develop our own code for computing matrix multiplies and so forth. Later on this week, I will share with you an implementation of a Kalman filter in Octave code, and I think at that point in time you will agree that it's actually pretty straightforward to implement a Kalman filter that way. To gain insight and for practical purposes, we'd also like to be able to apply the Kalman filter to the problem of estimating the state of charge of a battery cell. In order to do that, we first require a model of a battery cell, and the great news is we already have that model. We spent a lot of time in the last course, the second course of this specialization, developing a model that describes how a battery cell works. The bad news is that this model is nonlinear, and so we cannot directly apply the linear Kalman filter to the model that we developed in the second course. Next week we will solve that. 
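The six sub-steps just described can be sketched in code. The course implements the filter in Octave; as a language-neutral illustration, here is a Python sketch of one iteration for a scalar (one-state) system, with illustrative variable names. The model matrices A, B, C, D and the noise covariances would come from whatever system you are estimating:

```python
# One iteration of the linear Kalman filter for a scalar (one-state) system.
# Names are illustrative: xhat is the state estimate, Sigma its error
# covariance, SigmaW/SigmaV the process- and sensor-noise covariances.

def kf_iteration(A, B, C, D, SigmaW, SigmaV, xhat, Sigma, u_prev, u, y):
    # Step 1a: state prediction from the prior estimate and the known input
    xhat_pred = A * xhat + B * u_prev
    # Step 1b: error covariance of the state prediction
    Sigma_pred = A * Sigma * A + SigmaW
    # Step 1c: output prediction, including the measured input at present time
    yhat = C * xhat_pred + D * u
    # Step 2a: Kalman gain
    L = Sigma_pred * C / (C * Sigma_pred * C + SigmaV)
    # Step 2b: state-estimate update using the innovation (y - yhat)
    xhat = xhat_pred + L * (y - yhat)
    # Step 2c: covariance of the state-estimation error
    Sigma = (1 - L * C) * Sigma_pred
    return xhat, Sigma
```

In a real application, this function would be called once per sample interval, incrementing k each time, until the filter is switched off. For a vector state, the scalar products become matrix multiplications and the division becomes a matrix inverse.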
Next week, we will look at a different form of Kalman filter, known as the extended Kalman filter, that is able to work with the battery model we developed in the second course, and then the following week we will develop yet another type of Kalman filter, known as the sigma-point Kalman filter, that can also use that battery model. But there's a lot to be gained by looking at the linear Kalman filter itself and understanding how it works. So, we would really like to implement some kind of state-of-charge estimator, and to do that we're going to develop a very crude, simple battery cell model to help us understand the Kalman filter better this week. So, to demonstrate the Kalman filter steps, we will first develop and then use a very simple but not especially accurate cell model, described by the set of equations on this slide. The first equation is the state equation of this simplified model, and it should look very familiar. It's the state-of-charge equation that you already know about from the enhanced self-correcting cell model that you learned about in the previous course. This equation models the next state of charge as one times the present state of charge, minus the present input current scaled by capacity and by a factor that converts ampere hours into ampere seconds so that the units work out correctly. In this really simple cell model, we've ignored Coulombic efficiency, although it would be pretty straightforward to add it back in, and we also don't have any states to describe diffusion voltages or hysteresis. The second equation of this simplified model is the output equation, which computes an estimate of cell voltage. This crude approximation of voltage relies on understanding that the major part of cell voltage is open-circuit voltage, and that's a nonlinear function of state of charge. 
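In code, the state-of-charge equation is a single line. Here is a minimal Python sketch, assuming a one-second sample period and a capacity Q given in ampere hours (the 10000/3600 ≈ 2.8 Ah value used in the example below matches the capacity chosen later in this lesson):

```python
def soc_update(z, i, Q, dt=1.0):
    """Advance state of charge one sample: z[k+1] = z[k] - dt/(3600*Q) * i[k].

    z  : present state of charge (0..1)
    i  : present current in amperes (positive = discharge)
    Q  : cell capacity in ampere hours; the 3600 converts Ah to As
    dt : sample period in seconds
    """
    return z - dt / (3600.0 * Q) * i

# Example: discharging a 10000/3600 Ah cell at a constant 1 A for
# 10000 seconds removes exactly one full capacity's worth of charge,
# taking the state of charge from 1.0 down to 0.0.
z = 1.0
for _ in range(10000):
    z = soc_update(z, 1.0, 10000.0 / 3600.0)
```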
I've plotted some example open-circuit voltage versus state-of-charge curves in the figure on this slide, and you can see that for these different lithium-ion chemistries, the relationships are somewhat different from each other, but they can be approximated, not very well but at least partially, by the dashed black line that you can see there, which is a straight line. So, when we study nonlinear Kalman filters in future weeks, you will see how to use the actual nonlinear OCV relationship instead, but for now we're going to use the straight-line approximate OCV for the examples that we look at here. You can verify pretty easily that the equation of this dashed black line is 3.5 plus 0.7 times the state of charge. If you look at the voltage equation of this model, you can see now why everything is there. The voltage is computed as 3.5 plus 0.7 times state of charge, minus the ohmic voltage drop across the cell, which is R0 times the input current. So, we have really, really simplified the cell model from the one that you saw in the previous course. We have linearized the OCV relationship, we've omitted any description of diffusion voltages and diffusion-resistor currents, and we've also omitted any description of hysteresis. But now we have an equation that we can use with Kalman filters. Actually, we don't. It's not a linear model yet. It looks linear, but it isn't. So, let's think about why. The model is not yet linear because the output equation has a constant value of 3.5 in it. The form for our linear state-space model says that the output equation must be equal to some C matrix times the state, plus some D matrix times the input. There is no provision for a constant like 3.5 in the output equation. So, in fact, our output equation is not linear. Technically, it's affine. It's a straight line, but it's a straight line that does not pass through zero, and so it does not meet the requirements of a linear dynamic system. 
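As a quick sanity check on the straight-line fit, the approximation runs from 3.5 V at 0% state of charge to 4.2 V at 100%, a plausible voltage span for a lithium-ion cell even though, as noted, it is only a rough fit to the true OCV curves:

```python
def ocv_approx(z):
    # Straight-line OCV approximation from the slide: a crude fit across
    # chemistries, not an accurate model of any one cell.
    return 3.5 + 0.7 * z

print(ocv_approx(0.0))  # 3.5  (empty cell)
print(ocv_approx(1.0))  # 4.2  (full cell)
```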
So, what do we do? What we're going to do is make a synthetic measurement by taking the actual physical measurement and subtracting 3.5 from it. So y_k is a synthetic or recomputed measurement based on the actual voltage measurement from the battery cell, and then we use y_k in the model instead of cell voltage. Now you can see that the output equation of the model is y_k equals 0.7 times the state minus R_0 times i, and this is a perfectly valid linear state-space equation form. So, we've fixed our problem. To be clear, this is a state-space model where the state x is what we've called z here and the input u is what we've called i here. For the sake of example, I'm going to use some simplified constants. The value of Q will be 10,000 over 3,600, which is about 2.8 amp hours. It's a pretty reasonable number, and it makes the constant multiplying input current in the state equation equal to negative 1 over 10,000, which is a little bit nicer to write than some other possibilities. I also choose the output resistance to be equal to 10 milliohms, which makes the numbers a little bit nicer, too. So, that gives us an overall state description where the A matrix is 1, because the new state equals 1 times the prior state. The B matrix is negative 1 times 10 to the negative 4, because the new state has this negative 1 times 10 to the minus 4 times the input current component to it. The C matrix is 0.7, because the synthetic measurement is 0.7 times the state plus something else, and the D matrix is negative 0.01, because the synthetic measurement is negative 0.01 times the input current plus other things. So, those are the state-space A, B, C, D matrices. We also need to choose covariance matrices for the process noise and sensor noise, and here I choose these values. 
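Putting those choices together, a Python sketch of the model description might look like the following (the process- and sensor-noise covariance values are shown on the slide rather than read out here, so they are omitted from this snippet):

```python
# Scalar state-space description of the simplified cell model,
# with state x = state of charge (z) and input u = current (i).
A = 1.0       # new SOC = 1 * prior SOC + ...
B = -1.0e-4   # ... - 1e-4 * current   (dt/(3600*Q), dt = 1 s, Q = 10000/3600 Ah)
C = 0.7       # synthetic measurement y = 0.7 * SOC + ...
D = -0.01     # ... - R0 * current, with R0 = 10 milliohms

def synthetic_measurement(v_measured):
    # Subtract the 3.5 V affine offset so that the output equation
    # becomes a valid linear form: y = C*x + D*u.
    return v_measured - 3.5

# Example: a cell at 50% SOC delivering 1 A reads about
# 3.5 + 0.7*0.5 - 0.01*1 = 3.84 V, giving a synthetic measurement of 0.34.
y = synthetic_measurement(3.84)
```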
Finally, we need to choose the initial state and its uncertainty, and here I'm going to set the initial state exactly equal to 0.5, and I will set the estimate also equal to 0.5, meaning that I have perfect knowledge of the state, and so my covariance is 0 in this example. On the next few slides, we will work through the numeric details of the Kalman filter by hand so that you can see it operating. This is really tedious to do by hand, but I think it really helps with understanding, and when we finish with this example, you are going to be just so happy to let the computer do all of these calculations for you. But I also believe that even though it's tedious, it's of great value, especially if you find the equations of the Kalman filter a little bit mysterious. By working through the equations step by step, you can see: I put this number in here and I put that number in here and I compute this value, and this value is used in the next equation, and you can keep on going through, and you can see how all of the steps of the filter work together and how we compute some values just in time to be used by the next steps, and so forth. So, I'm not going to go through all of the steps and narrate them for you. I've provided these steps to you on this slide, hoping that you will take some time with a piece of paper and a pencil, and that you will go through and check my math to make sure that everything is right and that you understand how it's done, but let me explain a little bit of what's happening. Every iteration of the Kalman filter has three prediction steps, shaded blue, and three correction steps, shaded green. On the left side of each one of these boxes, I put the actual equation that we are implementing. On the right side, I put the numeric values when I substitute everything in. 
So, doing the first one for example: on the left side, it says the prediction at time 1 is equal to A at time 0 multiplying the estimate at time 0, plus B at time 0 multiplying the input at time 0. On the right-hand side, we translate that by putting in actual numbers, and you say, okay, the prediction at time 1 is A, which is 1, times the estimate at time 0, 0.50, plus B at time 0, negative 10 to the negative 4, times the input at time 0, which is 1. We compute that and find that the value is 0.4999. We keep on doing that, step after step after step, and we compute all of the quantities required in this iteration of the Kalman filter. At the end of iteration number 1, there are two outputs that we care about: one is the state-of-charge estimate from step 2b, and the other is the error bounds on that estimate from step 2c, and this is how we combine them. We take x-hat one plus, 49.99 percent, and, from the covariance, plus or minus 3 times the square root of 9.9995 times 10 to the minus 6. Overall, the output of our algorithm at time 1 is that our best guess of the state of charge is 49.99 percent, plus or minus 0.95 percent. I include one more iteration here so that you can work through these steps on your own. Again, I'm not going to narrate all of the details for these steps, but I think there's some value in doing at least two iterations on your own to see how it works, and the output of the second iteration says that the state of charge is equal to 49.98 percent, plus or minus 1.3 percent, at this point in time. If we were to keep going with this example, we would get very tired very quickly, but we would notice an interesting result: the uncertainty of the state prediction converges to a constant value over time, and the uncertainty of the state estimate also converges to a constant value over time. They converge to different values. 
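You can also check the slide's arithmetic mechanically. Below is a Python sketch of iteration 1, assuming a 1 A discharge current and, for illustration only, a noise-free measurement. The process- and sensor-noise covariances are not read out in this lecture; the values 1e-5 and 0.1 used here are a reconstruction, inferred because they reproduce the plus-or-minus 0.95 percent and 1.3 percent bounds quoted for iterations 1 and 2, so treat them as assumptions:

```python
import math

# Model constants from earlier in the lesson, plus reconstructed covariances
A, B, C, D = 1.0, -1.0e-4, 0.7, -0.01
SigmaW, SigmaV = 1.0e-5, 0.1   # assumed: inferred from the quoted error bounds

xhat, Sigma = 0.5, 0.0         # perfect initial knowledge, so covariance is 0
u = 1.0                        # 1 A discharge current (illustrative)

# Steps 1a and 1b: predict the state and its error covariance
xhat_pred = A * xhat + B * u                 # 0.4999
Sigma_pred = A * Sigma * A + SigmaW          # 1e-5

# Step 1c: predict the (synthetic) output
yhat = C * xhat_pred + D * u

# Step 2a: Kalman gain
L = Sigma_pred * C / (C * Sigma_pred * C + SigmaV)

# Steps 2b and 2c: update with a measurement (taken noise-free here,
# so the innovation is zero and the estimate equals the prediction)
y = yhat
xhat = xhat_pred + L * (y - yhat)            # 0.4999, i.e. 49.99 percent
Sigma = (1.0 - L * C) * Sigma_pred           # about 9.9995e-6

bound = 3.0 * math.sqrt(Sigma)               # about 0.0095, i.e. +/- 0.95%
```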
The covariance, or the uncertainty, increases during the prediction step because we have not made any measurements recently that might help us, and then the uncertainty is reduced when we make a measurement and perform an update based on it. So, the covariance or uncertainty of the prediction is always higher than the uncertainty of the estimate. In steady state, the covariance oscillates between a prediction-covariance steady-state value and an estimate-covariance steady-state value. Again, the estimation error bounds are computed as plus or minus 3 times the square root of this covariance, for a better-than-99-percent assurance of the estimate's accuracy. On this slide, I'm going to share with you some figures that demonstrate an example of the Kalman filter operating, and soon you are going to learn how to write Octave code to implement and evaluate this example, but for now I want to focus on understanding what is happening. This is code I implemented in Octave, and I will share it with you later on. So, let's look at the figure on the left first. The solid blue line is the actual true state of the battery cell versus time. The green dotted line is the Kalman filter estimate of the state versus time, and notice that the filter estimate does not converge to the true value. The error does not go to 0, and this is going to be true for any kind of Kalman filter, because the actual system is constantly being excited, or exercised, by process noise, and the measurements we make are going to constantly have additive sensor noise on them. So, it's just fundamentally not possible for the filter to converge exactly to the state. Instead, we hope to converge to a region that's close to the state of the system, and this is illustrated by the red dash-dot line in the same figure, which is the estimation error. Remember that error is equal to truth minus estimate. 
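The oscillation between the two steady-state covariances is easy to reproduce, because the covariance equations do not depend on the measurements at all: you can iterate steps 1b, 2a, and 2c by themselves. A Python sketch, using illustrative noise covariances of 1e-5 and 0.1 (these specific values are an assumption, not quoted in the lecture):

```python
# Iterate only the covariance recursion of the scalar Kalman filter
# until it converges to its steady-state values.
A, C = 1.0, 0.7
SigmaW, SigmaV = 1.0e-5, 0.1   # illustrative noise covariances (assumed)

Sigma = 0.0                     # perfect initial knowledge
for _ in range(5000):
    Sigma_pred = A * Sigma * A + SigmaW            # step 1b: uncertainty grows
    L = Sigma_pred * C / (C * Sigma_pred * C + SigmaV)
    Sigma = (1.0 - L * C) * Sigma_pred             # step 2c: uncertainty shrinks

# After convergence, the covariance oscillates between Sigma_pred and Sigma.
# With A = 1, the prediction covariance exceeds the estimate covariance by
# exactly SigmaW in steady state, since step 1b simply adds SigmaW.
```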
So, you can see that the estimation error is not diverging, but nor is it converging to 0; instead, it's sort of randomly evolving over time within some neighborhood of 0, and that's exactly what we would expect from an operating Kalman filter. Now let's look at the figure on the right. You can see the prediction error covariance and the estimation error covariance plotted versus time, and as expected, at every point in time, the prediction uncertainty is bigger than the estimation uncertainty. You can see that over time both of these converge to steady-state values, and these covariances can be used to provide error bounds on the filter's estimate. If you study this in detail, you'll notice that the error bounds are really quite large. This is because the value that we chose for the sensor-noise covariance is itself quite large. In a real application, you would hope that your sensor noise would be far, far smaller than what we're adding here, and using a nonlinear Kalman filter for state-of-charge estimation can give us some really good results very often in practice. But even for this example, even with the large sensor noise that I've added, we know for certain that this is the best that we can do, because we know that the Kalman filter is the minimum-mean-squared-error estimator of the state: given the measurements that we've made and the noises that exist, this is the best possible estimate that we could have. So, to summarize this lesson, remember that the Kalman filter implements a minimum-mean-squared-error optimal state estimator for linear systems, if certain assumptions regarding the noises and the system are met. 
The Kalman filter equations that we developed last week naturally form a recursive algorithm for estimating the value of the state, and you've seen that even though the Kalman filter equations work only for a linear system, we can sort of apply them to a nonlinear system by linearizing the system dynamics in order to get an approximate result. We did this in a pretty simple way in this lesson to demonstrate a few things, but in the next two weeks, you'll learn better ways of doing the linearization, for much better results. You saw examples using the simplified battery model that demonstrate the kinds of results you could expect from a Kalman filter. You saw that a Kalman filter produces an estimate of the state, which is exactly what we want. You also saw that the Kalman filter produces uncertainty bounds, or confidence bounds, or error bounds, on that estimate, which is a big advantage. We can use these error bounds or confidence bounds on the state-of-charge estimate produced by the filter when we use those estimates in other algorithms to compute power limits and the energy in the battery pack and so forth, as you will learn about in the fifth course of the specialization. So, that concludes this lesson. Let's move on to other ways of visualizing the Kalman filter.