Forecast accuracy. How do you know how good a forecast is? You have to measure its accuracy, and you can measure how far away the forecast is from the actual demand; that distance is what we define as accuracy. But you also have to consider bias, which means asking whether you have a tendency to over-forecast or under-forecast. You don't want to be too biased in either direction, because bias degrades your ability to properly forecast the future just as much as inaccuracy does.

The simplest forecast accuracy measure is the mean error. We take our demand, subtract the forecast from it, and then average these errors over all of the time periods we have forecasted so far. That gives us our mean error. Both over-forecasting and under-forecasting are bad. If you over-forecast, you have more product than you really needed, so you're going to have stock left over. If you under-forecast, you're not going to have enough product; customers are going to come to your store, find nothing to buy, and be angry.

Next we have the mean absolute percent error. Unlike the mean error, which is more a measure of bias, this one actually measures accuracy. We start with our demand minus our forecast. The problem is that we are trying to compare across products, so we divide by demand to get a percentage, and we take the absolute value so that pluses and minuses don't cancel each other out. Then all we need to do is sum over the periods and divide by how many periods we have. And there we have it: MAPE, the mean absolute percent error.

A very important forecasting accuracy measure is the mean squared error, or MSE. What we are trying to achieve with it is to give more weight to large errors. Large errors are the ones we want to avoid at all costs, because small errors we can plan for.
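The mean error and MAPE described above can be sketched in a few lines of Python. The demand and forecast numbers here are made-up example data, purely for illustration.

```python
# Sketch of mean error (a bias measure) and MAPE (an accuracy measure).
# The demand and forecast values below are hypothetical example data.

demand   = [100, 120, 90, 110]
forecast = [105, 115, 95, 100]
n = len(demand)

# Mean error: average of (demand - forecast).
# The sign tells us the direction of the bias.
mean_error = sum(d - f for d, f in zip(demand, forecast)) / n

# MAPE: average of |demand - forecast| / demand, as a percentage.
# Dividing by demand lets us compare across products;
# the absolute value stops pluses and minuses from canceling.
mape = 100 * sum(abs(d - f) / d for d, f in zip(demand, forecast)) / n

print(mean_error)  # 1.25 -> positive: on average we under-forecast slightly
print(round(mape, 2))
```

Note that the mean error here is small even though every single period was missed, which is exactly why it measures bias rather than accuracy.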
Large errors are going to surprise us and make our life and planning much more difficult. So we take our demand minus our forecast, square it, and then take the average of those squared errors over all of the forecasted periods: the mean squared error. Because we are squaring the error term, a large error becomes much larger when we multiply it by itself. Small errors remain small, but large errors become huge, and those huge errors significantly affect our mean squared error. Therefore we are much more sensitive to those large errors.

So which forecasting accuracy measures should we look at? The short answer is all of them. Each one has something different to tell us, and we should consider each one for a different reason. In the end, we want to pick the best forecast, and only by looking at each candidate forecast through these various measures of accuracy will we be able to obtain the best forecast possible.
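A small sketch of the MSE, again with made-up numbers, shows how a single large miss dominates the measure:

```python
# Sketch of mean squared error (MSE); all values are hypothetical.
# Three small misses of 5 units, plus one large miss of 50 units.

demand   = [100, 120, 90, 110]
forecast = [105, 115, 95, 60]
n = len(demand)

squared_errors = [(d - f) ** 2 for d, f in zip(demand, forecast)]
mse = sum(squared_errors) / n

print(squared_errors)  # [25, 25, 25, 2500]: the one large error dwarfs the rest
print(mse)             # 643.75
```

The last period's error is only 10 times larger than the others before squaring, but 100 times larger after, which is what makes the MSE so sensitive to large errors.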