So here's the first one, a regression model. What a regression model does is work on data. It's not deterministic; it's based on a set of data, and we use that data to reverse engineer a realistic description of a process. Here's a regression model being applied to a set of data capturing the price of a diamond and its weight. If you have a look at the graph on this slide, you will see carats, in other words weight, along the x-axis, and the price of the diamond on the vertical y-axis. Each of those dots on the graph is a diamond that has been weighed and priced. And you can see that these prices and weights fall on something that looks approximately like a straight line; the relationship is approximately linear.

What a regression model does is take the data as an input and find the best-fitting line, in this instance, through the data. I've written down the formula for that best-fitting line, and it is the blue line that you can see superimposed on the graphic. Around the blue line I've plotted a gray band, and that gray band is termed a prediction interval. This is the key difference between a probabilistic and a deterministic model: by using a probabilistic model, we get measures of uncertainty on the outputs. And you can use that gray band to create a prediction interval for what we term a new observation. Suppose you came to me with a diamond drawn from the same population that this regression was run against; let's say it weighs 0.25 of a carat. Then I can use this graph to predict the price of that diamond, and furthermore, I can use the gray band around the line to give a prediction interval that captures the range of uncertainty. And clearly you want to be able to do that, because when you look at the points, they don't lie exactly on the straight line.
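The fitted line and prediction interval described above can be sketched in a few lines of code. This is a minimal illustration on synthetic, made-up diamond data (the slope, intercept, and noise level here are invented for the example, not the values from the slide), using an ordinary least-squares fit and an approximate 95% prediction interval that ignores the small extra uncertainty in the fitted coefficients:

```python
import numpy as np

# Hypothetical diamond data: price is roughly linear in carat weight,
# plus some noise (all numbers invented for illustration).
rng = np.random.default_rng(0)
carats = rng.uniform(0.15, 0.35, size=50)
prices = -260.0 + 3720.0 * carats + rng.normal(0.0, 30.0, size=50)

# Fit the best-fitting line: price = b0 + b1 * carat
b1, b0 = np.polyfit(carats, prices, deg=1)

# Residual standard deviation measures the noise around the line.
resid = prices - (b0 + b1 * carats)
s = np.sqrt(np.sum(resid**2) / (len(carats) - 2))

# Approximate 95% prediction interval for a new 0.25-carat diamond.
x_new = 0.25
y_hat = b0 + b1 * x_new
lo, hi = y_hat - 1.96 * s, y_hat + 1.96 * s
print(f"predicted price: {y_hat:.0f}, 95% PI: ({lo:.0f}, {hi:.0f})")
```

The interval `(lo, hi)` plays the role of the gray band at the point 0.25 on the x-axis: a range, rather than a single best guess, for the price of the new diamond.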
They're pretty close, but they're not exactly on it, so there's some noise in the system, and we're able to measure that noise and incorporate it into our prediction interval and forecast. So that's what a regression model does for you. And as I said before, this is certainly one of the techniques most frequently used in business analytics.

So to summarize, regression models use data, and they use that data to estimate the relationship between the mean, or average, value of an outcome, let's call that Y, and a predictor variable X. Going back to the diamonds example, what our regression model is going to do is give us the expected price of a diamond for any given weight. The intrinsic variation in the raw data, and by that I mean the fact that those points were not lying exactly along a straight line, is incorporated into the forecasts; it's propagated through the regression model. And we are then able to create a prediction interval for our forecast, rather than a single best guess. The basic idea behind these prediction intervals is that the less noise there is in the underlying data, the more precise the forecast and the regression are going to be. So a lot of the activity in regression modeling involves trying to find a model that best explains the data, a model for which there's very little noise left around it. And if there's very little noise left around the model, then our prediction intervals are going to be narrow, which is what we like. So that's a very brief discussion of regression models.
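The link between noise and precision can be demonstrated directly: fit the same linear model to two hypothetical data sets that differ only in their noise level, and compare the widths of the resulting approximate 95% prediction intervals. All numbers here are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(0.15, 0.35, size=200)

def pi_width(noise_sd):
    """Fit y = b0 + b1*x to data with the given noise level and
    return the width of an approximate 95% prediction interval."""
    y = -260.0 + 3720.0 * x + rng.normal(0.0, noise_sd, size=x.size)
    b1, b0 = np.polyfit(x, y, deg=1)
    resid = y - (b0 + b1 * x)
    s = np.sqrt(np.sum(resid**2) / (x.size - 2))
    return 2 * 1.96 * s

narrow = pi_width(10.0)   # quiet data -> little residual noise
wide = pi_width(100.0)    # noisy data -> large residual noise
print(narrow, wide)
```

The quieter data set leaves far less residual noise around the fitted line, so its prediction interval is much narrower, which is exactly the property the lecture says we look for when choosing a model.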