Okay, so now we've reached the point in our exploration of response surface methods where we've found a situation where we've got curvature, and we need to fit a second-order model and proceed through the optimization analysis. This is usually done with the second-order model that you see here. This is the general form of the second-order model for k variables. You notice that it has the linear terms, or the main effects of the variables, it has the pure quadratic terms, and then it has all of the two-way interactions. Now, these models are very, very widely used in practice for doing the optimization phase of RSM. Part of the logic behind that, of course, is the Taylor series analogy. Remember the notion of a Taylor series, which you studied in calculus; a Taylor series can be used to approximate complex functions. Well, a first-order Taylor series is analogous to a first-order regression model, and a second-order Taylor series is analogous to the second-order model that you see here. And there are very, very few situations that you ever encounter where any sort of Taylor series approximation beyond second order is ever really needed. So our expectation is that a second-order model should work pretty well. Fitting the model, by the way, is pretty easy; it's still a linear regression model. And it turns out there are some very nice, efficient experimental designs available that we can use to fit this model. Optimization is easy; the numerical optimization for this model is quite straightforward. But there's also a great deal of empirical evidence that says that these models work really, really well. One of the references in the textbook is to a couple of review papers on response surface methods that have been published over the last 20 to 25 years, and these review papers talk about applications in all sorts of different fields.
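To make the "it's still a linear regression model" point concrete, here is a minimal sketch of fitting a second-order model in k = 2 coded variables by ordinary least squares. The design points and responses below are hypothetical illustration values, not data from the lecture:

```python
import numpy as np

# Hypothetical central-composite-style runs in coded variables (not lecture data).
x1 = np.array([-1.0, 1.0, -1.0, 1.0, 0.0, 0.0, -1.4, 1.4, 0.0, 0.0])
x2 = np.array([-1.0, -1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0, -1.4, 1.4])
y = np.array([76.5, 77.0, 78.0, 79.5, 79.9, 80.3, 75.6, 78.4, 77.3, 78.5])

# Model matrix columns: intercept, linear terms, pure quadratics, two-way interaction.
# The model is linear in the coefficients, so ordinary least squares applies directly.
X = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta_hat)  # [b0, b1, b2, b11, b22, b12]
```

The quadratic and interaction columns are just derived regressors, which is why the "second-order" model is still fit with the same least-squares machinery as a first-order one.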
And I've been an author of one of those review papers, and going through literally hundreds of applications papers from many, many different fields of engineering and science and business, you just don't find people using response surface optimization with models that are higher than second order. So what does the response surface for a second-order model look like? Well, I'm going to show you some of the, let's call them the standard shapes. This is a response surface plot, and a corresponding contour plot, of a response surface that has a maximum. And so you notice that there's this curved shape that has a point somewhere out in here that is the apparent maximum of that function. You can also find a minimum. This is the kind of bowl-shaped appearance, and somewhere in here there is a point that gives us a minimum response. And then we often run into systems that exhibit neither maximum nor minimum behavior. They're saddle point systems, or sometimes we call them minimax systems. And so in a saddle point system, if you move in one direction you appear to be going up a hill; in this case, if we go this way, we're going up a hill, passing over a maximum, and going down again. Whereas if we're going in this direction, we're going down into a valley and then going back up again. So in this case, that saddle point is neither a max nor a min; it's simply an inflection point on the surface. Part of a response surface optimization study is characterizing the surface: finding out where the stationary point is and then determining what kind of surface we have. Is it a max? Is it a min? Is it a saddle point? What have we found? And there are a couple of ways to do that. When we don't have very many process variables, graphical methods are extremely useful. But there's also a more formal mathematical analysis called the canonical analysis that can be quite useful in locating the optimum.
And also helping you determine the sensitivity of the response near the optimum value. That is, as you move away from that optimum, which directions exhibit the greatest sensitivity in that area. Canonical analysis is very useful for that. How do you find the stationary point? Well, the stationary point is straightforward: simply take the partial derivatives of your second-order model with respect to each one of the x's, set those partial derivatives to 0, and that gives you a set of k equations in k unknowns. Solve those equations, and that will give you the stationary point. Now, the stationary point could be a point of maximum response, it could be a point of minimum response, or it could be that saddle point that we've talked about, and we need to know exactly which it is. Algebraically, this is the equation that represents the solution to those derivatives that gives us the stationary point: x sub s is the vector of coordinates of the stationary point, and it's always equal to minus 1/2 times the inverse of the B matrix, capital B, times the vector little b. Now, vector little b is just a k by 1 vector of your first-order regression coefficients, beta hat 1, beta hat 2, on down to beta hat sub k. And the B matrix is a symmetric k by k matrix that has the pure quadratic model coefficients on the main diagonal, and all of the off-diagonals are the interaction coefficients divided by 2. The first row contains all of the interaction regression coefficients involving x1, the next row would be all of the regression coefficients representing interaction terms involving x2, and this is a symmetric matrix, so it looks exactly the same below the main diagonal as it does above the main diagonal. So it's easy to find the stationary point; all we have to do is invert that B matrix.
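The stationary-point calculation above is a one-liner in practice. Here is a minimal sketch for k = 2, using hypothetical coefficient estimates (the b vector and B matrix are illustration values, not from the lecture):

```python
import numpy as np

# Hypothetical fitted coefficients for a second-order model in 2 variables.
b = np.array([0.995, 0.515])            # first-order estimates: beta_hat_1, beta_hat_2
B = np.array([[-1.376, 0.250 / 2],     # pure quadratics on the diagonal,
              [0.250 / 2, -1.001]])    # interaction coefficient / 2 off-diagonal

# Stationary point: x_s = -(1/2) B^{-1} b.
# Using solve() rather than explicitly inverting B is numerically preferable.
x_s = -0.5 * np.linalg.solve(B, b)
print(x_s)
```

Because the gradient of the model is b + 2Bx, setting it to zero and solving recovers exactly the formula on the slide.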
The canonical analysis requires a little bit more arithmetic, and it's used for doing a couple of things: helping us identify the form of the stationary point, and doing sensitivity analysis. Based on the analysis of this canonical model, we can tell something about what the stationary point represents. Basically, this analysis involves transforming your original model, in terms of the coded variables, into a canonical form. The canonical form of the model is shown at the bottom of this slide: y hat is equal to y hat at the stationary point, plus a constant lambda 1 times w1 squared, plus a constant lambda 2 times w2 squared, and so on all the way out to another constant lambda sub k times w sub k squared. Now, what are these w's and what are these lambdas? Well, the w's are the so-called canonical variables, and the way those canonical variables are found is illustrated on this slide. What we basically do is take our variables x, those are the design factors, and move to a new coordinate system that is centered at the stationary point. So this is our stationary point, and x10 and x20 are the coordinates of the stationary point. Then what the canonical analysis does is rotate those axes so that they are now parallel to the principal axes of the contour system. Those are these new variables, w1 and w2. And the lambdas, you can think of them as just regression coefficients that multiply those canonical variables. I'll show you where the lambdas come from in just a moment. But you can see that this really greatly aids your ability to interpret the model, because y hat is equal to y hat sub s plus lambda 1 times w1 squared plus lambda 2 times w2 squared. Any movement along either the w1 or w2 axis is predicted by this equation, and so we should be able to tell something about what the response surface is doing by looking at the signs of these lambdas. So what are the lambdas?
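The translate-and-rotate transformation described above can be sketched numerically: the rotation is given by the eigenvectors of B, so w = P'(x - x_s). The coefficient values below are hypothetical illustration values:

```python
import numpy as np

# Hypothetical fitted coefficients (same form as before, not lecture data).
B = np.array([[-1.376, 0.125],
              [0.125, -1.001]])
b = np.array([0.995, 0.515])

x_s = -0.5 * np.linalg.solve(B, b)   # stationary point
lam, P = np.linalg.eigh(B)           # eigenvalues lam and orthonormal eigenvectors P

x = np.array([0.3, -0.2])            # an arbitrary point in the coded variables
w = P.T @ (x - x_s)                  # the same point in canonical coordinates

# In canonical form, y_hat(x) - y_hat(x_s) = sum_i lambda_i * w_i**2,
# which must match the quadratic form (x - x_s)' B (x - x_s):
delta_canonical = lam @ w**2
delta_model = (x - x_s) @ B @ (x - x_s)
print(np.isclose(delta_canonical, delta_model))  # True
```

The translation removes the linear terms, and the rotation removes the cross-product term, which is exactly why only pure-squared terms survive in the canonical form.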
Well, the lambdas in that canonical form turn out to be the eigenvalues of the B matrix (really the B hat matrix, since it's built from the estimated coefficients). And so it's easy, once we have those computed, to see exactly what we've got. If all of your lambdas are positive, then you've found a minimum. The reason for that is pretty easy to see: if these are positive and you move in any direction away from w1, w2 equal to 0, the response is going to go up; it's going to increase. So lambdas all positive indicate that you have a minimum. Lambdas all negative, on the other hand, indicate that you've found a maximum. And again, if these lambdas are negative, then if you move the w's away from 0 at all, the predicted response is going to go down. And if the lambdas are mixed in sign, some positive, some negative, then you've got a saddle point; it's a saddle system. The eigenvalues are used to determine not only the nature of the surface, min, max, or saddle point, but they also tell you something about the sensitivity of the response with respect to the design factors. The response surface is steepest in the canonical direction that has the largest absolute eigenvalue. So look at the magnitudes of the eigenvalues: the largest eigenvalue in absolute value indicates the canonical direction in which the response surface is steepest, that is, the direction of greatest sensitivity.
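The sign rules above translate directly into a few lines of code. This sketch classifies the stationary point and picks out the steepest canonical direction, again using a hypothetical B matrix:

```python
import numpy as np

# Hypothetical symmetric B matrix for a k = 2 second-order model.
B = np.array([[-1.376, 0.125],
              [0.125, -1.001]])

lam = np.linalg.eigvalsh(B)  # eigenvalues of the symmetric matrix = the lambdas

# Classify the stationary point by the signs of the eigenvalues.
if np.all(lam > 0):
    kind = "minimum"
elif np.all(lam < 0):
    kind = "maximum"
else:
    kind = "saddle point"

# The canonical direction with the largest |lambda| is where the surface is steepest.
steepest = int(np.argmax(np.abs(lam)))
print(lam, kind, "steepest along w%d" % (steepest + 1))
```

With both diagonal entries negative and only a small interaction term, both eigenvalues come out negative here, so this hypothetical surface has a maximum.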