So in this lecture, we're going to demonstrate the method of steepest ascent. The text material for this is Section 11.2 in the book. Steepest ascent is a method that is very widely used in the early stages of response surface work for moving sequentially from an initial guess of where we should be running the process toward the region of the optimum. In almost all cases, steepest ascent is based on a fitted first-order model. Those of you who have a background in mathematical optimization will recognize this as a gradient-based method. And what do we know about gradient methods? They work really well when you're a long way from the optimum, but as you get closer and closer to the optimum, they tend to slow down. So we would expect steepest ascent to be very effective in moving us away from an initial guess of the optimal conditions that maybe is not very good. Here's a pictorial representation of how steepest ascent works. We have a two-variable system, we've fit a first-order model, and these are the contour lines of constant response from that model. The method of steepest ascent moves in a direction that is normal to those contour lines. That is the steepest possible path out of the region where we ran our original experiment. How do we go about getting this approximating model? Usually we run an experiment that enables us to fit the first-order model. This is the example from the book, where we have two variables, time and temperature, and we've run a 2² factorial with five center runs. The data are shown in terms of the coded variables, which is what we're going to use for the analysis, along with the response. So the first thing we need to do is fit the model.
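Before fitting, each natural variable is converted to a coded variable on the interval [-1, +1]. A minimal sketch of that coding, assuming the ranges from the book's example (reaction time 30 to 40 minutes, temperature 150 to 160 degrees):

```python
# Convert a natural variable to coded units: x = (value - center) / half_range.
# The ranges below (time 30-40 min, temperature 150-160 deg) are assumed
# from the book's example, not stated explicitly in this lecture.
def code(value, low, high):
    center = (low + high) / 2.0
    half_range = (high - low) / 2.0
    return (value - center) / half_range

print(code(40, 30, 40))     # 1.0  (high level of time)
print(code(30, 30, 40))     # -1.0 (low level of time)
print(code(155, 150, 160))  # 0.0  (center point of temperature)
```

The same function run in reverse (value = center + x * half_range) is what we'll use later to convert steps along the path back to natural units.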
And the first-order model fit to these data by least squares is shown here: y-hat = 40.44 + 0.775 x1 + 0.325 x2. If you have the idea of a gradient method, this equation tells you that we should move 0.775 units in the x1 direction for every 0.325 units in the x2 direction. That is the gradient direction. But before we actually start moving along that path, we really ought to check the adequacy of the model. We ought to get an estimate of error, make sure that there's no curvature in the system, and check that there are no interaction terms that need to be added to the model. We can use the center points to get an estimate of error. We have five replicate runs, so we get an estimate of error with four degrees of freedom, and sigma-hat squared turns out to be 0.043. The first-order model assumes that there's no interaction, that the effects are purely additive. So we can estimate the interaction term very easily and see whether it is statistically significant. It turns out that the interaction term is not significant; in fact, the F statistic for interaction is very much less than 1. So there's no indication that we have any interaction. Finally, we can check for curvature. Remember, the curvature test is done by comparing the average of the runs at the factorial points of the experiment to the average of the runs at the center. The difference in those averages, shown here, turns out to be very small, -0.035. That difference, of course, estimates the sum of the two pure quadratic coefficients. We can then calculate a single-degree-of-freedom sum of squares for lack of fit, the quantity you see here labeled the sum of squares for pure quadratic. The F statistic for testing for pure quadratic curvature is that sum of squares divided by sigma-hat squared.
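As a concrete check, here is a short sketch that fits the first-order model and runs all three adequacy checks. The coded design is the 2² factorial plus five center runs described above; the response values are assumed from the book's example table:

```python
import numpy as np

# 2^2 factorial in coded units (x1 = time, x2 = temperature) plus five
# center runs; yield data assumed from the book's example.
x1 = np.array([-1, -1,  1,  1,  0, 0, 0, 0, 0])
x2 = np.array([-1,  1, -1,  1,  0, 0, 0, 0, 0])
y  = np.array([39.3, 40.0, 40.9, 41.5, 40.3, 40.5, 40.7, 40.2, 40.6])

# Least-squares fit of the first-order model y = b0 + b1*x1 + b2*x2.
X = np.column_stack([np.ones_like(x1), x1, x2])
b0, b1, b2 = np.linalg.lstsq(X, y, rcond=None)[0]
print(f"yhat = {b0:.2f} + {b1:.3f}*x1 + {b2:.3f}*x2")

# Pure-error estimate of sigma^2 from the five replicated center runs (4 df).
yc = y[x1 == 0]
sigma2 = yc.var(ddof=1)
print(f"sigma2_hat = {sigma2:.4f}")            # 0.0430

# Interaction check: single-df contrast over the four factorial runs.
b12 = (x1 * x2 * y)[:4].sum() / 4
F_int = (4 * b12**2) / sigma2                  # SS_interaction / sigma2_hat
print(f"F(interaction) = {F_int:.3f}")         # well below 1

# Curvature check: factorial average minus center average
# estimates the sum of the two pure quadratic coefficients.
yf = y[x1 != 0]
diff = yf.mean() - yc.mean()
ss_pq = len(yf) * len(yc) * diff**2 / (len(yf) + len(yc))
F_curv = ss_pq / sigma2
print(f"diff = {diff:.3f}, F(curvature) = {F_curv:.3f}")
```

Both F ratios come out far below 1, which is the numerical version of the conclusion in the lecture: no interaction, no curvature, so the first-order model is adequate for steepest ascent.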
And that F ratio is also considerably less than 1. So again, no indication that quadratic terms are required. The analysis of variance for this model is shown in Table 11.2. Here is the ANOVA: the model is significant, the F statistic is about 48, and the p-value is 0.0002. You can see that there is no significant interaction term and no significant pure quadratic term; those p-values are quite large. So there's no indication here that we have an inadequate model. Now what we have to do is move away from some point in the design space along the path of steepest ascent. Usually we anchor the path of steepest ascent at the design center, the point (0, 0). As I noted previously, we would move 0.775 units in the x1 direction for every 0.325 units in the x2 direction. So the points on the path of steepest ascent pass through (0, 0) and have a slope of 0.325/0.775. The way we typically execute steepest ascent is to pick a step size in one of the variables. Here the engineering team decided to use 5 minutes of reaction time as the basic step size, and 5 minutes of reaction time in the natural variable corresponds to a step in the coded variable of delta x1 = 1 unit. Therefore, the steps along the path of steepest ascent would be delta x1 = 1 coded unit and delta x2 = 0.325/0.775, or 0.42 coded units. It's very customary to pick the step size in what is perhaps the most important variable, the one that has the largest model coefficient in coded units, and to make that step size either something that is very convenient to run, or exactly equal to 1 coded unit. That puts the first point on the path of steepest ascent right on the boundary of your experimental region, and this, in my view, has always been a good way to do it.
Because that way the first point on the path of steepest ascent is kind of like a confirmation run. You would expect to get a response value there that is consistent with what you saw in the data from your original factorial experiment. So here is the steepest ascent experiment for this example. Notice that we start at (0, 0), which corresponds to 35 minutes of reaction time and 155 degrees. One coded unit of delta x1 is equivalent to 5 minutes of reaction time, and 0.42 coded units in x2 is equivalent to 2 degrees. So we generate all the points on the path of steepest ascent by successively adding those increments to the starting point, and then at least at some of those points, we actually run experiments to see what the response is. The first point would be 40 minutes and 157 degrees. We run that experiment, and the value we observe there is 41. Now, if you go back and look at the original experimental data, you'll notice that all of the responses were roughly in the 40.5 to 40.7 range. So getting a value of 41 here is very encouraging; it really confirms what we saw in the original experiment. Now we take another step, to 45 minutes and 159 degrees, run the experiment again, and get 42.9, another improvement. We keep making these steps along the path of steepest ascent as long as we continue to see improvement. Notice that we get steady improvement as we go to the origin plus 3 delta, plus 5 delta, plus 7 delta, until we get out to about the base point plus 10 delta. There we're at 80.3. This is a big improvement; we've essentially doubled the response. But as soon as we take another step, the response goes down. It drops to 76.2, a fairly sizable decrease. Well, maybe that's just noise. Maybe if we take another step, things will continue to improve. So we take another step, and again we get a degradation in the response.
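The bookkeeping for the path is only a few lines of code. A minimal sketch, assuming the increments stated above (1 coded unit equals 5 minutes for time and 5 degrees for temperature, with the 0.42-coded-unit temperature step rounded to 2 degrees, as the team did):

```python
# Points along the path of steepest ascent, in the natural variables.
b1, b2 = 0.775, 0.325           # fitted first-order coefficients
dx1 = 1.0                       # chosen step: 1 coded unit in x1
dx2 = (b2 / b1) * dx1           # ~0.42 coded units in x2

d_time = dx1 * 5.0              # 5 minutes per step
d_temp = round(dx2 * 5.0)       # ~2.1 degrees, rounded to 2 by the team

path = [(35.0 + k * d_time, 155.0 + k * d_temp) for k in range(13)]
print(path[1])    # (40.0, 157.0): the confirmation run, observed y = 41
print(path[10])   # (85.0, 175.0): the best point found, observed y = 80.3
```

Running experiments only at selected points on this list, as the lecture describes, keeps the cost of the search down while still tracking the gradient direction.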
There's always some question as to when you should stop doing steepest ascent. I think that stopping as soon as you get one value that moves in the opposite direction might be a little too conservative. But if you get two steps in a row where the response moves in the opposite direction from what you're looking for, that's probably a good indication that you should stop. And somewhere around the best point that you found is where you probably ought to think about running another experiment. This is a graphical display of the path of steepest ascent. What's happened is we've moved along this path, and you notice that we have a steady increase up until we get to the area we just saw on the previous chart. So what should we do now? I think a reasonable thing to do is to take the best point that we've found so far and run another experiment to see if we can fit a first-order model. That's what's done here: the experimenters have run another first-order experiment. The region of experimentation for time is 80 to 90 minutes, and for temperature it's 170 to 180 degrees. That produces the design you see here, another 2² factorial with five center runs, and there are the coded variables. I've also shown you the relationships between the coded variables and the natural variables. Let's fit the model. The fitted model turns out to be the equation you see at the bottom of the slide. Both coefficients, for x1 and x2, seem to be fairly small, but they are statistically significant. So let's take a look at the overall model ANOVA. This slide shows the analysis of variance for this second first-order model. What I really pay attention to here are the F statistics for interaction and for pure quadratic, the tests for adequacy of the first-order model.
And the p-value for the interaction term is just less than 0.1, so interaction would be significant at the 0.10 level. But look at that pure quadratic curvature p-value, 0.0001: there's a strong indication of pure quadratic curvature here. So now what would we do? We're at a place where there's curvature in the system, and there's no indication that our first-order model is going to give us any more immediate improvement because of that curvature. So it's probably time to abandon steepest ascent, fit a second-order model, and do the complete response surface optimization analysis. Basically, remember that the steps along your path of steepest ascent are proportional to the magnitudes of the model regression coefficients; it's a gradient procedure. The direction you move in depends on the signs of the regression coefficients and on what your objective is. Here is a simple step-by-step procedure for executing steepest ascent. Step one: choose a step size in one of your process variables, say delta xj. Typically we select the variable that we know the most about, or the one that has the largest regression coefficient in absolute value, and we make that step size something that is easy to change. Then you can find the step size in each of the other variables by using the equation you see here: take the regression coefficient for the other variable, divide it by the regression coefficient for the variable whose step size you've chosen, and multiply that ratio by delta xj. I think it's best to make the step in the natural variable the size that gives you delta xj equal to 1; making delta xj equal to 1 is almost always a good choice. Then, to actually execute the experiment, all you have to do is convert the delta x's from coded variables back to the natural variables, and you can run the experiment.
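That whole procedure condenses to a few lines. A generic sketch (the function name and interface are my own, not from the book): given the coded model coefficients and the natural-units-per-coded-unit scale for each variable, it fixes the step of the variable with the largest coefficient at 1 coded unit and scales the rest by the coefficient ratios.

```python
# Generic steepest-ascent step sizes. Assumes a maximization problem with
# a positive coefficient on the anchoring variable j, as in the example.
def steepest_ascent_steps(coefs, scales):
    # j = variable with the largest |regression coefficient|
    j = max(range(len(coefs)), key=lambda i: abs(coefs[i]))
    # delta x_i = (b_i / b_j) * delta x_j, with delta x_j = 1 coded unit
    coded = [b / coefs[j] for b in coefs]
    # convert each coded step back to natural units
    natural = [dx * s for dx, s in zip(coded, scales)]
    return coded, natural

# Example: coefficients from the first fitted model; 1 coded unit is
# 5 minutes of time and 5 degrees of temperature.
coded, natural = steepest_ascent_steps([0.775, 0.325], [5.0, 5.0])
print(coded)     # [1.0, ~0.419]
print(natural)   # [5.0, ~2.1] -> the team rounded the temperature step to 2
```

The sign handling here is deliberately simple; as the lecture notes, for a minimization problem, or when coefficients are negative, you would move in the direction dictated by the signs and your objective.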