So we've just finished talking about optimal designs and situations where they might be useful. Here's an example of such a situation. This is the example in the book that I referred to as the pull-off force experiment. It involves an adhesive material that is used to bond a device to a substrate, and there are two variables of interest in this study: one is the amount of adhesive applied, and the other is the cure temperature.

The reason that constraints come about is that if we don't put enough adhesive on the components and the temperature is too low, we don't get any adhesion at all, and that chops off one corner of the experimental region. If we have too much adhesive and the temperature gets too high, the volatiles in the adhesive essentially boil off very quickly, and we don't get any adhesion either. So there's a corner chopped off of each end of this region, and a standard design really doesn't work.

Something an experimenter might be tempted to do is to force a standard design into this constrained region, and that's what I've done here. This is a central composite design with four runs at the corners of a square, four axial runs, and three center runs, so it's an 11-run central composite design. The plot shows the response surface of the standardized or relative prediction variance, in other words, the prediction variance up to the constant sigma squared, for this design. The contours you're seeing on this picture are contours of constant relative standard deviation of predicted y.

So this is a possible design that you could use, but I think I would be more inclined to use an optimal design. Here, just for convenience, is a D-optimal design. The D-optimal design has runs at the corners of the square and at the centers of the edges, and it has two runs at the center. If you look at the contours of constant prediction variance, they are lower here than they were for the standard design forced into the region. So this is clearly a better choice, although there is one thing about this D-optimal design that I found a little unattractive: the prediction variance in the center is a little bigger than it is in the immediate neighborhood around it, and you can see that bulge, that bump, in the response surface plot on the right.

So I did something rather arbitrary here. I exercised the boss option, which is something you can always do. I noticed that some of these runs are replicated, so I took one of those runs and moved it to the center, and another one and moved it to the center as well, and that gave me a modified D-optimal design with four runs at the center. I'm going to show you that design in a moment and talk about how it compares to the true D-optimal design. But first, here is a calculation that I think you'll find kind of interesting: the relative efficiency of the standard inscribed design with respect to the D-optimal design. I took the ratio of the determinants of X prime X inverse.
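Just to make that ratio concrete, here's a minimal sketch of the calculation in Python. It assumes a full second-order model in two coded factors (p = 6 parameters, which is where the sixth root comes from); the two designs below are made-up placeholders for illustration, not the actual designs from this example.

```python
import numpy as np

def model_matrix(design):
    """Full second-order model matrix in two coded factors:
    intercept, x1, x2, x1*x2, x1^2, x2^2 (p = 6 columns)."""
    return np.array([[1.0, x1, x2, x1 * x2, x1**2, x2**2] for x1, x2 in design])

def relative_d_efficiency(design_a, design_b):
    """D-efficiency of design_a relative to design_b:
    (|(Xb'Xb)^-1| / |(Xa'Xa)^-1|)^(1/p).  Values below 1 favor design_b."""
    Xa, Xb = model_matrix(design_a), model_matrix(design_b)
    p = Xa.shape[1]
    det_inv_a = np.linalg.det(np.linalg.inv(Xa.T @ Xa))
    det_inv_b = np.linalg.det(np.linalg.inv(Xb.T @ Xb))
    return (det_inv_b / det_inv_a) ** (1.0 / p)

# Illustrative designs in coded units: a shrunken ("inscribed") central
# composite design versus a design that pushes its runs to the boundary.
inscribed = [(-0.5, -0.5), (0.5, -0.5), (-0.5, 0.5), (0.5, 0.5),
             (-0.7, 0.0), (0.7, 0.0), (0.0, -0.7), (0.0, 0.7),
             (0.0, 0.0), (0.0, 0.0), (0.0, 0.0)]
boundary = [(-1, -1), (1, -1), (-1, 1), (1, 1),
            (-1, 0), (1, 0), (0, -1), (0, 1),
            (0, 0), (0, 0), (0, 0)]

print(relative_d_efficiency(inscribed, boundary))  # well below 1 here
```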
The ratio is the determinant of X prime X inverse for one design divided by the determinant of X prime X inverse for the other. I took the sixth root of that, and the ratio is 0.476. Now, what does that tell you? It says that the standard design would have to be replicated about twice to get the same precision of estimation of the parameters that we get with the optimal design. So there is a demonstrated reason for preferring the optimal design: it gives you estimates of the parameters that are essentially twice as precise as you would get with the standard design.

Now, here's the design that I created rather arbitrarily just by moving runs from those vertices into the center. So now I have four runs at the center, and look at the prediction variance surface. Notice how much flatter it is in the center; that bump that we had with the D-optimal design is gone. By the way, I probably could have used an I-optimal design here as well, and the I-optimal design would more than likely have put more runs in the center too. It probably would have kept many of these vertex runs but placed more runs in the center, and that would have flattened the bump that we saw in the D-optimal design. Well, how did this work? It turns out it actually works pretty well. The relative efficiency of this design compared to the D-optimal design is about 90 percent, so this design is about 90 percent as good as the original D-optimal design.

Now, here's another illustration of the different designs you can create with the D and the I criteria. This is just a made-up example of four factors on a cuboidal design space. The standard design would of course be a face-centered cube: it would have 24 factorial and axial runs (the 2^4 = 16 factorial points plus 8 axial points), and then you'd put in two or three center runs, so you'd have a total of 26 or 27 runs. The second-order model for four factors has only 15 parameters, so you could actually use a minimal design with only 15 runs. But suppose you want to use a 16-run design. There is no standard 16-run design, so let's think about using an optimal design.

The table that you see at the bottom of this slide is the JMP custom design where we've used the D criterion. On this slide, you're looking at the prediction variance profile and the fraction of design space plot, and immediately below that, we have the relative variances of all of the coefficients. You'll notice that the relative variances are fairly small, and they're all pretty similar.

Here's the I-optimal design, again 16 runs, created in JMP. Once again I've shown you the design matrix and the prediction variance profile. The prediction variance profile indicates that the variance over this region is smaller for the I-optimal design than it was for the D-optimal design, and the fraction of design space plot shows you the same thing: the prediction variance for this design is smaller over the design space than it is for the D-optimal design, and that's what you would expect. Here are the relative variances of the coefficients. They are a little larger than they are for the corresponding D-optimal design, but that's not unexpected, because the D criterion focuses on minimizing the variances of the coefficients, whereas the I criterion focuses on minimizing the average prediction variance over the design space.
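As a rough illustration of where those relative variances come from: the relative variance of a coefficient is just the corresponding diagonal element of X prime X inverse, that is, Var(beta-hat) divided by sigma squared. Here's a minimal sketch assuming a full second-order model; the 27-run face-centered cube used below is the standard design mentioned above, not one of the 16-run optimal designs.

```python
import numpy as np
from itertools import combinations, product

def second_order_row(x):
    """Expansion for a full second-order model in k factors:
    intercept, main effects, two-factor interactions, pure quadratics."""
    x = list(x)
    return np.array([1.0] + x
                    + [x[i] * x[j] for i, j in combinations(range(len(x)), 2)]
                    + [xi**2 for xi in x])

def relative_coefficient_variances(design):
    """Diagonal of (X'X)^-1: Var(beta_hat_i) / sigma^2 for each model term."""
    X = np.array([second_order_row(pt) for pt in design])
    return np.diag(np.linalg.inv(X.T @ X))

# Illustrative 27-run face-centered cube in four factors:
# 2^4 = 16 factorial runs, 8 axial runs, 3 center runs.
factorial = list(product([-1, 1], repeat=4))
axial = [tuple(a if i == j else 0 for j in range(4))
         for i in range(4) for a in (-1, 1)]
design = factorial + axial + [(0, 0, 0, 0)] * 3

print(np.round(relative_coefficient_variances(design), 3))
```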
I think this is also a good illustration of why the I criterion is what I would generally prefer for second-order models, or for situations where prediction and optimization are really the goal, because it gives you a design that has smaller prediction variance over most of the design space. Although, as you can see in this example, the prediction variance at the boundaries of the region for an I-optimal design is usually a little worse than it is for the D-optimal design, because you typically don't have as many runs at the boundary of the region.
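And just to sketch how a fraction-of-design-space style comparison can be computed (a rough Monte Carlo version, not necessarily how JMP does it): sample points uniformly over the design region, evaluate the relative prediction variance at each point, and sort the values. The average of those values is essentially what the I criterion tries to minimize, and the upper tail shows the boundary behavior. The design below is just the earlier two-factor central composite, purely for illustration.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)

def second_order_row(x):
    """Second-order model expansion: intercept, mains, two-factor interactions, squares."""
    x = list(x)
    return np.array([1.0] + x
                    + [x[i] * x[j] for i, j in combinations(range(len(x)), 2)]
                    + [xi**2 for xi in x])

def fds_values(design, n_samples=5000):
    """Sorted relative prediction variances f(x)'(X'X)^-1 f(x) at points sampled
    uniformly over the cuboidal region; plotted against the fraction 0..1 this
    gives a fraction-of-design-space (FDS) style curve."""
    X = np.array([second_order_row(pt) for pt in design])
    XtX_inv = np.linalg.inv(X.T @ X)
    k = len(design[0])
    pts = rng.uniform(-1.0, 1.0, size=(n_samples, k))
    v = np.array([second_order_row(p) @ XtX_inv @ second_order_row(p) for p in pts])
    return np.sort(v)

# Illustrative design: an 11-run face-centered central composite in two factors.
ccd = [(-1, -1), (1, -1), (-1, 1), (1, 1),
       (-1, 0), (1, 0), (0, -1), (0, 1),
       (0, 0), (0, 0), (0, 0)]
v = fds_values(ccd)
print("average:", round(float(v.mean()), 3), "worst sampled:", round(float(v.max()), 3))
```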