We're back again, and in this lecture we're going to talk about finding the alias relationships in fractional factorials. Earlier in the chapter, we showed how to use the complete defining relation for a fractional factorial to find the aliases. That works really well in simple designs like the fractions we most recently used; those are called regular fractions, and we'll talk about why that's the case later. But in more complex fractions and other designs, that method doesn't work very well, because some of those designs don't really have defining relations. So we need a more general approach. Fortunately, there is a general method that will find the aliases in any type of fractional design, or indeed in any design where you are concerned about how missing effects might bias your estimates.

Let's use the regression-model representation of the linear model, y = X_1 β_1 + ε. Here y is an n × 1 vector of the observations, and X_1 is an n × p_1 matrix containing the design matrix for your experiment, expanded to the form of the model that you want to fit. β_1 is a p_1 × 1 vector of the model parameters, and ε is the usual random error term. The least squares estimate of β_1 is β̂_1 = (X_1'X_1)^{-1} X_1'y.

Now let's suppose that the true model has some additional terms: y = X_1 β_1 + X_2 β_2 + ε, where X_2 is an n × p_2 matrix of additional variables that are not in your fitted model; its columns are also generated from the design matrix. β_2 is the vector of parameters associated with those variables. We can show that the expected value of β̂_1 is

E(β̂_1) = β_1 + (X_1'X_1)^{-1} X_1'X_2 β_2.

We call the matrix A = (X_1'X_1)^{-1} X_1'X_2 the alias matrix, so E(β̂_1) = β_1 + A β_2. The elements of A that operate on β_2 identify the alias relationships for the parameters in the vector β_1.

Now let's see how this works. Let's take a really simple, familiar example: the one-half fraction of the 2^3, that is, a 2^(3-1) with I = ABC, which some people might write as I = x_1 x_2 x_3. Suppose the model the experimenter wants to fit contains only the main effects:

y = β_0 + β_1 x_1 + β_2 x_2 + β_3 x_3 + ε.

In the notation we defined previously, the vector β_1 is made up of the regression coefficients of the model we plan to fit, the main-effects model: β_0, β_1, β_2, and β_3. X_1 is the 2^(3-1) design matrix expanded to model form, so we simply add a column of ones to represent the intercept. Now suppose the true model contains the main effects but also the two-factor interactions. This is what's really going on out there: even though you're going to fit only the main effects, this is reality. Then β_2 is made up of the two-factor interaction regression coefficients β_12, β_13, and β_23, and X_2 contains the columns representing those interactions, x_1 x_2, x_1 x_3, and x_2 x_3, all created from the design matrix.
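Since everything here is just matrix algebra, the alias matrix is easy to compute numerically. Here is a minimal sketch in Python with NumPy; it is not part of the lecture, and the helper name alias_matrix is ours, but the formula it implements is exactly the one above.

```python
import numpy as np

def alias_matrix(X1, X2):
    """Alias matrix A = (X1'X1)^{-1} X1'X2, so that E[beta1_hat] = beta1 + A @ beta2.

    X1: n x p1 model matrix for the fitted model.
    X2: n x p2 matrix of the omitted terms you fear may be active.
    """
    # Solve (X1'X1) A = X1'X2 rather than forming the inverse explicitly.
    return np.linalg.solve(X1.T @ X1, X1.T @ X2)
```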
Now let's form X_1'X_1. That's easy, because this design is orthogonal: X_1'X_1 = 4I_4, a diagonal matrix of order four with fours on the main diagonal. X_1'X_2 has a first row of zeros, and each of the succeeding three rows has two zeros, with a four marching down the opposite diagonal:

X_1'X_2 =
[ 0 0 0 ]
[ 0 0 4 ]
[ 0 4 0 ]
[ 4 0 0 ]

Then (X_1'X_1)^{-1} = (1/4)I_4, the same diagonal matrix but with one-fourth on the main diagonal. The expected value of β̂_1 is β_1 + A β_2, where A = (X_1'X_1)^{-1} X_1'X_2. Multiplying that out gives the alias matrix

A =
[ 0 0 0 ]
[ 0 0 1 ]
[ 0 1 0 ]
[ 1 0 0 ]

Multiplying the alias matrix by β_2 gives A β_2 = (0, β_23, β_13, β_12)', and adding the two vectors together, we see that β̂_0, the intercept, has no term aliased with it, but β̂_1 actually estimates β_1 + β_23, β̂_2 actually estimates β_2 + β_13, and β̂_3 actually estimates β_3 + β_12. In other words, every main effect is aliased with a two-factor interaction, and that is exactly what we know happens in the one-half fraction of the 2^3 using the principal fraction with I = ABC. So this is a very general method that can be used to generate the alias relationships in any design where the model you fit is lower order than, or different from, some model that you fear may be the truth.
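To check the hand calculation above, here is the same computation for the 2^(3-1) principal fraction done numerically. Again, this is a sketch in Python with NumPy rather than part of the lecture; the run order a, b, c, abc is just one way to list the principal fraction.

```python
import numpy as np

# Principal fraction of the 2^3 with I = ABC: runs a, b, c, abc.
x1 = np.array([ 1, -1, -1, 1])
x2 = np.array([-1,  1, -1, 1])
x3 = np.array([-1, -1,  1, 1])

# Fitted model: intercept plus the three main effects.
X1 = np.column_stack([np.ones(4), x1, x2, x3])
# Omitted terms: the two-factor interactions x1*x2, x1*x3, x2*x3.
X2 = np.column_stack([x1 * x2, x1 * x3, x2 * x3])

A = np.linalg.solve(X1.T @ X1, X1.T @ X2)
print(A)
# [[0. 0. 0.]
#  [0. 0. 1.]
#  [0. 1. 0.]
#  [1. 0. 0.]]
```

Reading across the rows of A: the intercept picks up nothing, while β̂_1, β̂_2, and β̂_3 pick up β_23, β_13, and β_12 respectively, matching the aliases A = BC, B = AC, and C = AB from the defining relation.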