In a two-mean equal-variance t-test, the test is between two sample means, x bar one versus x bar two, when sigma one and sigma two are unknown but are considered equal. The hypotheses are H naught: mu one equals mu two, and the alternative: mu one is not equal to mu two. We use these formulas, where s sub p is the pooled standard deviation, n one and n two are the two sample sizes, and s one and s two are the two sample standard deviations.

Let's take an example: compare product weight data from two machines with 95 percent confidence. We calculate the pooled standard deviation as 0.0091, the test statistic as 2.43, and the degrees of freedom as eight. Given our hypothesis H naught: mu one equals mu two, and the alternative: mu one is not equal to mu two, we look up the critical value in the table for t sub 0.025 with eight degrees of freedom and get 2.306 for a two-tailed test with alpha equal to 0.05. Since t is greater than t sub c, that is, 2.43 is greater than 2.306, the null hypothesis that mu one equals mu two is rejected. Therefore we can conclude that, based on equal variance, there is a statistical difference in the population means.

Somewhat similarly, in the two-mean unequal-variance t-test, the test is between two sample means, x bar one versus x bar two, when sigma one and sigma two are unknown and are not considered equal. The null hypothesis is mu one equals mu two, as before, and the alternative is mu one is not equal to mu two. We use these two formulas, where x bar one and x bar two are the two sample means, n one and n two are the two sample sizes, and s one and s two are the two sample standard deviations. Using our last example, suppose that the standard deviations were different. Instead of pooling s one and s two, we include them separately. Plugging into these equations yields t equal to 2.440 with 5.83 degrees of freedom, which we round down to five. So our hypotheses are, again, H naught: mu one equals mu two, and the alternative: mu one is not equal to mu two.
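Both two-sample variants can be sketched directly from summary statistics. This is a minimal illustration, not the lecture's actual machine data: the means, standard deviations, and sample sizes below are hypothetical values chosen only to exercise the formulas.

```python
import math

def pooled_t(xbar1, s1, n1, xbar2, s2, n2):
    """Equal-variance (pooled) two-sample t-test: returns (t, df, sp).

    sp = sqrt(((n1-1)*s1^2 + (n2-1)*s2^2) / (n1 + n2 - 2))
    t  = (xbar1 - xbar2) / (sp * sqrt(1/n1 + 1/n2))
    """
    sp = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    t = (xbar1 - xbar2) / (sp * math.sqrt(1 / n1 + 1 / n2))
    return t, n1 + n2 - 2, sp

def welch_t(xbar1, s1, n1, xbar2, s2, n2):
    """Unequal-variance (Welch) two-sample t-test: returns (t, df).

    df is the Welch-Satterthwaite approximation; as in the lecture,
    it is usually rounded down before the table lookup.
    """
    v1, v2 = s1**2 / n1, s2**2 / n2
    t = (xbar1 - xbar2) / math.sqrt(v1 + v2)
    df = (v1 + v2) ** 2 / (v1**2 / (n1 - 1) + v2**2 / (n2 - 1))
    return t, df

# Hypothetical summary statistics (illustrative only).
t_eq, df_eq, sp = pooled_t(10.014, 0.0090, 5, 10.000, 0.0092, 5)
t_uneq, df_uneq = welch_t(10.014, 0.0070, 5, 10.000, 0.0110, 5)
```

Each t would then be compared against the two-tailed critical value t sub 0.025 from the t-table at the (rounded-down) degrees of freedom, exactly as the worked examples do.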
The critical value t sub 0.025 with five degrees of freedom is 2.571 for a two-sided test with alpha equal to 0.05. Since t is less than t sub c, that is, 2.440 is less than 2.571, the null hypothesis that mu one equals mu two is not rejected. Therefore we can conclude that, based on unequal variance, there is no statistical difference in the population means.

A paired t-test will test the difference between two sample means. It is best used when you have before-and-after scenarios, like the measurement of a sample part before and after heat treatment, or before and after calibrating equipment. This test can show you any statistical difference in the before and after sample means. The procedure is as follows. One, set up the hypotheses as H naught: mu one equals mu two, and the alternative that they are not equal. Two, find the difference between each pair of data by subtracting one from the other. Three, calculate the mean d bar and the standard deviation s sub d of all the differences. Four, let n be the number of paired differences. Five, use these formulas: t equals d bar divided by s sub d over the square root of n, and the degrees of freedom is n minus one. Six, compare t to t sub c; reject H naught if t is in the reject region, otherwise do not reject H naught.

All right, let's take an example: compare product weight data from the two machines with 95 percent confidence. Using our previous example, we add a third column to the table for the differences, including the d bar and s sub d values. Plugging the data from the table into the t-statistic equation yields t equal to 3.261 with four degrees of freedom. For our hypothesis test, H naught is that mu one equals mu two and the alternative is that mu one is not equal to mu two. The critical value t sub 0.025 with four degrees of freedom equals 2.776 for a two-sided test with alpha equal to 0.05.
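The six-step paired procedure can be sketched as follows. The before-and-after weights here are hypothetical stand-ins, not the lecture's table.

```python
import math
import statistics

def paired_t(before, after):
    """Paired t-test: returns (t, df) for paired before/after data.

    t = d_bar / (s_d / sqrt(n)), df = n - 1,
    where d_bar and s_d are the mean and sample standard
    deviation of the pairwise differences.
    """
    diffs = [b - a for b, a in zip(before, after)]   # step 2: pairwise differences
    n = len(diffs)                                   # step 4: number of pairs
    d_bar = statistics.mean(diffs)                   # step 3: mean of differences
    s_d = statistics.stdev(diffs)                    # step 3: sample std. deviation
    t = d_bar / (s_d / math.sqrt(n))                 # step 5: test statistic
    return t, n - 1

# Hypothetical before/after weights (illustrative only).
before = [10.020, 10.010, 10.015, 10.005, 10.020]
after  = [10.000, 10.000, 10.000, 10.000, 10.000]
t, df = paired_t(before, after)
```

Step 6 is then the usual comparison of t against t sub 0.025 with df degrees of freedom from the t-table.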
Since we find that t is greater than t sub c, that is, 3.261 is greater than 2.776, the null hypothesis is rejected. Therefore we can conclude that there is a statistical difference in the population means.

The F-statistic is a ratio of two sample variances. The F-test uses the F-distribution under the null hypothesis, and it is most often used when comparing statistical models that have been fitted to a data set to find which best fits the population. It also applies to cases comparing the precision of two measuring devices or the relative stability of two manufacturing processes. The procedure for the F-test is as follows. One, set up the conditions: the populations are normal and the samples are independent. Two, set the hypotheses; you have three choices: (a) H naught states that sigma squared sub one equals sigma squared sub two, and the alternative is that they are not equal; (b) H naught is that sigma squared sub one is less than or equal to sigma squared sub two, and the alternative is that sigma squared sub one is greater than sigma squared sub two; or (c) H naught is that sigma squared sub one is greater than or equal to sigma squared sub two, and the alternative is that sigma squared sub one is less than sigma squared sub two. Three, find the critical values in the F-table. Four, calculate the test statistic using these formulas: F equals s sub one squared over s sub two squared, and there are two degrees of freedom, one for each set of data. The first, df one, called v sub one in the table, is n one minus one, and the second is n two minus one. Five, compare F to F sub c; we reject the null hypothesis if F is in the reject region, otherwise we do not reject H naught.

In this example, we want to compare two manufacturing processes before and after. We want to determine if improvements made last year made any statistical difference to the variances.
Assume a 95 percent confidence level. We plug the data into the formula and get F equal to nine with degrees of freedom of eight and six. Since we are testing for process improvement, the hypotheses that we use are H naught: sigma squared sub one is less than or equal to sigma squared sub two, and the alternative: sigma squared sub one is greater than sigma squared sub two. The critical value F sub 0.05 with degrees of freedom eight and six is 4.15; this is a right-tailed test with alpha equal to 0.05. Since F is greater than F sub c, that is, nine is greater than 4.15, the null hypothesis that sigma squared sub one is less than or equal to sigma squared sub two is rejected. We conclude that there is significant evidence to indicate reduced variation; thus, an improvement to the manufacturing process is statistically valid.
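The F-test calculation is short enough to sketch end to end. The two standard deviations below are hypothetical, chosen only so the ratio reproduces the lecture's F of nine; the sample sizes of nine and seven follow from the stated degrees of freedom of eight and six, and 4.15 is the lecture's quoted table value.

```python
def f_test(s1, n1, s2, n2):
    """F-statistic for comparing two sample variances.

    Returns (F, df1, df2) with F = s1^2 / s2^2 and
    df1 = n1 - 1, df2 = n2 - 1.
    """
    return s1**2 / s2**2, n1 - 1, n2 - 1

# Hypothetical before/after standard deviations (illustrative only);
# n1 = 9 and n2 = 7 match the lecture's degrees of freedom of 8 and 6.
F, df1, df2 = f_test(0.09, 9, 0.03, 7)

F_crit = 4.15          # F(0.05; 8, 6) from the F-table, as quoted above
reject = F > F_crit    # right-tailed test: reject H0 if F exceeds F_crit
```

Here the comparison would reject H naught, matching the conclusion that the variation was reduced.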