In this video, I'm going to talk about AB testing, specifically in terms of the different types of tests involved. Which test you use is really a function of the type of data you've collected. So let's dive in.

In all of these tests, there's a null hypothesis and an alternative hypothesis. The null hypothesis states that there is no difference in the outcomes; in other words, group A and group B are the same. The alternative hypothesis states that there is a difference in the outcomes.

So let's look at the one-sample case. This is not really AB testing per se: you have one group, one set of data, and you want to know if the mean of this group is different from zero. That's usually what you're testing. The command in R is t.test, and we'll look at some more examples in the next video. The null hypothesis states that the group mean is not different from zero, and the alternative hypothesis says that the group mean is different from zero.

On this slide, I have the formula for the one-sample t-test. You take your sample mean, x bar, which you calculate from your data, and you subtract off the population mean you're testing against, mu. Since it's a one-sample test, we generally test whether the mean is different from zero, so you can just put zero in here for mu. Then you divide by s over the square root of n, where s is the estimate of the standard deviation of your population and n is your sample size.

The t-test assumptions are that x bar follows a normal distribution with mean mu (so the mean of x bar is the same as mu) and with standard deviation sigma over the square root of n. Also, the sample variance follows a chi-square distribution with n minus 1 degrees of freedom.

The next AB test that I want to discuss applies when you have equal variances and equal sample sizes in your two groups.
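The video demonstrates these tests with R's t.test in a later session; as an illustrative sketch of the same one-sample test, here is a Python equivalent using scipy.stats (the data values below are made up for the example):

```python
import math
from scipy import stats

# One-sample t-test: is the mean of this one group different from zero?
# Data values are invented purely for illustration.
data = [0.8, -0.2, 1.1, 0.4, 0.9, 0.3, 1.5, -0.1]

# By hand: t = (x_bar - mu) / (s / sqrt(n)), with mu = 0
n = len(data)
x_bar = sum(data) / n
s = math.sqrt(sum((x - x_bar) ** 2 for x in data) / (n - 1))  # sample std dev
t_by_hand = (x_bar - 0) / (s / math.sqrt(n))

# Same test via scipy; degrees of freedom are n - 1
t_stat, p_value = stats.ttest_1samp(data, popmean=0)
print(t_by_hand, t_stat, p_value)
```

The hand calculation and the library call give the same t statistic, which is a useful sanity check that the formula on the slide is what the software is doing.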
So the conditions: the sample sizes of the two groups are the same, which we denote mathematically by n sub A equals n sub B, so the number of elements in each group is the same. And the two populations have the same variance; if they're drawn from the same population, you might be able to make that assumption.

Here are the formulas for the equal-sample-sizes, equal-variances case. You're testing x bar sub A, the sample mean of group A, minus the sample mean of group B. Let's just look at the numerator for a second: if the two sample means are the same, it's going to be equal to zero. The null hypothesis says the difference is zero, meaning the two sample means are the same, and the alternative hypothesis says they are different, so if x bar sub A is bigger or smaller than x bar sub B, they're considered to be different.

That difference is scaled by a standard deviation value in the denominator. In the one-sample test you had s over the square root of n, but here you have a pooled standard deviation, because the variances are equal. The way that works is you take s sub A squared plus s sub B squared, the variance of group A plus the variance of group B, divide by 2 to get their average, and then take the square root of that. That's how you get the pooled standard deviation. So you take the difference between the two means and scale by this pooled standard deviation factor.

The degrees of freedom are now 2n minus 2: the total number of elements, or subjects, in both groups, minus the two sample means you've estimated. Then there's the R code, which I'll show you in a later video.

Here is another type of AB test, a t-test called Welch's t-test. Here we have relaxed the assumption that the sample sizes need to be the same. So the sample sizes of the two groups can be equal or unequal, and the two populations have normal distributions, but they may not have the same variance. So you might be drawing from two different populations.
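As a sketch of the pooled (equal sample sizes, equal variances) two-sample test described above, here is an illustrative Python equivalent of the R t.test call, with invented group data:

```python
import math
import statistics
from scipy import stats

# Equal sample sizes, equal variances: pooled two-sample t-test.
# Group values are made up for illustration.
group_a = [5.1, 4.8, 5.6, 5.0, 4.9, 5.3]
group_b = [4.6, 4.4, 5.0, 4.7, 4.3, 4.8]
n = len(group_a)  # n_A == n_B

# Pooled standard deviation: average the two sample variances, then sqrt
var_a = statistics.variance(group_a)  # s_A^2
var_b = statistics.variance(group_b)  # s_B^2
s_pooled = math.sqrt((var_a + var_b) / 2)

# t = (x_bar_A - x_bar_B) / (s_pooled * sqrt(2 / n)); df = 2n - 2
mean_diff = statistics.mean(group_a) - statistics.mean(group_b)
t_by_hand = mean_diff / (s_pooled * math.sqrt(2 / n))

# equal_var=True requests the pooled-variance version of the test
t_stat, p_value = stats.ttest_ind(group_a, group_b, equal_var=True)
print(t_by_hand, t_stat, p_value)
```

With equal group sizes, the general pooled-variance formula collapses to the simple average of the two variances, which is why the hand calculation above matches the library result.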
Again, the numerator is the difference between the two group means: if it's zero, they're the same; if it's not zero, they're different. But now you're scaling by a different pooled value: the variance of group A divided by n sub A, the number of elements in group A, plus the variance of group B divided by n sub B, the number of elements in group B; you add those together and take the square root. That's how you do Welch's t-test. The degrees of freedom is a complicated formula. I'm not going to go into it in too much detail, partly because R will calculate it for you, but if you're interested, this is how you would calculate it.

The last type of t-test that I want to discuss is called the paired-sample t-test. That's where the samples are dependent: either one sample has been tested twice, or two samples have been paired. Then the difference between the pairs is what gets tested. So what do I mean by a paired sample? The easiest way to think about it is, say you have a set of subjects and you evaluate them on some metric. Say a bunch of students: you give them a pretest, then you perform your experiment, so they get some treatment, and then you do a post-test. Now you're testing the same students before and after some treatment, and you want to know if the treatment worked. That's where the pairing comes in: student A before is paired with student A after in your sample data, student B before is paired with student B after, and so on.

So here you can see the formula, where x bar sub D is the mean of the paired differences, here's the standard deviation of the differences, and the degrees of freedom is n minus 1. Here again is another example: for each individual, a pre-test and a post-test of an advertising campaign.

I think that wraps up the major forms of AB testing statistics that are out there, and now I want to show you how to implement these tests in R.
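Welch's t-test, as described above, can be sketched the same way; this is an illustrative Python equivalent (R's t.test uses Welch by default), with made-up groups of unequal size:

```python
import math
import statistics
from scipy import stats

# Welch's t-test: sample sizes and variances may differ between groups.
# Group values are invented for illustration; note n_A != n_B.
group_a = [10.2, 9.8, 11.0, 10.5, 9.9]            # n_A = 5
group_b = [8.7, 9.1, 8.9, 9.4, 8.8, 9.0, 9.2]     # n_B = 7

# Denominator: sqrt(s_A^2 / n_A + s_B^2 / n_B)
se = math.sqrt(statistics.variance(group_a) / len(group_a)
               + statistics.variance(group_b) / len(group_b))
t_by_hand = (statistics.mean(group_a) - statistics.mean(group_b)) / se

# equal_var=False gives Welch's test; scipy also computes
# the complicated degrees-of-freedom formula for you
t_stat, p_value = stats.ttest_ind(group_a, group_b, equal_var=False)
print(t_by_hand, t_stat, p_value)
```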
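The pre-test/post-test pairing idea can also be sketched in Python; the scores below are invented, and the point of the sketch is that a paired t-test is just a one-sample t-test on the per-subject differences:

```python
from scipy import stats

# Paired-sample t-test: the same subjects measured before and after a treatment.
# Pre/post scores are made up for illustration.
pre  = [62, 70, 55, 68, 74, 60, 65]
post = [68, 75, 58, 72, 80, 63, 70]

# Per-subject differences: student A after minus student A before, etc.
diffs = [b - a for a, b in zip(pre, post)]

# A one-sample t-test on the differences (against zero) is the paired test;
# degrees of freedom are n - 1
t_diff, p_diff = stats.ttest_1samp(diffs, popmean=0)
t_pair, p_pair = stats.ttest_rel(post, pre)
print(t_diff, t_pair)
```

Both calls return the same statistic, which shows why the pairing matters: the test operates on differences within each subject, not on the two groups independently.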