In this last section, I want to say something about the alpha value and errors in hypothesis testing. The alpha value is the 0.05 that we see most often: a level of significance, some cutoff value. Sometimes we see it as 0.01. I'm going to spoil the fun: where does this 0.05 come from? Is it really magical? If we find a value less than 0.05, is it absolute proof that something exists, that the drug is better than placebo? Can I now walk out and absolutely believe it? No, I can't. There is no magic behind 0.05; you know now that it's just the area under the curve, the curve of a sampling distribution. It would be rare, through taking a sample and doing that analysis, to have found the results that we found, but that's all it says. The difference we found was simply at a certain level of rarity. It really is just a thumb-suck. If we look at the field of physics, they go many, many more standard errors away from the mean before they would accept something, for instance at the Large Hadron Collider. In medicine we choose 0.05 or 0.01; if we chose something like 0.0000001, we would have nothing to publish, no good findings to look at. So 0.05 is an acceptable level of risk.

Now, we do state that it's not actually 100% true; we do need to take the pre-test odds into consideration, and we'll say something about that a little later. But if we leave the pre-test odds of finding a difference outside the formal argument, it really is an acceptable risk, this 5% risk that we take. That is what the alpha value really tells us.

Now, there are some errors that we could make. First of all, we could falsely reject the null hypothesis. Drug A was really not better than placebo, but in the analysis we did, in the sample that we selected, we found a p-value of less than 0.05. We reject the null hypothesis when in reality we never should have, and that is a Type I error.
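That 5% risk can be seen directly in a small simulation. The sketch below (plain Python, my own illustration, not part of the lecture) repeatedly draws a "drug" group and a "placebo" group from the same distribution, so the null hypothesis is true by construction, and counts how often an equal-variance two-sample t test still crosses the 0.05 line. With 50 per group the t distribution is close to normal, so I use 1.96 as an approximate two-sided cutoff.

```python
import random
import math

random.seed(1)

def t_statistic(a, b):
    """Two-sample t statistic with a pooled (equal-variance) standard error."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    sp = math.sqrt(((na - 1) * va + (nb - 1) * vb) / (na + nb - 2))
    return (ma - mb) / (sp * math.sqrt(1 / na + 1 / nb))

trials = 2000
n = 50          # participants per group
rejections = 0  # times we wrongly declare the drug better (or worse)

for _ in range(trials):
    # Both groups come from the SAME distribution: the drug does nothing.
    drug = [random.gauss(0, 1) for _ in range(n)]
    placebo = [random.gauss(0, 1) for _ in range(n)]
    # |t| > 1.96 approximates p < 0.05 two-sided at this sample size.
    if abs(t_statistic(drug, placebo)) > 1.96:
        rejections += 1

type_i_rate = rejections / trials
print(round(type_i_rate, 3))  # hovers around 0.05, i.e. around alpha
```

The false-rejection rate comes out near 0.05 no matter how many trials you run: the alpha value is exactly the Type I error rate we have agreed to tolerate.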
With a Type II error, the reverse happens: drug A really was better than placebo, but the analysis we did showed a p-value of more than 0.05. So we do not reject, we fail to reject, that null hypothesis, when in actual fact we should have. We don't know this; that is the nature of our analysis. Those, then, are the Type I and Type II errors, and there is a type of table that you can simply memorize to remember them:

                        H0 valid         H0 invalid
  Fail to reject H0     correct          Type II error
  Reject H0             Type I error     correct

If you look at the top row: if our null hypothesis is indeed valid and we fail to reject it, then that's correct; that's what we want. Conversely, if the null hypothesis in reality (we don't know this; that's why we do research), if we could know that it really was invalid and we do reject it, that's also correct. On the other side of the coin, though, if the null hypothesis is valid and we reject it, that's our Type I error, that's the 0.05; and if it's invalid and we fail to reject it, we make a Type II error.
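The Type II error can be simulated the same way. In this sketch (again my own illustration, with an assumed true effect size of 0.5 standard deviations and deliberately small groups) the drug really does work, yet the test frequently fails to reach p < 0.05. The cutoff 2.05 is the approximate two-sided 5% critical value for 28 degrees of freedom.

```python
import random
import math

random.seed(2)

def t_statistic(a, b):
    """Two-sample t statistic with a pooled (equal-variance) standard error."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    sp = math.sqrt(((na - 1) * va + (nb - 1) * vb) / (na + nb - 2))
    return (ma - mb) / (sp * math.sqrt(1 / na + 1 / nb))

trials = 2000
n = 15      # small groups -> low statistical power
misses = 0  # Type II errors: a real effect, but p > 0.05

for _ in range(trials):
    # The drug group is genuinely shifted: the null hypothesis is false.
    drug = [random.gauss(0.5, 1) for _ in range(n)]
    placebo = [random.gauss(0.0, 1) for _ in range(n)]
    if abs(t_statistic(drug, placebo)) <= 2.05:
        misses += 1

type_ii_rate = misses / trials
print(round(type_ii_rate, 2))  # well above half the trials miss the real effect
```

Notice that, unlike the Type I rate, the Type II rate is not pinned to alpha: it depends on the true effect size and the sample size, which is why underpowered studies so often fail to reject a null hypothesis that is in fact false.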