There's a related idea, in fact a numerically equivalent one, called the attained significance level. Recall our test statistic was 2 for testing H0: mu = 30 versus Ha: mu ≠ 30 with n = 100. Picture the distribution of the normalized test statistic under the null: our statistic works out to 2, and for the one-sided version of the test at alpha = 5%, the critical value is 1.645. Our 2 is bigger than 1.645, so we reject the null hypothesis. What if instead we had set alpha to 0.1%? That critical value would be much further out in the tail, our statistic of 2 would fall below it, and we would fail to reject.

So what alpha level exactly achieves the balance point, where any alpha larger than it leads to rejection and any alpha smaller than it leads to failing to reject? I think you can see that if the critical value sits below the test statistic, we reject, and if it sits above the test statistic, we fail to reject. So if you slide the critical value until the line exactly overlaps your test statistic, you are exactly at that point: you fail to reject for any smaller alpha, and you reject for any larger alpha. That alpha level is called the attained significance level, for fairly obvious reasons. It's the smallest significance level at which you could still reject.

You can see this is equivalent to the P-value: when you move the critical value until it lands exactly on the test statistic, that perfect alpha is the probability, under the null, of being larger than the test statistic, and that's just the P-value. So the attained significance level and the P-value are philosophically different, but there's no practical difference; they're the same number.

One reason I like P-values is that if you report a P-value, the reader or interpreter of your test can perform the hypothesis test at whatever alpha they like: reject if the P-value is smaller than their alpha, fail to reject if it's larger. The reader can calibrate the test however they'd like. If you just tell them "I rejected" or "I failed to reject," you haven't given them as much information.

For a two-sided hypothesis test, you double the smaller of the two one-sided P-values. If you picture the Z distribution with your test statistic marked on it, there's a P-value for the greater-than alternative (the area above the statistic) and one for the less-than alternative (the area below it). Take the smaller of the two, double it, and that's your two-sided P-value.

Now let's calculate a P-value not just for a normal example, which we've already done, but for a binomial example. Your friend has 8 children, 7 of whom are girls, and none are twins. If each gender has an independent 50% probability at each birth, what's the probability of getting 7 or more girls out of 8 births? That's the P-value: we're testing H0: p = 0.5 versus Ha: p > 0.5, more extreme evidence for the alternative would be more of the children being girls, and so the P-value is the probability of getting a count at least as large as the observed 7.
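Under the null, the number of girls is binomial with size 8 and probability 0.5, so this tail probability is easy to compute. Here's a minimal sketch in R, once by summing the binomial mass function directly and once with pbinom:

```r
# P-value: probability of 7 or more girls in 8 births under H0: p = 0.5
sum(dbinom(7:8, size = 8, prob = 0.5))
# 0.03515625

# Same thing in one call: with lower.tail = FALSE, pbinom gives the
# strictly-greater-than probability P(X > q), so pass q = 6 to get P(X >= 7)
pbinom(6, size = 8, prob = 0.5, lower.tail = FALSE)
# 0.03515625
```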
The probability of getting 7 or more girls is the probability of 7 plus the probability of 8, which works out to about 0.035, or roughly 4%. You can compute this with pbinom, as in the sketch above. Remember that you pass 6, because with lower.tail = FALSE, pbinom computes the strictly-greater-than probability: put in 6 and it starts counting at 7, but put in 7 and it would start counting at 8.

Let's do a Poisson example. Suppose a hospital had 10 infections per 100 person-days at risk during the last monitoring period, a rate of 10/100 = 0.1, and assume that an infection rate of 0.05 per person-day at risk is an important benchmark. Given the model, could the observed rate being larger than 0.05 be attributed to chance? That's what we want to test. The null hypothesis is H0: lambda = 0.05, and if lambda is 0.05 per person-day, then the expected count when monitoring for 100 person-days is lambda times 100, which is 5. The alternative is Ha: lambda > 0.05, because we're interested in whether our infection rate is higher than the benchmark. So we compute the Poisson probability of getting more than 9 events when the mean is 5. Again, it's more than 9 because the upper-tail calculation is strictly greater than, so this counts 10, 11, 12, and so on. That gives a probability of about 3%. For a one-sided test, we would reject at the 5% level; if it were a two-sided test, you'd double it, doubling the smaller of the two one-sided P-values.

So that gives you three different settings in which you can calculate P-values. We went over a normal P-value, which is pretty easy, a binomial P-value, and a Poisson P-value. In each case we did the same thing: we specified our hypotheses, then calculated the probability of getting a test statistic as or more extreme than the one actually observed, with that probability calculated under the null hypothesis. That quantity is the P-value. You reject if your P-value is smaller than your alpha level, and you fail to reject if it's larger. And now that you know what a P-value is, you can go read some of those references and see all the fighting about it.
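To check the Poisson numbers above, here's a minimal sketch in R using ppois (the source describes the upper-tail calculation in prose; the two-sided doubling is shown for illustration):

```r
# Observed 10 infections over 100 person-days; the benchmark rate is 0.05 per
# person-day, so the expected count under H0 is 0.05 * 100 = 5
# One-sided P-value: P(X >= 10) for X ~ Poisson(5); with lower.tail = FALSE,
# ppois gives the strictly-greater-than probability, so pass q = 9
ppois(9, lambda = 5, lower.tail = FALSE)
# 0.03182806

# Two-sided P-value: double the smaller of the two one-sided P-values
2 * ppois(9, lambda = 5, lower.tail = FALSE)
# 0.06365612
```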