Learn fundamental concepts in data analysis and statistical inference, focusing on one and two independent samples.


From the course by Johns Hopkins University

Mathematical Biostatistics Boot Camp 2

34 ratings


From the lesson

Discrete Data Settings

In this module, we'll discuss testing in discrete data settings. This includes the famous Fisher's exact test, as well as the many forms of tests for contingency table data. You'll learn the famous observed minus expected squared over the expected formula, that is broadly applicable.

- Brian Caffo, PhD, Professor, Biostatistics

Bloomberg School of Public Health

Okay, so let's go through our more mathematical development, where we're assuming a model. Before, when we were talking about it as a randomization process, we were essentially conditioning on the data. We said: you have so many treated, you have so many control; you have so many tumors and so many non-tumors; and we're simply redoing the randomization process on the computer under the hypothesis that the randomization is irrelevant. Right? That whether you received the treatment or the control was irrelevant. That's one way to think about Fisher's exact test.

Now, we're going to talk about a different way. So let's let X be the number of tumors for the treated and Y be the number of tumors for the control, and our null hypothesis is going to be H0: p1 = p2 = p, a common proportion. We assume that X is binomial with whatever its sample size was, n1, and success probability p, and Y is binomial with whatever its sample size was, n2, and success probability p, under the null hypothesis. Under the alternative, the two probabilities would have to be different.

By the way, if this is true, if both X and Y are sums of IID Bernoullis, then X plus Y is just the sum of more Bernoullis, n1 plus n2 of them, all with a common success probability p. And so it's an interesting and fairly obvious fact that if you add two binomials with a common success probability, the sum of the two binomials is also binomial, with a total number of trials equal to n1 plus n2 and the same probability. This is clear because, if X is comprised of a sum of n1 Bernoullis with probability p, and Y is comprised of a sum of n2 Bernoullis with probability p, then X plus Y is simply the sum of n1 plus n2 IID Bernoullis with probability p; hence it's binomial with n1 plus n2 trials and probability p.
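This additivity is easy to check numerically. Here's a small sketch (not from the lecture) using scipy, with illustrative values n1 = 6, n2 = 4, p = 0.3: convolving the two binomial pmfs reproduces the Binomial(n1 + n2, p) pmf.

```python
from scipy.stats import binom

n1, n2, p = 6, 4, 0.3  # illustrative sample sizes and common success probability
z = 5                  # a value of the sum X + Y to check

# P(X + Y = z) by convolving the two binomial pmfs over all ways x + y = z
conv = sum(binom.pmf(x, n1, p) * binom.pmf(z - x, n2, p) for x in range(z + 1))

# The same probability read directly off the Binomial(n1 + n2, p) pmf
direct = binom.pmf(z, n1 + n2, p)
```

The two numbers agree to machine precision, and the same holds for any z from 0 to n1 + n2.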

So now, the way we've characterized the problem, we have two numbers, X and Y, that are random. In our two-by-two table there are no other free numbers. Right? If we know X and we know n1, then we know the number of non-tumors for the treated group. If we know Y and we know n2, then we know the number of non-tumors for the control group. So in that two-by-two table we know both margins, n1 and n2, and if we know X and Y, which make up the first column of numbers, then we know the second column of numbers. So we only have two free numbers among the four numbers in our two-by-two table.

But we still have one parameter that we don't know, even under the null hypothesis. The null hypothesis says H0: p1 = p2 = p. Okay? So what if we were to try and figure out a strategy to get rid of that parameter, to find a distribution that doesn't depend on it?

And it turns out that the probability of one of the data points given the sum does exactly that. It doesn't matter which one; we just pick the first data point, and you would get the same procedure if you picked the second. The probability of X given X plus Y equals z, it turns out, follows the hypergeometric probability mass function, and I give the hypergeometric probability mass function right there.

Now, what's interesting about this is that this hypergeometric mass function is exactly the probability distribution from a couple of slides earlier, where we have so many bins and so many balls labelled t and c, for treated and control, and we randomly allocate the treated and control balls to the bins. The first bin is able to hold six balls, and the latter bin is able to hold only four balls, and I need to allocate ten balls, five treated and five control, randomly to those bins. That's the hypergeometric.
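That ball-and-bin allocation is easy to simulate. Here's an illustrative sketch (not from the lecture) using numpy and scipy with the lecture's numbers: 5 treated and 5 control balls, a first bin of size 6, comparing the simulated count of treated balls in the first bin against the hypergeometric pmf.

```python
import numpy as np
from scipy.stats import hypergeom

rng = np.random.default_rng(0)
labels = np.array([1] * 5 + [0] * 5)  # 5 treated balls (1) and 5 control balls (0)

# Randomly allocate the 10 balls; the first bin holds 6, the second holds 4
sims = 50_000
counts = np.zeros(7)
for _ in range(sims):
    rng.shuffle(labels)
    k = labels[:6].sum()      # treated balls landing in the first bin
    counts[k] += 1
empirical = counts / sims

# Hypergeometric: population of 10, 5 treated "successes", 6 draws
support = np.arange(7)
theoretical = hypergeom.pmf(support, 10, 5, 6)
```

The empirical frequencies line up with the hypergeometric pmf to within simulation error.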

It's the other way to think about this idea: it is the distribution of a two-by-two table where you're permuting the t's and the c's, leaving the margins fixed in the way that we described earlier. And of course that's identical to permuting along the other margin while leaving the first one fixed. So again, if you have the same data and you assume the row margins are the margins that include the randomized treatment, or you assume the column margins are the margins that include the randomized treatment, you wind up with the same procedure, provided you have the same data set. So that's interesting.

Perhaps comforting, perhaps discomforting. Either way, remember that before we had two random numbers, X and Y, and two success probabilities; in this case a "success" is a tumor, so I'd hardly call that successful, but let's say two success probabilities, using the convention of calling a binomial event a success regardless of how successful it is. We had the two success probabilities at the onset; once we know the value of the sum, we only have one left. So before, when we assumed the row margins were fixed, we had two free cells; now we only have one free cell, given that the row margins are fixed and the sum is fixed.

And this is exactly what Fisher's exact test really tells you: as you vary that upper left-hand cell, or any one cell, holding both margins fixed, you get the remaining three elements. You can obviously put in a value for the upper left-hand cell, and you can obviously go through the exercise of finding the other three cells very easily, given the margins.
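As a concrete sketch (the counts here are illustrative, not the lecture's data): with margins n1 = 6 treated, n2 = 4 control, and z = 5 total tumors fixed, choosing the upper left-hand cell x pins down the whole table, and scipy's fisher_exact then carries out the test on it.

```python
from scipy.stats import fisher_exact

# Fixed margins (illustrative): n1 treated, n2 control, z total tumors
n1, n2, z = 6, 4, 5

# Choosing the upper left-hand cell x determines the other three cells
x = 4
table = [[x, n1 - x],
         [z - x, n2 - (z - x)]]  # rows: treated, control; cols: tumor, no tumor

odds_ratio, p_value = fisher_exact(table)
```

Varying x over its allowed range just slides probability mass around the same hypergeometric distribution; the margins never change.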

But more than that, we also have this distribution on that cell: the hypergeometric distribution, which arises if we take the distribution of the upper left-hand cell and condition on the sum. Note that this distribution does not contain p. It got rid of it. And there's a mathematical reason for that: the so-called conditioning on a sufficient statistic. So, when you condition on the sufficient statistic for p, you get rid of it.

In this class, we won't go over that. We won't go over the mechanics of why, or the logic of how Fisher came to condition on X plus Y, or how that mathematical development works. Suffice it to say, for the needs of this class, that when you condition on the sum you do get rid of that probability. And there is a very general mathematical principle this relies on: the fact that X plus Y is sufficient for the parameter p.

Okay, so let's derive this conditional distribution. We know the probability that X equals x; it's just the binomial probability, here. We know the probability that Y equals z minus x; this will make the derivation a little bit easier, but we can plug in anything here, provided z minus x is an integer between 0 and n2. Then it's this binomial probability right here. And then we said already that X plus Y is binomial, so the probability that X plus Y equals z is this probability right here.

Okay, now putting everything together: the probability that X equals x and X plus Y equals z, over the probability that X plus Y equals z. That's exactly this conditional probability, just using the rules of conditional probability that we know quite well from Mathematical Biostatistics Boot Camp 1. And then, if X equals x and X plus Y equals z, that's the same thing as saying X equals x and Y equals z minus x, of course. And then X and Y are independent, so we can factor those two probabilities. And then, from the previous slide, we have all three of these expressions; plug them in, and you'll find that you wind up with the hypergeometric distribution that we described before.
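This derivation can also be checked numerically. Here's a sketch (illustrative values n1 = 6, n2 = 4, z = 5, not the lecture's data) showing that the conditional pmf built from the three binomial pieces matches the hypergeometric pmf, and, notably, contains no p: any value plugged in gives the same answer.

```python
from scipy.stats import binom, hypergeom

n1, n2, z = 6, 4, 5
p = 0.3  # any value of p yields the same conditional pmf

# x must satisfy 0 <= x <= n1 and 0 <= z - x <= n2
support = range(max(0, z - n2), min(n1, z) + 1)

# P(X = x | X + Y = z) = P(X = x) * P(Y = z - x) / P(X + Y = z), by independence
conditional = [binom.pmf(x, n1, p) * binom.pmf(z - x, n2, p)
               / binom.pmf(z, n1 + n2, p)
               for x in support]

# Hypergeometric pmf: population n1 + n2, n1 "successes", z draws
hyper = [hypergeom.pmf(x, n1 + n2, n1, z) for x in support]
```

Term by term, the binomial coefficients survive and every power of p and (1 - p) cancels, which is the algebraic face of conditioning on the sufficient statistic X + Y.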

Â Coursera provides universal access to the worldâ€™s best education, partnering with top universities and organizations to offer courses online.