Learn fundamental concepts in data analysis and statistical inference, focusing on one and two independent samples.


From the course by Johns Hopkins University

Mathematical Biostatistics Boot Camp 2



From the lesson

Two Binomials

In this module we'll be covering some methods for looking at two binomials. This includes the odds ratio, relative risk, and risk difference. We'll be discussing mostly confidence intervals in this module and will develop the delta method, the tool used to create these confidence intervals. After you've watched the videos and tried the homework, take a crack at the quiz!

- Brian Caffo, PhD, Professor, Biostatistics

Bloomberg School of Public Health


Okay, so now let's actually get to comparing two proportions rather than simply looking at one proportion. So we want to test whether the rate of side effects is the same in the two groups or different. Imagine A is some new formulation and B is the standard, and you want to test whether or not the new formulation has more side effects than the standard.

So in general, for two-by-two tables, I'm going to use the following notation. Group 1 has X successes and n1 minus X failures, for a total of n1; group 2 has Y successes and n2 minus Y failures, for a total of n2. If I need to, I'll refer to the four cells by indexing them with their matrix coordinates: n11, n12, n21, n22. I'll call n1 the right top margin and n2 the right bottom margin, and in the case that I'm referring to both sets of margins, I'll say n1 plus and n2 plus for the row totals and n plus 1 and n plus 2 for the column totals, the plus notation meaning summing over that index.

Â Okay.

So now, let's do a score-test-type test of the hypothesis that p1 equals p2. Our null hypothesis is H0: p1 equals p2, versus the alternatives not equal to, greater than, or less than.

The score test statistic for this null hypothesis has numerator p1 hat minus p2 hat: the sample proportion in group 1 minus the sample proportion in group 2.

And if we were assuming that this difference was a constant other than 0, we would put that hypothesized null difference in the numerator. But typically the null hypothesis is that they're equal, so there's a minus 0 here, the hypothesized null value of the difference, and we can just omit it.

And then in the denominator: under the hypothesis that p1 equals p2, the variance of p1 hat minus p2 hat is p times 1 minus p, quantity times 1 over n1 plus 1 over n2, where p is the common proportion p1 equals p2.

So, under the null hypothesis, we need an estimated version of that if we're going to actually get a number we can compare to a normal quantile. We need a value of p to plug in there, so we plug in p hat: if, under the null hypothesis, the success probabilities are identical, then group A is a bunch of IID Bernoulli draws and group B is a bunch of IID Bernoulli draws with the same success probability, so we really just have n1 plus n2 Bernoulli draws, and our estimate of the proportion is simply the total number of events over the total sample size, so that p hat is X plus Y over n1 plus n2.

And that is exactly the maximum likelihood estimate for p, the common proportion, under the null hypothesis that the two proportions are equal.

So we plug that into the denominator, p hat times 1 minus p hat, and then we get our test statistic, which is just the estimate minus the hypothesized value divided by the standard error. This statistic is standard normally distributed under the null hypothesis for large n1 and n2.
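As a minimal sketch in Python (rather than the R used in the course), the statistic just described might be computed like this; the counts here are illustrative:

```python
from math import sqrt

def score_test(x, n1, y, n2):
    """Score test statistic for H0: p1 = p2 with two independent binomials."""
    p1_hat = x / n1                          # sample proportion, group 1
    p2_hat = y / n2                          # sample proportion, group 2
    p_hat = (x + y) / (n1 + n2)              # pooled proportion under the null
    se = sqrt(p_hat * (1 - p_hat) * (1 / n1 + 1 / n2))
    return (p1_hat - p2_hat) / se

# 30 of 100 events in group 1 versus 20 of 100 in group 2
z = score_test(30, 100, 20, 100)
```

The key point is the pooled p hat in the denominator: both groups contribute to the single variance estimate, because the null says they share one proportion.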

So if we want to invert this to create a confidence interval, well, we don't have a closed form like we do in the score test for a single proportion.

The Wald interval is centered at p1 hat minus p2 hat, and it doesn't utilize the fact that, under the null hypothesis, the proportions are equal. So you just have a separate p1 hat times 1 minus p1 hat over n1, plus p2 hat times 1 minus p2 hat over n2, in the variance, and you square root the whole thing for the standard error. And you can of course invert that to get a confidence interval: p1 hat minus p2 hat, plus or minus Z 1 minus alpha over 2 times the standard error.
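A minimal sketch of this Wald interval in Python, with illustrative counts:

```python
from math import sqrt

def wald_interval(x, n1, y, n2, z_quantile=1.96):
    """Wald interval for p1 - p2: unpooled variance terms, since it does
    not assume the two proportions are equal."""
    p1_hat, p2_hat = x / n1, y / n2
    se = sqrt(p1_hat * (1 - p1_hat) / n1 + p2_hat * (1 - p2_hat) / n2)
    diff = p1_hat - p2_hat
    return diff - z_quantile * se, diff + z_quantile * se

# 30 of 100 events versus 20 of 100
lo, hi = wald_interval(30, 100, 20, 100)
```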

By the way, do you see why you can't invert the score test the same way? The reason is that its denominator was explicitly calculated under the specific null hypothesis that p1 equals p2. In the Wald statistic, if we were to have a different null, where p1 minus p2 wasn't equal to 0 but to some other value, we would add that into the numerator and the denominator wouldn't change. Whereas in the score test there is no immediate way to adapt the denominator, and that's why you have to use some programming to get the confidence interval from that one.

But this one, the Wald test, we can invert very easily, and we get an interval that should be fairly familiar to us: p1 hat minus p2 hat, plus or minus the normal quantile times the standard error. That's the so-called Wald interval; it's very easy to calculate, and it's taught in nearly every statistics textbook.

This Wald interval and the Wald test perform relatively poorly compared to the score interval and test, but the decrease in performance is less severe than in the one-sample case. In the one-sample case there is a huge decrease in performance; here, the subtraction of the two sample proportions tends to make the statistic more normally distributed, so it helps a little bit, and the drop in performance of the Wald interval is nowhere near as bad as it is in the single-proportion case.

So, for testing, I would just say always use the score test; that's easy. For intervals, inverting the score test is hard and it's not in standard software, so the simple fix that we proposed in an American Statistician paper is to add one success and one failure in each group. So calculate p1 tilde, which is x plus 1 over n1 plus 2; n1 tilde, which is n1 plus 2; p2 tilde, which is y plus 1 over n2 plus 2; and n2 tilde, which is n2 plus 2.

This is exactly taking the two-by-two table that has the successes and failures for each group and adding one to every cell. And then just treat that as if it's the data and construct a Wald interval.
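A sketch of this adjusted (Agresti-Caffo) interval in Python, with illustrative counts of 11 of 20 versus 5 of 20:

```python
from math import sqrt

def agresti_caffo_interval(x, n1, y, n2, z_quantile=1.96):
    """Add one success and one failure to each group (one to every cell
    of the 2x2 table), then build the ordinary Wald interval on the
    adjusted counts."""
    p1_t = (x + 1) / (n1 + 2)                # p1 tilde
    p2_t = (y + 1) / (n2 + 2)                # p2 tilde
    n1_t, n2_t = n1 + 2, n2 + 2              # n tildes
    se = sqrt(p1_t * (1 - p1_t) / n1_t + p2_t * (1 - p2_t) / n2_t)
    diff = p1_t - p2_t
    return diff - z_quantile * se, diff + z_quantile * se

lo, hi = agresti_caffo_interval(11, 20, 5, 20)
```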

And this interval doesn't approximate the score interval the way the Agresti-Coull interval does in the one-sample case, but it does perform better than the Wald interval, and I'll have a slide in a second to show you this.

Okay, so let's just perform the score test: test whether or not the proportion of side effects is the same for the two drugs. p_A hat is 0.55, and p_B hat is 5 over 20, which is 0.25. p hat, the common proportion, is 16 over 40 (11 plus 5, over 20 plus 20), which is 0.4. So our test statistic is 0.55 minus 0.25, over the square root of 0.4 times 0.6 times 2 over 20. You can plug in the formula; you get 1.61. And then we fail to reject H0 at the 5% level; in other words, you compare it with 1.96 for a two-sided test. For the two-sided p-value, calculate the probability that the absolute value of a standard normal is bigger than 1.61, which is the probability that a normal is bigger than 1.61 plus the probability that it is below negative 1.61; that's about 0.055 in either tail. So we fail to reject; there's our p-value.
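That two-sided p-value calculation can be sketched with the standard library's error function; for the transcript's statistic of 1.61 it gives about 0.054 per tail, roughly 0.11 overall:

```python
from math import erf, sqrt

def two_sided_p(z):
    """Two-sided p-value P(|Z| > |z|) for a standard normal Z."""
    upper_tail = 0.5 * (1 - erf(abs(z) / sqrt(2)))   # P(Z > |z|)
    return 2 * upper_tail                            # both tails

p = two_sided_p(1.61)
```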

And so hopefully everyone can do this calculation very easily at this point in the class.

Â Okay.

So, here is the same kind of picture as before, where the previous picture showed the true value of the proportion against the coverage rate of the interval for a single proportion. Now there are two proportions, p1 and p2. So here, against the true values of p1 and p2, is the coverage probability; on the left I have the Wald interval, and on the right I have this Agresti-Caffo interval, where you add one success and one failure to each group, one to every cell in the two-by-two table. And you can see that we get these big dips down toward 0 in coverage for the Wald interval: if either of the proportions is very low or very high, you get very bad performance, well below 0.95. This shrinkage toward 0.5 for each of the proportions improves things dramatically, and it's a very easy thing to do.

And then here's another plot of the exact same thing, just some cross-sections through it of different sorts. In the top panels I have ones where p1 minus p2 equals particular values, and in the bottom ones I have ones where the ratios of p1 and p2 are fixed. In other words, slices, or really curves, through that two-dimensional picture; it again shows, in a nice easy 2D plot, the relative performance of the Agresti-Caffo interval versus the Wald interval.

Okay, let's briefly go over some likelihood plots and Bayesian analysis of two binomial proportions. Likelihood analysis requires the use of profile likelihoods or some other technique to reduce the dimension down if you want to do a 1D likelihood plot. We can actually show you, later on, a way to use the so-called non-central hypergeometric distribution to get an exact likelihood plot for the odds ratio. But for the difference in the proportions it's a little harder; probably doing a profile likelihood would be the way to go. Since that's a little hard, let's leave that discussion for elsewhere.

So, instead, let's talk about being a Bayesian. For a single binomial proportion, we talked about putting a beta prior on the probability to get a posterior. So imagine putting an independent Beta(alpha1, beta1) prior and a Beta(alpha2, beta2) prior on p1 and p2, respectively. Then, for the posterior, remember how the calculation goes: likelihood times prior is proportional to the posterior. Here the likelihood is p1 to the x, times 1 minus p1 to the n1 minus x, times p2 to the y, times 1 minus p2 to the n2 minus y; and the beta prior is p1 to the alpha1 minus 1, times 1 minus p1 to the beta1 minus 1, times p2 to the alpha2 minus 1, times 1 minus p2 to the beta2 minus 1.

So if we multiply all those together, we get this formula right here, which exactly shows that if we have two independent binomials and we multiply them by two independent betas, we wind up with a pair of independent beta posteriors: one beta posterior for p1 and one for p2. Now the alpha parameter is no longer alpha1 but alpha1 plus x for p1, and the beta parameter for p1 is n1 minus x plus beta1. The alpha parameter for p2 is y plus alpha2, and the beta parameter for p2 is n2 minus y plus beta2.

So it's basically like alpha1 and beta1 are the alpha and beta parameters for p1 a priori; after you factor in the data, you just add the successes to alpha and the failures to beta, and the same for p2, and then you get the beta posteriors.

And the easiest way to explore this posterior is with Monte Carlo simulation, and I'll show that right here. It's very simple. Here I define my x, my n1, my alpha1, my beta1, my y, my n2, my alpha2 and beta2. And here I just used a uniform prior: a beta with a 1 and a 1 is just uniform, so I put a uniform on both p1 and p2. Then I'm going to sample from the posterior: I simulate a thousand pairs, where for p1 the beta parameters are x plus alpha1 and n1 minus x plus beta1, and for p2 they are y plus alpha2 and n2 minus y plus beta2.
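The R code described here isn't reproduced in the transcript, so here is a sketch of the same Monte Carlo idea in Python with numpy; the counts (11 of 20 versus 5 of 20) and the seed are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# illustrative data: 11 of 20 side effects on drug A, 5 of 20 on drug B
x, n1 = 11, 20
y, n2 = 5, 20
alpha1, beta1 = 1, 1        # Beta(1, 1), i.e. uniform, prior on p1
alpha2, beta2 = 1, 1        # uniform prior on p2

n_sim = 1000
# posterior draws: Beta(successes + alpha, failures + beta) for each group
p1 = rng.beta(x + alpha1, n1 - x + beta1, size=n_sim)
p2 = rng.beta(y + alpha2, n2 - y + beta2, size=n_sim)

rd = p1 - p2                                  # risk differences, component by component
credible = np.quantile(rd, [0.025, 0.975])    # equi-tailed 95% credible interval
posterior_mean = rd.mean()
posterior_median = float(np.median(rd))
```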

So, imagine I want to look at the risk difference, here the difference in the risk of side effects; p2 minus p1 is the parameter I want. Here p1 is a bunch of posterior p1 simulations and p2 is a bunch of posterior p2 simulations. If I subtract them, R does it component by component, so I get a collection of a thousand risk differences. I can plot the density of the risk differences in the next line. I can calculate the 2.5th and 97.5th percentiles of these simulations to get a Bayesian credible interval for the risk difference, and I can calculate the posterior mean and the posterior median.

In the next slide you see exactly this. I have some R code called twoBinomPost, which is on the GitHub repository and will also be put on the course website. It puts out the mean, the median, and the mode, and equi-tailed credible intervals. What I mean by equi-tailed credible intervals is 2.5% in the lower tail and 2.5% in the upper tail. For the one-sample binomial case we discussed that maybe it's better not to do equi-tailed credible intervals, but in this case it's easy enough to do it that way, so why don't we just do it that way. And, you know, go through the twoBinomPost code; it's very simple to do this.

And here what I'm showing is the posterior for the risk difference. This is what's nice about Bayesian intervals: we're simulating p1 and p2 a posteriori, so we're getting draws from the joint posterior distribution of p1 and p2, and any function of p1 and p2 that you then want to investigate becomes very easy to work with. And so here I took the risk difference and plotted the density.

I put some blue lines where the credible interval occurs, and the red line is exactly at 0. So you can see that 0 does lie within the credible interval, which can also be seen from the posterior density, which shows what points are better supported by the data for the risk difference. And even though 0 is in our credible interval, it's not a terribly well-supported value a posteriori.

Well, that's the end of the lecture. That was a whirlwind tour of risk-difference-style intervals for two binomial proportions. I'm hoping, at this point, that a lot of these topics in the class will start to come very easily to you, because we're just kind of using the same techniques over and over again. And I look forward to seeing you for the next lecture.
