So this next set of lectures will continue the momentum we picked up for comparing means between two populations and extend that idea to comparing proportions, incidence rates, and time-to-event curves between two populations.

What I want you to focus on in this next set is not necessarily the mechanics of each test I show you, because the computer will take care of that detail; we really have to focus on interpreting the results, which is the harder part in the context of the science.

But I want you to really focus on the principles, and you'll see that what we're doing is conceptually the same whether we do a two-sample z-test for comparing proportions; a chi-squared test for comparing proportions, which is equivalent to the two-sample z-test; a Fisher's exact test for comparing proportions with small samples; a two-sample z-test for comparing incidence rates; or a log-rank test for comparing survival curves.

Yes, the names are different and the mechanics are slightly different, but conceptually these tests all follow the same mantra that we set up when we were comparing means between two populations.
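One way to see the equivalence mentioned above between the chi-squared test and the two-sample z-test for proportions is numerically: on the same 2x2 table, the Pearson chi-squared statistic is exactly the square of the pooled z statistic, and the two p-values agree. Here is a minimal sketch using made-up counts (40/200 events versus 60/200 events) and only the standard library:

```python
from math import erfc, sqrt

# Hypothetical counts for illustration: 40/200 events vs. 60/200 events
e1, n1 = 40, 200
e2, n2 = 60, 200

# Two-sample z-test for proportions, using the pooled standard error
p1, p2 = e1 / n1, e2 / n2
p_pool = (e1 + e2) / (n1 + n2)
se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
z = (p1 - p2) / se
p_z = erfc(abs(z) / sqrt(2))       # two-sided normal p-value

# Pearson chi-squared statistic from the same data laid out as a 2x2 table
observed = [[e1, n1 - e1], [e2, n2 - e2]]
row = [n1, n2]
col = [e1 + e2, (n1 - e1) + (n2 - e2)]
total = n1 + n2
chi2 = sum(
    (observed[i][j] - row[i] * col[j] / total) ** 2 / (row[i] * col[j] / total)
    for i in range(2)
    for j in range(2)
)
p_chi2 = erfc(sqrt(chi2 / 2))      # upper-tail chi-squared p-value with 1 df

print(round(z, 3), round(chi2, 3))  # chi2 equals z squared
```

In practice, statistical software reports both tests directly; the by-hand tallies here are just to make the algebraic identity visible.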

We establish a null and an alternative hypothesis. We then assume the null to be true, namely that there is no difference in whatever our summary measure is at the population level between the two populations we're comparing, or in other words, that their difference is zero.

Then we compare our study results to that expected difference of zero, standardizing by how variable the results from a study of our size could be from study to study just by random chance.

So we measure the standardized distance between what we observed and what we would expect under the null hypothesis, and then ascertain whether that distance is likely to occur when the null is true, or whether it is extreme compared to what else could have happened when the null is true. We do that, again, through a p-value.
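As a concrete sketch of that mantra, here it is applied to comparing two incidence rates. All numbers are made up for illustration, and the standard error used is the usual large-sample one for a rate difference (events divided by person-time squared, summed across groups):

```python
from math import erfc, sqrt

# Hypothetical data: events and person-years of follow-up in each group
events1, ptime1 = 30, 1000.0
events2, ptime2 = 50, 1000.0

# Steps 1-2: set up hypotheses and assume the null is true, i.e. the
# true incidence-rate difference between the populations is zero
rate1 = events1 / ptime1
rate2 = events2 / ptime2
observed_diff = rate1 - rate2
expected_diff = 0.0                # what we expect under the null

# Step 3: standardize the observed-minus-expected distance by how much
# results could vary from study to study (large-sample SE of the difference)
se = sqrt(events1 / ptime1 ** 2 + events2 / ptime2 ** 2)
z = (observed_diff - expected_diff) / se

# Step 4: ask how likely a distance at least this extreme would be when
# the null is true, via a two-sided p-value from the normal distribution
p_value = erfc(abs(z) / sqrt(2))

print(round(z, 2), round(p_value, 3))
```

Swapping in a different summary measure (a proportion, a mean, a survival curve) changes the standard-error formula and the reference distribution, but not these four steps.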

So conceptually, everything we're doing is the same; it's just that the mechanics will change.