So in this next set of lectures, we'll continue our journey, taking what we've learned from the sample to make statements about the larger, unknown population truths.

So with confidence intervals, we started with our estimate and added and subtracted a fixed number of standard errors, using the properties of the central limit theorem to get an interval that likely contains the unknown population truth.
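To make that construction concrete, here's a minimal sketch of a 95% interval for a mean, assuming the large-sample normal approximation the central limit theorem justifies. The function name and the sample values are hypothetical, just for illustration:

```python
import math
import statistics

def mean_ci_95(sample):
    """95% confidence interval for a population mean:
    estimate +/- 1.96 estimated standard errors
    (large-sample normal approximation)."""
    n = len(sample)
    xbar = statistics.mean(sample)
    se = statistics.stdev(sample) / math.sqrt(n)  # estimated standard error
    return (xbar - 1.96 * se, xbar + 1.96 * se)

# Hypothetical sample of measurements on one group
sample = [118, 125, 130, 121, 127, 119, 124, 132, 122, 126]
lo, hi = mean_ci_95(sample)
```

The multiplier 1.96 is what gives roughly 95% coverage under the normal approximation; a different fixed number of standard errors would give a different confidence level.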

But when we're comparing two populations through two samples, there's another approach we can take. We can instead start with two possibilities for the truth, and then choose between them based on the results from the same data we would use to create a confidence interval. This approach is called statistical hypothesis testing.

And there are many hypothesis tests, depending on the outcomes we're comparing, whether they be means, proportions, incidence rates, etc. But they all take the same approach; it's only the mechanics that change.

One of the best-known numbers in statistics is the p-value, and that's pretty much the end result of a hypothesis test.
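As a small illustration of where that end result comes from, here's a sketch of turning a test statistic into a two-sided p-value, assuming the statistic is approximately standard normal under the null (a common large-sample case; the function name is hypothetical):

```python
import math

def two_sided_p_from_z(z):
    """Two-sided p-value under a standard normal null:
    P(|Z| >= |z|), using the standard normal CDF
    written via the error function."""
    cdf = 0.5 * (1 + math.erf(abs(z) / math.sqrt(2)))
    return 2 * (1 - cdf)

# A z statistic near 1.96 gives a p-value near 0.05
p = two_sided_p_from_z(1.96)
```

Notice the p-value is a single number summarizing the evidence against the null, which is part of why it carries less information than an interval.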

And we'll talk about how hypothesis tests and confidence intervals are complementary. But really, a confidence interval carries a lot more information about what you're trying to understand than the result of a hypothesis test does.

So while p-values have a place, and we certainly want to recognize what they can tell us, we'll be constantly checking in throughout these next two lecture sets on what they can't tell us as well.

So we'll begin our journey in this lecture by comparing means between two populations, in both paired and unpaired study designs.
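To preview the distinction, here's a minimal sketch of the two test statistics, assuming hypothetical before/after measurements; the Welch (unequal-variance) form is one common choice for the unpaired case, not necessarily the one this course will use:

```python
import math
import statistics

def unpaired_t(x, y):
    """Two-sample t statistic for an unpaired design
    (Welch form: variances estimated separately per group)."""
    se = math.sqrt(statistics.variance(x) / len(x) +
                   statistics.variance(y) / len(y))
    return (statistics.mean(x) - statistics.mean(y)) / se

def paired_t(x, y):
    """Paired t statistic: a one-sample test on the
    within-pair differences."""
    d = [a - b for a, b in zip(x, y)]
    return statistics.mean(d) / (statistics.stdev(d) / math.sqrt(len(d)))

# Hypothetical paired measurements on the same four subjects
before = [5, 6, 7, 8]
after = [6, 8, 9, 11]
```

The key design point: pairing turns a two-sample problem into a one-sample problem on the differences, which removes between-subject variability from the standard error.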

And then we'll debrief a bit on the p-value for the first time. In the following set of lectures, we'll do more hypothesis testing, comparing proportions and incidence rates between two populations, and again debrief on the p-value.