A conceptual and interpretive public health approach to some of the most commonly used methods from basic statistics.


From the course by Johns Hopkins University

Statistical Reasoning for Public Health 1: Estimation, Inference, & Interpretation

238 ratings


From the lesson

Module 4A: Making Group Comparisons: The Hypothesis Testing Approach

Module 4A presents a complementary approach to confidence intervals when comparing a summary measure between two populations via two samples: statistical hypothesis testing. This module covers some of the most commonly used statistical tests, including the t-test for means, the chi-squared test for proportions, and the log-rank test for time-to-event outcomes.

- John McGready, PhD, MS, Associate Scientist, Biostatistics

Bloomberg School of Public Health

So in the last section we handled uncertainty in our estimates by creating something called confidence intervals. We did this for single summary measures on individual populations, but we also moved to comparing groups, that is, populations, via the sample measurements, and putting uncertainty bounds on that comparison. We looked at confidence intervals for measures of group comparisons, including mean differences, differences in proportions, risk ratios, and incidence rate ratios, for example.

So one of the things we could do with that approach is ascertain how much of a difference there was between the populations we were comparing. After accounting for uncertainty, we could get upper and lower bounds on this difference. And we could also see whether the possibilities for the difference included a null value, whether it be zero for something that is a difference, or one for things that are ratios. And if our results did not include the null value, we could rule out the idea of no association between the groupings and the outcome at the population level; we could conclude there was a difference.

Well, here in this lecture section we're going to take a slightly different approach using the same information. This approach is also widely used in scientific research: what's called a hypothesis testing approach. Instead of starting with our study results and building an interval for the true difference between the populations we're comparing, we're going to start with two competing possibilities for the underlying true difference between the populations we're looking at. One is called the null hypothesis: for the populations we're looking at, via our samples, we're going to pretend there's no difference in our summary measure, whether it be a mean, a proportion, or an incidence rate. The other competing hypothesis is the very broad and generic: there is a difference.

And what we're going to do is put these two competing hypotheses against each other. We start by assuming the first, that there is no real difference in the measure in the populations we're comparing, and then look at how likely the results we got in our study would be if the samples we had came from populations that were the same, or identical, on the measure we're comparing. What we're going to try to figure out is whether the results we got in our study were likely, relatively speaking, or unlikely, if the underlying populations we're comparing were the same on the outcome measure.

And so we're going to get into something called hypothesis testing, where the end result will be something called the p-value, which measures how likely our sample results, or something even less likely, would be if our data samples came from populations with the same underlying distributions. We're going to talk a lot about what the p-value is, how we interpret it, and also talk about what we cannot learn from the p-value, because that is equally important.
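As a rough sketch of where a p-value comes from, the snippet below computes a large-sample two-sided p-value for a difference in means by hand: assume the null (no difference), measure how many standard errors the observed difference sits from zero, and ask how probable a result that extreme would be. The summary statistics are made-up numbers for illustration, not data from the course.

```python
import math
from statistics import NormalDist

# Hypothetical summary statistics for two samples (illustrative numbers only)
mean1, sd1, n1 = 127.4, 18.2, 100
mean2, sd2, n2 = 121.1, 16.9, 100

# Under the null hypothesis, the true difference in means is 0.
# How far is the observed difference from 0, in standard errors?
se = math.sqrt(sd1**2 / n1 + sd2**2 / n2)
z = (mean1 - mean2 - 0) / se

# Two-sided p-value: probability of a result this extreme, or more extreme,
# if the samples truly came from populations with the same mean
p_value = 2 * (1 - NormalDist().cdf(abs(z)))
print(f"z = {z:.2f}, p = {p_value:.4f}")
```

A small p-value says the observed difference would be unlikely if the populations were truly identical; a large one says the data are consistent with no difference. What it does not measure, as the lecture goes on to discuss, is the probability that the null hypothesis itself is true.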
