Let's take a look at a practice problem.

A 2010 Pew Research Foundation poll indicates that

among 1,099 college graduates, 33% watch The Daily Show,

an American late-night TV show.

The standard error of this estimate is 0.014.

We are asked to estimate the

95% confidence interval for the proportion of

college graduates who watch The Daily Show.

Let's start by parsing through some of the information we are given.

The 33% who watch The Daily Show among these

observed college graduates is going to be our p hat, 0.33.

P hat stands for sample proportion, just like x bar stands for sample mean.

And we are also told that

the standard error of this estimate is 0.014,

so let's take a note of that as well.

By now, we know the generic formula for a confidence interval for any estimator.

It's always a point estimate, plus or minus a margin of error.

In this case, our point estimate is p hat, and then we have plus or

minus a critical value, z star, times our

standard error, which together make up the margin of error.

The p hat is 0.33, plus or minus 1.96

for the critical value, times the standard error that we're given in the problem.

That gives us a margin of error of 0.027, or 2.7%.

Adding that margin of error to, and subtracting it from, our point

estimate, we get a confidence interval that

says we are 95% confident that

between 30.3% and 35.7% of college graduates watch

The Daily Show.
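The calculation above can be sketched in a few lines of Python; the sample proportion and standard error come from the problem, and the standard library's `NormalDist` is used just to show where the 1.96 critical value comes from:

```python
from statistics import NormalDist

p_hat = 0.33  # sample proportion from the poll
se = 0.014    # standard error given in the problem

# critical value z* for a 95% confidence level:
# the quantile that leaves 2.5% in each tail
z_star = NormalDist().inv_cdf(0.975)  # ~1.96

margin = z_star * se                  # margin of error, ~0.027
ci = (p_hat - margin, p_hat + margin)

print(f"95% CI: ({ci[0]:.3f}, {ci[1]:.3f})")  # (0.303, 0.357)
```

Rounded to three decimal places, this reproduces the interval from the problem: 30.3% to 35.7%.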

Just like with confidence intervals, we can apply the

same framework for hypothesis testing to different estimators as well,

as long as the estimator is

unbiased and has a nearly normal sampling distribution.

If that's the case, we can use the z statistic as our test statistic, which we

always calculate as the point estimate minus the null

value, kind of like the observed value minus the mean,

divided by some standard error.

And once again, we're not going to

get into calculating the standard error for

these different point estimators; that's something we're

going to focus on in the following units.