Now, let's take a look at the total.

So note this: things can go in either direction. If I compare the standard error before and after post-stratification, post-stratification actually made things worse here in terms of the standard error.

The mean got bigger, and the coefficient of variation could in principle be smaller, but in this case it's not.

So there's no guarantee that you're going to improve estimates of the mean, or of the total, with post-stratification, although you may.

So let's look at the total, and we can do that with svytotal on enroll again, the same variable.

So here's the answer: I get a total of about 3.4 million, with a standard error of 932 thousand and some.

And if I use the post-stratified version, dclus1p, then what I get is this line right here.

And so you see, the total changed a bit, not tremendously, but the standard error changed quite a lot.

If I compare these two values: before post-stratification, the standard error was 932,000; after post-stratification, it goes to 406,000.

So I cut the standard error by over 50% by post-stratifying, and that's on the estimated total.
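The comparison just described can be sketched in R. This is a sketch, assuming dclus1 is the cluster design object and dclus1p its post-stratified version built earlier in the lecture:

```r
# Assumes the survey package is loaded and dclus1 / dclus1p
# were created earlier (dclus1p via postStratify on dclus1).
library(survey)

svytotal(~enroll, dclus1)   # total about 3.4 million, SE about 932,000
svytotal(~enroll, dclus1p)  # similar total, SE drops to about 406,000

# Standard errors side by side:
SE(svytotal(~enroll, dclus1))
SE(svytotal(~enroll, dclus1p))
```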

Now, I can look at cvs, and the function that will do that is called cv in the survey package.

So here, I just collect together the coefficients of variation for the mean of enrollment, first from the dclus1 object, which is not post-stratified, and then from the post-stratified object.

So you see right here, I go from a cv of about 0.082 to 0.110, so in terms of either the standard error or the cv, I made things worse by post-stratifying here.

If I look at totals here, on the other hand, and compare the post-stratified and the non-post-stratified objects, I go from 0.2737 or so to 0.1103.

In other words, I gained quite a lot in terms of cv and standard error by post-stratifying.
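The cv comparison can be sketched the same way, again assuming dclus1 and dclus1p are the design objects from earlier; the approximate values in the comments are the ones quoted above:

```r
# Coefficients of variation for the mean and the total,
# before and after post-stratification.
library(survey)

cv(svymean(~enroll, dclus1))    # about 0.082
cv(svymean(~enroll, dclus1p))   # about 0.110  (worse for the mean)

cv(svytotal(~enroll, dclus1))   # about 0.274
cv(svytotal(~enroll, dclus1p))  # about 0.110  (much better for the total)
```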

So notice also that, after post-stratification, these two are the same: the cv of the mean and the cv of the total.

Now, why is that?