So we sort the respondents and
non-respondents from low to high based on the estimated response propensity.
And then we divide them into groups.
So in the group of low-propensity units, we put the respondents and
non-respondents that have the lowest estimated propensities.
And on up through the high-propensity groups, each group collects units that
are similar, in the sense of having similar response propensities.
The nice thing about this is we've created this one variable that we can use for
sorting that is kind of an amalgamation of the different covariates.
So this is a nice summary method of doing things.
So you divide the file into groups, as I said.
5 is popular for the number of groups, but it doesn't have to be 5.
If you've got a big sample, you can certainly create more than that.
And more groups will be more homogeneous,
in the sense that the range of propensities within each group is pretty small.
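The sort-and-divide step can be sketched in a few lines. This is a minimal illustration, not any particular package's implementation: it assumes the response propensities have already been estimated (say, from a logistic regression of response status on the covariates) and simply forms quintile groups from them. The data here are simulated just to have something to group.

```python
import numpy as np

rng = np.random.default_rng(42)

n = 1000
# Simulated estimated response propensities; in practice these would come
# from a model such as a logistic regression on the covariates.
propensity = rng.uniform(0.1, 0.9, size=n)
respondent = rng.random(n) < propensity  # simulated response indicators

def propensity_groups(p, n_groups=5):
    """Assign each unit to one of n_groups strata based on propensity quantiles."""
    # Quantile cut points give roughly equal-sized groups after sorting
    # from low to high propensity.
    edges = np.quantile(p, np.linspace(0, 1, n_groups + 1))
    # Interior edges only; np.digitize returns labels 0..n_groups-1.
    return np.digitize(p, edges[1:-1])

group = propensity_groups(propensity, n_groups=5)

# Each group mixes respondents and non-respondents with similar propensities,
# so the spread of estimated propensities within a group is small.
for g in range(5):
    mask = group == g
    print(f"group {g}: n={mask.sum():4d}, "
          f"propensity range [{propensity[mask].min():.2f}, "
          f"{propensity[mask].max():.2f}], "
          f"response rate {respondent[mask].mean():.2f}")
```

Using quantiles for the cut points is what keeps the groups roughly equal in size; with a larger sample you would just raise `n_groups` to get narrower propensity ranges within each group.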
So you may recognize this as the same sort of thing that people use for
analyzing observational data, where ideally you'd like to have
had a designed experiment where you randomized people to be treated or not.
But you've just got found data, and
you want to estimate a kind of pseudo-assignment
probability to treatment or control.