Hi. In this segment we'll look in more detail at sequential mixed-mode designs, also known as mixed-mode follow-ups. Again, the goal is to reduce nonresponse: by using one mode as a reminder for another, the perceived importance of the study may be intensified. An example would be phoning sample members who were originally invited to complete a mail questionnaire, to remind them to fill out and return the paper questionnaire. A related goal is to reduce the cost of nonresponse follow-up by making the initial contact in the least expensive mode possible and then following up in increasingly expensive modes, so that the most expensive mode is reserved for the most resistant sample members. The downside is a real risk of mode effects, because in these designs it is very hard to separate the effects of mode from the substantive responses.

This is a study by Dillman and colleagues in which they followed up by phone or mail with respondents who were initially invited to take part by mail, phone, IVR, or web. IVR is interactive voice response, which in this case meant that a recorded voice presented the questions and respondents entered their answers on the telephone keypad. If you compare the blue bars to the yellow bars, you can see that a follow-up in a different mode produced additional participation. Those who were initially invited by mail responded at quite a high rate, about 75%, and following up by phone with those who didn't respond increased the response rate to slightly above 80%. Similarly, those who were invited by phone responded at about 45%, and inviting the nonrespondents by mail increased the overall response rate to about 80%.

One thing that's a little unusual here is that the response rates are higher for mail than for phone. Dillman and his colleagues attribute this to the fact that the initial mail invitations contained a $2 incentive, whereas the phone interviews only promised an incentive upon completion, which is just not as strong an incentive for respondents. So the lower initial response rate for phone than for mail has to do with an asymmetry in the way the incentive was offered. But you can see that for the other combinations of modes, a follow-up in a different mode also increased response rates.

So the main finding is that offering modes sequentially can improve response rates. But the authors note that this didn't really resolve the demographic biases due to nonresponse: the same kinds of people were still not responding across the modes. That hints at a kind of disconnect between an increased response rate and no reduction in nonresponse error. And finally, they note that mixing modes can mix mode effects: the aural modes, phone and interactive voice response, tended to produce more positive responses on bipolar scales than when those scales were presented visually, by mail or web. That tends to be attributed to mode; there isn't really any other way to account for it, except for fairly complex arguments involving differential nonresponse. So the outcomes are really an increase in response rate, probably no reduction in nonresponse bias, and the introduction of a mode effect.

A related study by Millar and Dillman attempted to increase web response rates to a level equivalent to mail response rates, on the assumption that web has many desirable properties.
We'll talk about web surveys in more detail in the next lesson. What they did was compare mail to a choice of mail or web and, among many other combinations, to web with what they called email augmentation, in which the web invitation was followed up with an email message that included a link to the web survey. What they found was that offering the choice of mail or web didn't improve response rates, similar to the findings from the Medway and Fulton meta-analysis we just mentioned; if anything, it reduced them. But they did find that when they invited sample members to participate by web and then followed up those who didn't respond with an email containing a link to the web survey, the resulting response rate for web respondents was indistinguishable from the response rate for mail respondents. So that's an example of how a mixed-mode follow-up, an invitation to take part by web with a follow-up by email, increased response rates for the web mode, in this case to a level comparable to mail, which is kind of the gold standard.

We've mentioned incentives a number of times, and they do play a role. In a study by Beebe and his colleagues, paper questionnaires were mailed to a sample of Medicaid recipients (Medicaid is a type of health insurance in the US). Half received a $2 incentive in the envelope. The response rate with the incentive was substantially higher, nine percentage points: 54% versus 45% for the group with no incentive. The researchers then focused on nonresponse among racial and ethnic minority members of the sample and tried to increase response rates for those subgroups with a telephone follow-up. If these sample members had received an incentive in the initial invitation, the response rate after the telephone follow-up was 69%; if they hadn't, it was 64%. So the effect of the initial incentive was reduced in the follow-up. For those who had received the incentive, the response rate went from 54% to 69%, a gain of 15 percentage points; for those who had not, the follow-up was even more effective, a gain of 19 percentage points. So overall, the telephone follow-up increased response rates, which helped offset the cost of the incentive.
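Just to lay that arithmetic out explicitly (this is simply a recap of the figures quoted above, not an additional table from the Beebe study):

\[
\begin{aligned}
\text{\$2 incentive group:} \quad & 54\% \rightarrow 69\%, \text{ a gain of } 69 - 54 = 15 \text{ percentage points}\\
\text{No-incentive group:} \quad & 45\% \rightarrow 64\%, \text{ a gain of } 64 - 45 = 19 \text{ percentage points}
\end{aligned}
\]

In other words, working only from those quoted figures, the telephone follow-up narrowed the gap between the incentive and no-incentive groups from nine percentage points to five.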
I just want to say a word or two about some of the other mixed-mode designs we mentioned earlier. One is switching modes within the questionnaire: one sample, one time point, but different modes for different parts of the questionnaire. The best example is a face-to-face interview in which the interviewer collects information on sensitive topics through self-administration, that is, turns the computing device over to respondents, who either see the questions on the screen or hear them over headphones, so-called CASI or audio-CASI.

Then there are longitudinal mixed-mode surveys, in which one sample is measured at multiple time points, but the same sample members are measured with different modes at different time points. For example, data collection starts in one mode and then switches, generally to a cheaper mode, once the sample members, the households, have committed to participating in the longitudinal study. This reduces costs: if the initial mode is face to face and subsequent waves are conducted by telephone, costs go down, because telephone interviews are cheaper than face-to-face interviews. But this design does confound time and mode effects on measurement. If there are differences between the initial interview, which in this example is conducted face to face, and the subsequent interviews conducted by phone, it's hard to know whether those differences are due to the passage of time or to the different modes. Examples of longitudinal studies that use this kind of mixed-mode design are the Panel Study of Income Dynamics, conducted at the University of Michigan, and the Current Population Survey, the primary labor force survey of the US, which the Census Bureau conducts for the Bureau of Labor Statistics.

And then there are designs in which parallel or separate samples are measured with different modes. These are mainly used in comparative studies to accommodate regional survey traditions or practical constraints, such as differences in coverage or literacy. In one country or region of a multinational study, Internet penetration may be quite low while it is quite high in another; that would be a reason to use different modes. Literacy may also vary, so a visual mode in which questions are presented textually might not be acceptable in one country or region but might be fine in another where literacy is higher. There may even be different questionnaires used in the different regions. So mode effects may result, and again it's hard to know whether differences between countries or regions are due to mode or to the many other differences between the populations in those regions. Examples are the International Social Survey Programme, the European Social Survey, and the Behavioral Risk Factor Surveillance System, all of which use different modes in different countries or states.

That concludes our discussion of mixed-mode designs. In the next segment we'll turn to nonresponse and nonresponse error, an important source of survey error that we've alluded to a number of times, and go into somewhat more detail to set us up for the rest of the course.