So far we've talked about three of the four stages in the survey response process: comprehension of the question, retrieval of information, and judgement and estimation. In this segment we will talk about reporting an answer, that is, selecting an option from among the set provided to respondents. Sometimes this is called the mapping and reporting stage, because what essentially happens is that respondents map a judgement, the result of the estimation process and all that has preceded it, onto one of the answer categories that have been provided. The result of the estimation stage may not be expressed precisely, and even if it is, it may not match one of the options provided. So this requires some sort of transformation from the information that results from the estimation process to one of the options offered with the question, and this transformation can introduce error. The current segment will present a series of examples of the types of measurement error that can occur at this stage in the process.

The first example has to do with the spacing of response options in an ordered response scale. If you think about it, there's a distinction between the conceptual midpoint of the scale and the visual midpoint, assuming the scale is presented on a web page or a paper questionnaire, and the respondent's mental representation of the scale will probably involve both. Tourangeau and his colleagues compared endorsement of the middle option when the scale was spaced evenly versus unevenly, as can happen in a web survey when different web browsers render the question and the options differently. The uneven spacing made the scale points that are conceptually on the right look more central visually. Here's what the two scales looked like. The question was, "During the next year, what is the chance that you will get so sick that you will have to stay in bed for the entire day or longer?" The scale on the top is evenly spaced, with the option "even chance" right in the middle visually, so the visual and conceptual midpoints are aligned in the top version of the scale. In the lower version, where the spacing is uneven, "even chance" now sits to the left of center visually, so the three options that are conceptually to its right are now more central. So let's see what happens to the number of respondents endorsing an option from the right side of the scale. When the scale was evenly spaced, 58% of respondents selected an option from the right; when it was unevenly spaced, 64% did, reflecting the visual appearance of the scale.
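As a rough illustration of what a six-point gap like that amounts to, here is a minimal Python sketch of a two-proportion comparison. The sample sizes are hypothetical placeholders, since the segment doesn't report them, so the output illustrates the calculation rather than the study's actual test.

```python
from math import sqrt, erf

def two_proportion_ztest(x1: int, n1: int, x2: int, n2: int):
    """Two-sided z-test for the difference between two independent proportions."""
    p1, p2 = x1 / n1, x2 / n2
    p_pool = (x1 + x2) / (n1 + n2)                        # pooled proportion under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))  # standard error of the difference
    z = (p1 - p2) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # 2 * P(Z > |z|)
    return p1 - p2, z, p_value

# Hypothetical sample sizes; the endorsement rates (64% vs. 58%) are from the lecture.
n_uneven, n_even = 500, 500
diff, z, p = two_proportion_ztest(round(0.64 * n_uneven), n_uneven,
                                  round(0.58 * n_even), n_even)
print(f"difference = {diff:.3f}, z = {z:.2f}, p = {p:.4f}")
```

With 500 respondents per condition, a six-point difference lands right around the conventional significance threshold; the real study's samples, and its test, may of course differ. The test is written from scratch rather than with a statistics library so the sketch stays self-contained.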
So let's turn now to an example that shows how measurement error can be introduced in the mapping and reporting process for items that have unordered response options. Krosnick and Alwin embedded an experiment in the General Social Survey in which respondents were asked to select the qualities that are most desirable in a child. These were presented visually on show cards in a face-to-face interview, so the interviewer handed respondents a card listing the 13 attributes or qualities that a child might have. For a third of the respondents the order was reversed, so there was a standard order and a reversed order. Here are the 13 qualities. You can see they start with "has good manners" and "tries hard to succeed"; the final one is "is a good student."

So the question is whether the early options are endorsed more frequently than the later options, and if so, that's what is known as a primacy effect. Here are the data. When the options are presented in the standard order, the first three, manners, success, and honesty, are endorsed substantially more often than when they are presented in the reversed order and appear in the 13th, 12th, and 11th positions, respectively. And the item that is presented last in the standard order, studious, is endorsed ten percentage points less often in that order than when it appears first in the reversed order. So this is a clear primacy effect, and it doesn't depend on the particular options; it really depends on just their order. Krosnick and Alwin called the mechanism responsible for primacy "survey satisficing," the tendency to select the first option that's good enough.

There's a parallel, or complementary, response-order effect when the options are presented auditorily, that is, spoken aloud, and this can occur with very short lists of options. This is demonstrated in a study by Schwarz and colleagues, who report a preference for the final option spoken by the interviewer, even when the list has only two options. The question was, "What form of government do you prefer?" And the question is whether the options presented second are endorsed more frequently than when they are presented first. As you can see, when the option is "authoritarian government," it's endorsed 11 percentage points more often when it's presented second than when it's presented first. The same is true for "democratic": it's endorsed nine percentage points more often when presented in the second, final position than first. But the point here is that even "authoritarian" appears to be the relatively popular option when it's presented in the second position. Recency effects have been attributed to limits on working memory capacity: people forget the earlier options and better remember the more recently presented options, hence the name recency effect. This idea has been explored by Knäuper, who controlled for working memory capacity using respondent age as a proxy, taking advantage of the well-known and unfortunate fact that people's working memory capacity deteriorates as they age. She re-analyzed results from Schuman and Presser, who had done a study of response-order effects in questions about housing, and she found that older respondents showed a larger recency effect than younger respondents, who barely showed any, suggesting that working memory capacity, and the forgetting of earlier options, is probably at least part of the explanation for recency effects.

Another type of question that can demonstrate error in the mapping stage is the question requiring an open numerical response, in particular when respondents provide prototypical, or rounded, reports. Rounding may indicate imprecision in respondents' underlying representation; it may simplify the mapping task by creating rough categories; it may signal uncertainty; and it may signal embarrassment. Here's an example from American National Election Study data analyzed by Tourangeau, Rips, and Rasinski. The question concerned preference for presidential candidates on the 100-point feeling thermometer. The analysis classifies the numerical responses selected on the feeling thermometer into multiples of 10, the values 15 or 85, and other values, where the other values are essentially the unrounded responses. As you can see, the vast majority of respondents, whether answering about Clinton or Bush, select numbers that are multiples of 10, and a large number of respondents also select 15 or 85. So respondents are rounding extremely often here, presumably indicating that they are taking the 100-point scale and breaking it into a more manageable scale with discriminations that are meaningful to them.
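Here is a minimal Python sketch of the three-way coding just described (multiples of 10, the values 15 or 85, everything else). The example responses are hypothetical, not drawn from the ANES data, so the printed percentages only illustrate the classification.

```python
from collections import Counter

def rounding_category(response: int) -> str:
    """Code a 0-100 feeling-thermometer response by apparent rounding."""
    if response % 10 == 0:
        return "multiple of 10"
    if response in (15, 85):
        return "15 or 85"
    return "other"

# Hypothetical thermometer responses, for illustration only.
responses = [50, 85, 70, 15, 62, 100, 40, 85, 37, 90, 50, 15]

counts = Counter(rounding_category(r) for r in responses)
for category, n in counts.most_common():
    print(f"{category:>15}: {n / len(responses):.0%}")
```

When the first two categories dominate, as they do in the analysis described above, respondents are effectively compressing the 100-point scale into a much coarser one.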
The final example of error in the mapping and reporting stage concerns respondents' answers to sensitive questions. It's well known that when respondents self-administer questions, they're more likely to answer truthfully, to disclose more, and to give fewer socially desirable answers. This is well documented in a study by Tourangeau and Smith, in which they presented sensitive questions on topics like drug use under either interviewer-administered or self-administered conditions. The data are ratios of the estimated prevalence of cocaine use and marijuana use under self-administration to the estimated prevalence under interviewer administration, so numbers bigger than one indicate greater prevalence under self-administration. All of these numbers are greater than one, suggesting that people are more likely to report an undesirable, in this case illegal, behavior under conditions of self-administration.

Why is this happening? One possibility is that respondents are editing their answer before they report it. This has been called motivated misreporting, and evidence comes from at least two findings: the misreporting is in one direction, the socially desirable direction, and self-administration affects answers to sensitive but not non-sensitive questions. It's most likely, therefore, that respondents are editing a formulated response, the result of the earlier three stages, rather than selectively retrieving positive attributes about themselves. One piece of evidence supporting this view is that response times are longer for sensitive questions than for equally demanding non-sensitive questions. For example, Holtgraves observed longer response times when the introduction emphasized the social desirability of the questions. So this suggests that respondents go through an edit step where they consider, "Do I want to edit this answer or not before reporting it?" So even if they don't change their answer, they've gone through this edit step.

This concludes our discussion of the four stages that a respondent needs to go through to provide a thoughtful answer. The next segment concerns types of survey questions.