Greetings. I'm Fred Conrad. I'm a survey methodologist and psychologist at the University of Michigan, in the Program in Survey Methodology within the Institute for Social Research, and also a Research Professor in the Joint Program in Survey Methodology at the University of Maryland. Welcome to Data Collection Methods: Web, Telephone, and Face to Face. It's one of the courses in the specialization. The first lesson, which we're now in, is an introduction, followed by a discussion of what I think of as the classic modes of survey data collection: in particular, telephone, face-to-face, and mail questionnaires. Then we'll talk about mode of data collection, that is, what we mean by mode; there are a number of different notions, and I want to make sure we're on the same page. We'll also talk about the concept of survey error and how mode and survey error are related. For example, some modes may increase non-response, and with it the potential for non-response error.

So what is this course? It's a look at the cost-error tradeoffs inherent in all data collection decisions. We'll talk about the different kinds of error, but in general you can reduce error by investing more resources. So by increasing your costs as a researcher, you can generally reduce error; or, if you can tolerate a certain amount of error, you may be able to collect the data at reduced cost. This course is a scientific view of survey data collection, by which I mean that it's based on findings in the scientific literature, the published, peer-reviewed literature, by and large. It's not about experiences and anecdotes, although these are a legitimate source of information when designing a survey, and in particular data collection procedures. But our focus here is on the scientific literature. And it's a comprehensive perspective on survey design that seeks to improve overall quality.
Which is a way of saying that we're looking at all the different sources of error that can occur at different points in the survey process, and looking at how they play out: as one increases, another might decrease, and so on. So it's an overall perspective on the design of data collection.

Now, what this course is not: it's not a how-to-do-it course. It isn't really a credential for going out and doing your own survey; you'll get a certain amount of that in other courses in the specialization. This course is really more conceptual and, again, more about what's in the scientific literature. Of course, we hope it serves as a basis for practical activity, but that's not really the focus of the course. It's not a course on how to design questionnaires; there is a questionnaire design course in the specialization, and we'll refer to some of those ideas throughout this course, but that's not what this course is about. It's not a review of the entire survey process; for example, we say very little about sampling or weighting in this course. It's not a course based on opinions or feelings on the topics; as I said, it's really about the published literature. And it's not a course in statistics, although statistical concepts and notation will be used somewhat throughout the course.

So what do we cover in the course? Well, in the current segment, the introduction, we'll talk about mode of data collection: what do we mean by mode, and how are mode and survey error related? In particular, we'll cover what I think of as the classic modes: face to face, telephone, and mail. We'll talk about one particular source of error, non-response, in a little more detail than the others. It does come up quite often, and while all the sources of error are important to consider, we'll focus on non-response in the introduction. In the second lesson we'll talk about self-administration and online data collection, in particular automated self-administration.
So we'll talk about automation in interviews. For example, interviewers in face-to-face interviews often use a computing device, a laptop or tablet, and this is known as computer-assisted personal interviewing: interviewers read the questions and enter the answers directly into the device. We'll also talk about self-administration, in which the interviewer turns the device over to the respondent to enter answers to what are generally considered sensitive questions; the idea is that this gives the respondent a certain amount of privacy. One of those modes is called audio computer-assisted self-interviewing, or ACASI, in which the respondent hears the questions over headphones, may also see the questions on the screen, and selects an answer and enters it into the device.

We'll talk about online or web data collection, where there are no interviewers involved; it's completely self-administered. There are a number of issues there involving the different sources of error: coverage, non-response, measurement. And then we'll talk about mixed-mode surveys, in which different modes are combined in the process of collecting data for a particular study. That raises the issue, in some mixed-mode designs, of giving the respondent a certain amount of choice over how they complete the questionnaire, and so we'll discuss respondent mode choice.

We'll then switch to a discussion of interviewers and interviewing, which really breaks down into a couple of components. Interviewers end up doing many tasks besides interviews. Probably the most important task besides the interview itself, that is, asking questions and recording answers, is obtaining the interviews: recruiting sample members to become respondents, to participate. So we'll cover obtaining interviews, and then what happens in household surveys once the sample member, the member of the household that has been sampled, agrees to participate.
There's often a within-household selection process, in which one of the household members, not necessarily the member the interviewer has been talking to, is selected through a random process. We'll talk about both the recruitment process and the selection of respondents within households. We'll talk about so-called interviewer effects, in which interviewers may introduce two kinds of error, what are known as bias and variance. The point here is that interviewers add value in many ways, which we'll discuss, but there are costs as well. The fact that one interviewer may administer a questionnaire differently from another can introduce a certain amount of interviewer-related variance, which is generally not a good thing. So we'll talk about those types of interviewer effects, as well as interviewer effects that might be due to enduring characteristics of the interviewer, such as race, gender, age, and so on.

Our third lecture topic will be interviewers and interviewing, where we will really talk about three aspects of interviewers and interviewing. First is a role that interviewers are crucial for, but which really doesn't involve conducting interviews: obtaining the interviews, or recruiting sample members to become respondents. Once respondents have agreed to participate, sometimes a process takes place in which the interviewer recruits, from within the household, the individual who will actually be providing the answers. So the household is the sampling unit, and once the person the interviewer has spoken with, either on the phone or at the front door, agrees to participate, a within-household sampling procedure occurs. So we'll talk about those roles that interviewers play. Then we'll talk about aspects of conducting interviews, and really two aspects of that. One is a type of error that interviewers can contribute to the overall survey error.
Now, interviewers add value in many ways, but they do contribute error in that they may conduct the interview, and administer the questions, differently from one another, and that introduces a certain amount of variance, or variable error. They may also elicit systematically different responses to a question than other interviewers do, particularly responses that concern fixed or enduring attributes of the interviewer, such as race or gender or age. So we'll talk about those interviewer effects and how they might be minimized.

And then we'll talk about interviewing technique. There are a number of different proposals for how interviews should be conducted. Probably the most widely used approach is known as standardized interviewing, in which interviewers adhere closely to a script and depart from it really only to administer what are called neutral probes. But there are a number of other proposals that have been discussed and are in use, in which interviewers, for example, can be more flexible and say what they believe is necessary to assure that respondents understand all the questions the way they're intended. So we'll talk about those kinds of debates and dichotomies, and, for example, how these approaches might promote or reduce the sense of rapport that develops between the interviewer and the respondent.

The fourth and final lecture will really focus on new and emerging modes of data collection and new data sources. The new modes we'll discuss start with mobile web. By then we'll have spoken about more conventional web surveys, which are typically administered on a desktop computer or a laptop. Responding on a smartphone has become increasingly common, either in a browser or in a specialized app; the survey industry is really catching up with what the public has, in a sense, demanded through their use of mobile devices. So we'll discuss mobile web, and also the use of SMS, or text messages, for conducting interviews, which is a little different from mobile web.
It actually doesn't have to happen on a smartphone, but it often does, and it has a more turn-based, back-and-forth structure than when a questionnaire is self-administered in a smartphone app.

We'll also talk about administrative records as an alternative data source. Using administrative records typically requires matching a survey respondent, or just a member of the public, with the individuals identified in the records. These matches are not necessarily easy to make; they require a certain amount of statistical guesswork, informed guesswork, as well as consent by respondents to allow their survey responses and administrative records to be linked. But the great promise of administrative records is that they can reduce the burden on members of the public, by making it unnecessary to ask them survey questions when the answers already exist in administrative records, and that they can save survey researchers, such as government agencies, considerable amounts of money.

The other new data source we'll discuss, or potential data source, is social media. It's really not clear exactly how and when this data source might be usable in the way that surveys are, but we'll talk about social media as a possible supplement to, or even replacement for, survey data collection. As I said, we don't quite yet know how and when social media might play this role, but there is some promising evidence of social media more or less reproducing the results that come from surveys, or at least from certain questions in certain surveys. And so we'll explore when social media might be more and less likely to tell the same story that we're able to tell with survey data.

For each of the four lecture topics, in addition to the lectures, another important type of content for the course is a set of interviews that I've conducted with experts and leaders in the field of survey data collection. These interviews concern topics like mixed-mode data collection and online data collection.
They complement, and actually go well beyond, the content in the lectures. I personally found the discussion in these interviews to be quite exciting and engaging, and I hope you will, too. So I'm looking forward to sharing this material with you and to learning about your reactions. I hope you'll make good use of the discussion boards throughout the course so we can take stock of your progress. With that in mind, I look forward to working with you; let's get on with the rest of the lesson. Thanks a lot.