Hi, everybody. In this module we're going to start diving into closed-ended questions. This is part of our series on how to write good questions for surveys. In this module specifically, we're going to look at general guidelines for writing all types of closed-ended questions. In the next couple of modules, we'll look more specifically at two types of closed-ended questions, nominal and ordinal, and we'll talk more about what that means coming up next. But in terms of general guidelines, there are a few tips.

Tip number 1 is that your question stem should include both the positive and negative side of either/or questions. What does that mean? Let's look at an example. Here's a fairly typical question: do you favor follow-up contacts after a service call? Now in this case, we have two opposing options; we're asking about either favor or oppose, so we should have both favor and oppose in the question stem. A better design of this question would look like this: do you favor or oppose follow-up contacts after a service call? This is to stop you from biasing your responses one way or another. Humans think very semantically, and when we see a word like favor in the question stem, our brains are just going to automatically match that up with favor in the response categories. So you want to make sure you're giving yourself the best chance to get the true population estimate, not something that's been shaped by how you've crafted the question.

Let's look at another example. How concerned are you that you'll get a computer virus while using the Internet: very concerned, somewhat concerned, slightly concerned, not at all concerned? The reason to include both poles in this question stem is, again, to avoid priming a response. We see the word concerned in the question stem, and then we have all these concerned categories in the response options. A better way to frame this question is to add another option within the question stem: how concerned, if at all, are you that you will get a computer virus while using the Internet? This at least gives the either/or options to the respondent.
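If you keep a bank of question stems, you could even automate a rough check for one-sided wording. Here's a minimal sketch in Python; the pole pairs are drawn from the examples in this module, and the function is purely illustrative, not a real linting tool.

```python
# A toy lint check for one-sided question stems. The pole pairs here are
# illustrative; a real check would be built from your own question bank
# and pretesting. Matching is crude substring matching.
POLE_PAIRS = [
    ("favor", "oppose"),
    ("satisfied", "dissatisfied"),
    ("concerned", "if at all"),  # "if at all" serves as the counterweight here
]

def one_sided(stem: str) -> list[tuple[str, str]]:
    """Return the pole pairs where the stem mentions one side but not the other."""
    text = stem.lower()
    return [
        (pos, neg)
        for pos, neg in POLE_PAIRS
        if (pos in text) != (neg in text)  # exactly one pole is present
    ]

print(one_sided("Do you favor follow-up contacts after a service call?"))
# -> [('favor', 'oppose')]: the stem primes "favor" without offering "oppose"
print(one_sided("Do you favor or oppose follow-up contacts after a service call?"))
# -> []: balanced stem
```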
Tip number 2 is to develop lists of answer categories that are as complete as possible. Now this sounds obvious, but it can actually be kind of tricky. Here's a fun example: from which of these sources did you learn about our product: radio, television, newspaper, someone at work? This is an extreme example. Obviously, you could include lots of other response categories here, most obviously the Internet. And it's also not a mutually exclusive set of responses: you could have learned about the product from someone at work and also from the newspaper. You want to make sure that you get as full a list of responses as you can through pretesting, focus groups, and the other techniques we've talked about, so that you're giving respondents the best chance they have to actually answer the question.

Now, this leads us into a perennial question about writing survey questions, which is whether or not we should include a don't know answer option. This has led to many debates in the literature, with lots of people arguing one way or the other. Let's talk for a second about the pros and cons of the don't know. In this case, we're talking about, for instance, adding a don't know at the end of this question and allowing respondents to pick that instead of one of the other answer categories. Why should we include a don't know? Because we're trying to avoid false positives. We don't want people to pick something just because they're being forced to pick, and don't know gives them an option to not pick something when they don't really know the answer, which makes the responses we do get better estimates. On the con side, however, people might pick the don't know category because they're trying to satisfice: they're trying to get through the survey as quickly as possible, or they're trying to answer questions without any risk to themselves. In that case we're missing data they might have provided if we hadn't given them an easy out.

What does the literature say about this? The literature is actually fairly divided; in fact, both of those things happen. It really depends, then, on what your goals for the survey are. If it's really important for you to only capture strong feelings about some of these response categories, you might include a don't know. If, however, you're asking questions in a way that makes you fear people will satisfice or skip response categories, adding don't know will of course encourage them to do that. Another way to decide is by thinking about your survey mode. In a phone survey, for instance, you wouldn't include the don't know option; people will supply it on their own. On paper it might be there, and people could just check a bunch of boxes, and you don't have a chance to actually check their data. In web surveys, we might include a don't know category and then ask follow-up questions if someone answers too many don't knows. So it's really a matter of what the purposes and intentions of your analysis are, as well as the mode of delivery you're going to use for your survey.

So tip 3 for writing closed-ended questions generally: develop lists of answer categories that are mutually exclusive. We already saw an example of this in one of our previous questions, where we could learn something from someone at work but also from a newspaper. This happens all over the place in closed-ended questions; a really common one is age categories. For some reason there are some people, I'm not one of them, but I have friends who are, [LAUGH] who get driven insane if they see categories like this: 20 to 30, 30 to 40, 40 to 50. It's better to have non-overlapping range categories. So you might go 20 to 29, 30 to 39, or you could write 20 up to 30. There are different ways to write this, but you want to make sure that it's clear to the respondent who happens to be 30 which of these they're supposed to check off.

More substantively, you can think about how the mental models a respondent has about the response categories may not match up with technical features of the site. What do I mean by that? Let's look at this question: which of the following features of the website did you use: help chat, FAQ, help assistance line, issues database? Now, it could be that each of these response categories is a very specific technical part of your website, or they might be very meaningful to you as a company or as a UX researcher. But a respondent might confound help chat and help assistance line, or help assistance line and FAQ; they may not be able to clearly distinguish between these things. So you really want to make sure that these categories are mutually exclusive from the perspective of the respondent's understanding.
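To see why overlapping ranges cause trouble, here's a minimal sketch in Python of mutually exclusive, exhaustive age categories. The cutpoints are just for illustration; the point is that a respondent who is exactly 30 lands in exactly one bucket.

```python
# Mutually exclusive, exhaustive age bins. The cutpoints are illustrative.
AGE_BINS = [
    (18, 29, "18 to 29"),
    (30, 39, "30 to 39"),
    (40, 49, "40 to 49"),
    (50, None, "50 or older"),  # open-ended top category keeps the list complete
]

def age_category(age: int) -> str:
    """Return the one label whose range contains this age."""
    for low, high, label in AGE_BINS:
        if age >= low and (high is None or age <= high):
            return label
    raise ValueError(f"No category for age {age}")

assert age_category(30) == "30 to 39"    # unambiguous, unlike "20 to 30 / 30 to 40"
assert age_category(73) == "50 or older"
```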
Tip 4 is to pick the answer formats most appropriate for your measurement intent. That's a wordy way of saying that each survey mode allows certain question formats: what you can ask on paper, by phone, through interviews, or in web surveys will all be very different. One of those differences, for instance, is one answer versus multiple answers. By that I mean, do you force the respondent to pick one thing, or can they pick multiple things? A lot of that is going to be shaped by survey mode.

Let's look at some examples of common types of survey questions. I'm going to constrain this to web surveys, because my sense is that for UX, a lot of what we do is through web surveys. Some of these are applicable to print or phone surveys and some are not. This is the standard set of questions that the Qualtrics survey software allows you to have, and you can see it's really simple: multiple choice, matrix table, text entry, slider, rank order, and side by side. All of these are very common formats for closed-ended survey questions, except for text entry, which is, of course, open-ended. Each has a lot of different functions, features, and options you can add to it. So, as you can imagine, you can go into multiple choice and have rank-order multiple choice or nominal multiple choice, or multiple choice that's laid out horizontally versus vertically versus in columns. Each one of these is a very complicated set of actual questions that underlies it, but there are other, broader categories.

Now those are the common ones, but most survey software also lets you ask some more uncommon types of survey questions. This is what Qualtrics calls specialty questions. A really common one that you might use in UX research is the net promoter score; we're going to talk about that more in another module. But there are some others here that are really great. For instance, heat map allows you to show a webpage and have people hover over the parts of the webpage that they respond to. Or hot spot allows you to predefine areas of a webpage and ask questions about those predefined areas. I'm not going to talk about each of these specialty types very much; there's a module where I have you go through multiple survey applications and their different specialty questions and respond to those. However, most professional survey applications offer a really large variety of these. The opportunity to ask very complicated, innovative questions through these different survey applications is great; other modes are more limited. For instance, you can't really do a heat map question in a phone interview, and it doesn't make as much sense in a print survey either. So you really have to think about what mode you have and what kinds of questions it allows you to ask.
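To make those format choices concrete, here's a purely hypothetical sketch of how two questions from this module might be declared as data. The field names and structure are invented for illustration; they are not the actual schema of Qualtrics or any other survey tool.

```python
# Hypothetical question declarations, illustrating the format decisions
# discussed above: single vs. multiple answers, layout, and interface type.
help_features = {
    "stem": "Which of the following features of the website did you use?",
    "format": "multiple_choice",
    "select": "multiple",   # respondents may have used several features
    "layout": "vertical",   # vs. "horizontal" or "columns"
    "choices": ["Help chat", "FAQ", "Help assistance line", "Issues database"],
}

job_satisfaction = {
    "stem": "Overall, how satisfied or dissatisfied are you with your current job?",
    "format": "slider",     # the same concept could also be a radio column
    "min": 1,
    "max": 5,
    "step": 1,
    "anchors": {1: "Very dissatisfied", 5: "Very satisfied"},
}
```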
As you can imagine, there is a research agenda, especially in the survey methodology literature, around the user experience of question types and answers. Lots of people have looked in depth at different ways of asking multiple choice questions and different types of matrix questions. But some of the question types we just looked at haven't really been addressed by that literature yet. While there are people who've looked at net promoter score, for instance, very few have looked at heat maps or other question types on the more specialty side to see how their responses compare with other types of responses. Hopefully that literature catches up with the new types of questions being asked, but there is a great amount of user experience research around multiple choice, text entry, ordinal questions, and things like that.

So here's an example of some of that user research. Overall, how satisfied or dissatisfied are you with your current job? Notice that that's a good question stem: it has both of the either/or categories within it. Now, these are two questions with the exact same set of response categories available to respondents, but asked in two different ways. In the first question, the response categories are laid out in a column of single-choice options. On the bottom, it's a slider. Sliders are actually relatively common in survey questions currently. You have to really think, though, about what the user experience is with either one of these. A lot of research shows that sliders have more drop-off: people are less likely to answer a slider question than the column version. The other problem with sliders, of course, comes when you're asking a question on a mobile device. On mobile, especially if you have giant banana hands like mine, it can be really hard to actually operate that slider. So you really want to think about what mode your users are going to experience the survey in and what type of response category is going to make the most sense.

There are lots of different ways we could ask this question. Another common format is the five-star satisfaction rating. Overall, how satisfied or dissatisfied are you with your current job? It's the exact same question, and in this design the five stars are really just a type of slider. This is the exact same data, the same concept we're trying to get at with those other two questions, but the response categories can be so different that you really want to think about which response category matches up with the concept you're trying to measure. Now, five-star ratings are so ubiquitous in online surveys that they're a pretty good measure to use. But how do they compare against the column of options we saw on the last slide? There's less research on that, so you might want to do some of your own pre-testing.
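Since the radio column, the slider, and the five stars are all trying to measure the same five-point concept, your analysis code can recode them onto one scale before comparing them. Here's a minimal sketch in Python; the label strings and format names are assumptions for illustration.

```python
# Recode responses from different interfaces onto the same 1-5 satisfaction
# scale. The labels and format names are illustrative assumptions.
LABEL_TO_CODE = {
    "Very dissatisfied": 1,
    "Somewhat dissatisfied": 2,
    "Neither satisfied nor dissatisfied": 3,
    "Somewhat satisfied": 4,
    "Very satisfied": 5,
}

def recode(response, fmt: str) -> int:
    """Map a raw response to a 1-5 code, whatever interface produced it."""
    if fmt == "radio":                # labeled column of single-choice options
        return LABEL_TO_CODE[response]
    if fmt in ("slider", "stars"):    # both already yield a 1-5 position
        return int(response)
    raise ValueError(f"Unknown format: {fmt}")

assert recode("Somewhat satisfied", "radio") == 4
assert recode(4, "stars") == 4        # same concept, different interface
```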
So in summary, there are a few general rules for how you write closed-ended questions in surveys. One is to include comparison points in the question stem: you want to make sure your question stems include both options to avoid bias. You want to have a complete set of answers available to your respondents. You want to create mutually exclusive answers, to help clarify the mental model of the response categories for respondents and to make it clear, to both you and them, that they're answering the question the best way they can. You also want to think very carefully about the answer response format you use. Any given question type, multiple choice, slider, text entry, has a dizzying array of interface options for how you actually lay out the response categories. You want to know the literature a little bit, to know which ones work and which don't; match them to the mode you're going to use for delivering your survey; and think about what's going to elicit the best data you can possibly get from your respondents. In the next two modules, we're going to talk about specific types of closed-ended questions: nominal closed-ended questions, which are basically non-ranked categories of responses, and ordinal closed-ended questions, which are, oppositely, rank-ordered questions you can ask your respondents.