Hello, this part of the lecture continues the discussion started in the previous one. Interview questions that precede test tasks should be written in accordance with your research questions. The same applies to all subsequent sections, so I won't repeat this point again. I usually use pre-test interview questions to get more information about a participant: information relevant to the activity supported by the app, her experience, and its domain. This part of the study also aims at making participants feel comfortable and at ease. Answering questions about previous experiences helps to establish rapport, which is incredibly important for the success of the whole test session. Note that interviews within guerrilla usability testing are semi-structured. We discussed the rules for writing good interview questions in the lecture titled Ethnographic Interviews, in the preparation for the study in the second week. Please watch it once again before starting to work on your test plan. Tasks are at the very heart of any usability test plan. They vary in the degree to which their goals are defined. Closed-ended tasks imply very specific goals; an example of such a task is presented on this slide. Open-ended tasks, in contrast, imply loosely defined goals; an example of an open-ended task is shown here. Closed-ended and open-ended tasks can coexist in your test plan; it fully depends on the usage context of the app you are going to evaluate. A typical usability test plan consists of a sequence of tasks of both types. But since usability testing is a qualitative method, you can employ the power of emergent design. One of the advanced techniques that I do not recommend you use in your practical task, but that I want you to know about, is interview-based task design. This technique implies the absence of a pre-decided sequence of test tasks in the conventional sense. Instead, a moderator co-creates tasks in the course of a test session.
The tasks are created on the basis of information gathered through the pre-task interview. The use of this technique requires a certain level of skill, but as a result it helps to increase the validity of the study, because the co-created tasks better match the previous experiences of participants. All right, back to typical test tasks. Each task of your test plan should include the information presented on this slide. Pre-task questions are designed to figure out the experience of a participant regarding a particular task. An entry point specifies the part of the interface, for example a particular screen and its state, from which participants should start performing the task. I think that the task scenario section doesn't require any explanation; besides, we will discuss the quality criteria of task scenarios in a few slides. Materials contain information needed in the course of task performance. For example, the aforementioned task of transferring money to a friend using a mobile banking app has bank requisites in this section. This section requires special attention, because it's not a good idea to use the real data of a participant in all cases. Your goal is to make the task experience as pleasant for participants as you can. This includes thinking about what the participant needs to do to return everything to its place after the session has ended. In many cases you can use fake information for that without harming the validity of the study results. For example, you can create accounts beforehand and provide participants with logins and passwords if you don't need to test the sign-up task, or you can ask participants to enter fictional data, such as names, birthdays, and so on, where this won't affect task performance. The criteria of task completion describe the conditions that should be met to call the task completed successfully. It may seem excessive to you, but believe me.
It is necessary to explicitly specify these conditions, because sometimes participants do things that are not intended by the task but think that they have done everything right. Recall our discussion about the effectiveness component of usability: a user should not only achieve the goal, but also understand that she has achieved it. That is why I strongly recommend ending each task scenario with the words "Let me know when you are done" or similar. If a participant has completed a task according to the specified criteria but says nothing and continues to work on it, or, in the opposite case, says that she has completed the task but according to the objective criteria she hasn't, it clearly indicates the existence of a usability problem. The last section, post-task questions, is designed, firstly, to check that a participant is sure that she has completed the task in cases where a task goal includes a subjective component. For instance, if a participant needs to enter card information in a certain way, or learn something, the post-task questions are the only way to understand whether the participant has achieved the goal or not. By the way, the criteria of task completion section should contain both objective and subjective conditions of successful task completion, if both are present, of course. Secondly, post-task interview questions are aimed at collecting data related to those of your research questions that weren't covered by the test task itself. For more examples of such questions, see the test plan in the materials of this week. And thirdly, these questions are aimed at collecting additional data about participant-perceived problems, that is, the problems that a participant noticed. The question presented on this slide is not the only way to do that. Another way is to use a post-task questionnaire, for example the Single Ease Question. The use of questionnaires is pretty standard for quantitative usability studies.
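To summarize the structure just described, the sections of a single test task can be sketched as a simple data structure. This is a minimal illustration, not part of the lecture materials: the field names, the example task, and all of its values are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class TestTask:
    """One task of a usability test plan; field names mirror the sections above."""
    pre_task_questions: list[str]  # probe the participant's experience with this task
    entry_point: str               # screen and state from which the task starts
    scenario: str                  # written in subject-domain terms, no UI vocabulary
    materials: dict[str, str] = field(default_factory=dict)       # fake data: logins, requisites
    completion_criteria: list[str] = field(default_factory=list)  # objective and subjective
    post_task_questions: list[str] = field(default_factory=list)

# Hypothetical example: the money-transfer task mentioned in the lecture.
transfer_task = TestTask(
    pre_task_questions=["Have you ever transferred money to a friend using this app?"],
    entry_point="Main screen, logged in with the prepared test account",
    scenario=("Your friend Alex asked you to pay him back for lunch. "
              "Transfer the money to him. Let me know when you are done."),
    materials={"login": "test_user_01", "password": "demo-pass",
               "friend_requisites": "card 0000 0000 0000 0000"},
    completion_criteria=["Transfer confirmation screen is shown (objective)",
                         "Participant says she is done (subjective)"],
    post_task_questions=["Are you sure the money has been sent?",
                         "Did anything seem difficult or confusing?"],
)
```

Note how the scenario ends with "Let me know when you are done", the materials hold only fake data, and the completion criteria pair an objective condition with a subjective one.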
When conducting the study in person, in addition to gathering subjective quantitative data, you may ask a participant why she rated the task this way. Jeff Sauro, the author of the Single Ease Question, recommends asking this when participants provide a rating of less than five. But I'd like to remind you that guerrilla usability testing is a qualitative method, so I recommend that you use the interview question from the previous slide instead of the SEQ in your practical task. Returning to task scenarios, I would like to discuss the quality criteria proposed by Vlad Golovach. First of all, each scenario should be written so as to prevent its misinterpretation by participants; testing the understandability of task scenarios is one of the purposes of pilot testing. Each task scenario and its related materials should be complete, that is, provide participants with all the information needed to perform the task. Each task scenario should be relatively short, meaning it shouldn't contain unnecessary information. And finally, each task scenario shouldn't contain information that prompts participants on how to perform the task. For example, avoid the use of user interface terms; task scenarios should be written in subject-domain terms. Most often, the post-test interview is aimed at two things. The first is problem prioritization. In the lecture on the usability data analysis process, I was talking about different perspectives on problem severity. One of them was concerned with the subjective assessment of problems by participants. By asking a participant to name the things she didn't like the most, or her most significant difficulties, you are able to understand which interaction problems should be considered the most severe from the participant's point of view. Note that while answering these questions, as well as other post-test interview questions, you shouldn't stop participants from moving around the application's user interface. It will help to refresh their memory of what they just experienced.
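As a side note, the SEQ follow-up rule mentioned above, probing ratings below five, can be sketched in a few lines. This is only an illustration of the rule; the function name, the prompt wording, and the probe text are my own, not a standard API or the lecture's materials.

```python
from typing import Optional

# The SEQ is a single 7-point rating question administered after a task.
SEQ_PROMPT = "Overall, how difficult or easy was the task to complete?"

def seq_follow_up(rating: int) -> Optional[str]:
    """Return a follow-up probe for low SEQ ratings, following Jeff Sauro's
    advice to ask why when the rating is below five; otherwise return None."""
    if not 1 <= rating <= 7:
        raise ValueError("SEQ uses a 7-point scale: 1 = very difficult, 7 = very easy")
    return "Why did you rate the task this way?" if rating < 5 else None
```

For example, `seq_follow_up(3)` returns the probe question, while `seq_follow_up(6)` returns `None`, so the moderator moves on.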
The second thing is concerned with ending the session on a positive note. Ask participants what they really liked about the design of the app. This is quite important, because formative usability evaluation in general is aimed at discovering bad things; the participant, along with the team, needs to focus on the positive for a few moments. That's all for now. I look forward to seeing you in the next lecture. Thank you.