[MUSIC] Hello and welcome to a new week. Unlike the previous one, this week is dedicated to the discussion of formative empirical usability evaluation. For most of it, we will talk about field research. By field research I mean the kind of studies that take place outside your office building: in particular, observational field visits aimed at discovering interaction problems, and guerrilla usability testing.

But why field research exactly? There are two reasons for that. The first reason is related to a quality of empirical studies called ecological validity. Actually, we already touched on this topic, without mentioning the term, when we were talking about the classification of user research methods, in particular the context of product use. If you were to compare alternative designs according to a set of usability metrics, for a fairer comparison you would need to create the same conditions for measuring these metrics. The irony is that the context of use is extremely complex, and it is almost impossible to achieve the same conditions for several tasks in the real world. Something always changes a little: the experience of participants, the quality of the network connection, and so on.

What is a challenge for a quantitative approach is a huge advantage for a qualitative one. The variability of the real world is very useful for us, since it allows us to identify interaction problems that would be difficult to uncover in a laboratory. For example, cheaper and older devices have less computing power, which makes the user interfaces of all apps running on them respond more slowly. A longer response time can cause usability problems such as repeated tapping on the same UI control, which leads to the same function being invoked multiple times (see the short code sketch after this segment). Because labs usually own high-end devices, it is unlikely that such a problem will be discovered in a lab environment. You need to strive for ecological validity in your studies. Field studies are more ecologically valid than lab ones. Using participants' own devices, studying interactions in their natural habitat, and relying on their actual motivations are, among other factors, what ensures the ecological validity of a method's design. The closer the design of the research is to the app's real-world context of use, the more useful the data you will get.

The second reason why I chose to talk about field research is the broad applicability of both field visits and guerrilla usability testing. The combination of these methods covers almost all project situations and phases of your design process. But before we start discussing the applicability, let me draw two pictures for you. Imagine yourself as the designer of a tablet app whose purpose is to help shop assistants of a large chain of stores respond faster and better to customers' questions. The application has already been designed, developed, and sent for trial operation in several stores. In order to evaluate the usability of this app in its real-world context, you arrange observations of shop assistants while they communicate with customers, having, of course, agreed on this with the store's director in advance. In such a situation, you cannot interrupt the shop assistant and the customer to ask questions. You can only observe, pretending to be an intern, and take notes on everything that interests you. This is a typical situation for observational field visits. For guerrilla studies, everything is a little different.
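A minimal sketch, not from the lecture, of the duplicate-tap problem mentioned above: one common way an app can guard against it is to ignore taps that arrive within a short window after an accepted tap, so a slow response does not cause the same function to be invoked twice. The names TapDebouncer and submitOrder are illustrative assumptions, not parts of any real app discussed here.

```kotlin
// Hypothetical guard against impatient double taps on a slow device.
class TapDebouncer(
    private val windowMillis: Long,      // how long to ignore repeated taps
    private val action: () -> Unit       // e.g. submitOrder(): the slow, user-visible operation
) {
    private var lastAcceptedAt = 0L

    fun onTap(now: Long = System.currentTimeMillis()) {
        if (now - lastAcceptedAt < windowMillis) return  // duplicate tap: ignore it
        lastAcceptedAt = now
        action()
    }
}

fun main() {
    val debouncer = TapDebouncer(windowMillis = 500L) { println("submitOrder() invoked") }
    debouncer.onTap()    // first tap triggers the action
    debouncer.onTap()    // the impatient second tap, milliseconds later, is ignored
    Thread.sleep(600L)
    debouncer.onTap()    // a deliberate later tap goes through again
}
```

On a high-end lab device the response arrives quickly and the second tap rarely happens at all, which is exactly why this class of problem tends to surface only on the cheaper, older devices you meet in the field.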
You go, for example, to a cafe and offer everyone interested a free drink, and in exchange ask them to perform a couple of tasks using a mobile app. Of course, this is a different app. Even though you choose people from the target audience and sometimes even use their own phones, the study setting is more artificial: participants perform tasks given by you. For instance, you can test a new feature of a service with a very broad target audience, such as Dropbox, [INAUDIBLE]. There is some probability that a user might encounter the task of using this feature in a cafe, but this probability is not very high. Even so, the design of the user's interaction with the app is pretty close to the real usage context of the feature.

Field visits aimed at discovering interaction problems are most suitable when the context of use cannot be simulated, as in that example with the mobile app for shop assistants, or when simulating the context would take more effort than conducting observations. For instance, you can simulate a GPS signal to test navigation software, but in that case you also need to provide a virtual environment to make the experience feel real, which would take too much effort (see the location-simulation sketch after this segment). Field visits are best for evaluating the usability of apps for which features of the usage context, such as a user's location, work activities, integration with physical devices and so on, play a determining role in interactions. Imagine an app that allows you to see a restaurant's menu and even place an order when you are in the restaurant. Field visits are well suited for evaluating such context-aware interactions.

Of course, the method has its limitations. Firstly, field visits require a working application, one that is already released on the market or at least a software prototype. Secondly, there has to be a possibility to observe the interaction: if you don't have access to the place where the interaction occurs, or the interaction comes up at unpredictable times, it's impossible to conduct observations. Thirdly, the method requires an experienced moderator. I'd like to remind you that, despite all its advantages, field visits are a labor-intensive method.

Guerrilla usability testing, in contrast, is a cost-effective method. It's best for evaluating the usability of mobile apps whose audience is quite broad or can be easily found in a particular place. For example, if you design an app for Burger King, I bet you know where to find participants. Guerrilla usability testing, unlike field visits, allows you to test designs at any level of readiness, from sketches to apps already in production. This method has limitations too. Firstly, it has to be possible to simulate the app's context of use. For instance, guerrilla usability testing cannot be used to evaluate the mobile app for shop assistants. In contrast, imagine you test Google's mall directory, a screenshot of which is presented on the left. It works as follows: when a person approaches a mall, a notification appears. A tap on this notification directs the person to a screen that allows her to find shops and presents an indoor map of the mall. This kind of interaction can easily be emulated with guerrilla usability testing: you just catch people at the entrance to the mall and give them tasks to perform. Secondly, if an app's target audience isn't broad or isn't concentrated in a particular place, you probably won't find participants so easily. In this case, you need to use other methods of recruiting.
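As a purely illustrative sketch, not from the lecture: when we talk about simulating the usage context, for example a GPS signal or the "approaching the mall" trigger, a prototype might do it by depending on a small location interface and swapping in a scripted fake for the test session. All names here (LocationSource, ScriptedLocationSource) and the coordinates are hypothetical.

```kotlin
// Illustrative sketch of simulating location for a prototype under test.
data class GeoPoint(val latitude: Double, val longitude: Double)

interface LocationSource {
    fun currentLocation(): GeoPoint
}

// A production build would wrap the platform's real location APIs behind this interface.
// The test build replays a scripted route, e.g. "walking up to the mall entrance".
class ScriptedLocationSource(private val route: List<GeoPoint>) : LocationSource {
    private var step = 0
    override fun currentLocation(): GeoPoint =
        route[minOf(step++, route.lastIndex)]
}

fun main() {
    val mallEntrance = GeoPoint(55.7558, 37.6173)   // made-up coordinates
    val source: LocationSource = ScriptedLocationSource(
        listOf(GeoPoint(55.7500, 37.6100), mallEntrance)
    )
    repeat(2) {
        val here = source.currentLocation()
        if (here == mallEntrance) {
            println("Show the 'welcome to the mall' notification")  // the context-aware trigger
        } else {
            println("Still approaching: $here")
        }
    }
}
```

In a guerrilla session at the mall entrance, of course, no simulation is needed at all, which is exactly the point about choosing the cheaper setup that fits the real context.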
Once you have learned how to do guerrilla testing, it won't be a problem for you to conduct lab usability testing, too. All right, enough with the applicability. Let's discuss the structure of these two methods.

To conduct field visits, the search for participants should start in advance, since observations sometimes last for several hours, depending on the activity being studied, and it isn't easy to find people who are ready to allocate that much time. The process of recruiting participants is described in great detail in the second week's lecture of the same title. At the preparation phase, it's necessary to choose the parts of the user interface that are of greatest interest for evaluation and to prepare for note taking. This can be done in several ways, which we will discuss in detail in the next lecture. It is desirable to begin the analysis of observational data before the end of the data collection phase. By analyzing the data immediately after each observation session, you save time: the details are still fresh, so you rarely need to turn to the recordings of the study sessions.

The preparation for guerrilla testing differs from that for field visits. It consists mostly of creating a test plan, which is what we will create in this class this week. I would like to note a phase specific to usability testing: the pilot. The idea is extremely simple: in order for usability testing to be successful, you need to test the testing itself. We are not interested in the results of the pilot sessions; what's valuable is the fine-tuning of the testing design. For the pilot, it isn't necessary to look for participants from the app's target audience; it can be conducted with colleagues or friends. Conducting pilot sessions allows you to identify the shortcomings of the test plan and to roughly estimate the duration of future sessions. According to usability-lab experience, each real testing session is usually longer, about one and a half times as long as a pilot session. Since we're talking about guerrilla usability testing, recruiting occurs simultaneously with data collection. The phases of data analysis and redesign, marked with dashed lines, imply making data-informed design changes during the course of the study. Recall our discussion of different variations of the method, paper prototype testing and rapid iterative testing and evaluation, at the beginning of the previous week. Of course, you can postpone both analysis and redesign to the end of data collection if you want.

And a final word on data analysis. Even if you plan to use both methods for evaluation purposes only, you'll gather data on the usage context, not only explaining interaction problems but also expanding your knowledge about the lives of your users. To analyze these two streams of data, you should use different processes: the left one is examined at the end of the second week, the right one at the end of the third.

To sum up, field visits are best suited for usability evaluation of apps for which features of the usage context, such as a user's location, work activities, integration with physical devices, etc., play a determining role in interactions. In field visits, the course of a participant's activity guides the moderator through the whole study. The study's setting is close to the real context of use and allows you to gather rich data about the lives of your users. The setting of guerrilla usability testing is a little more artificial: participants perform tasks given by a moderator.
The method is best for evaluating the usability of mobile apps whose audience is quite broad or can be easily found in a particular place. We will discuss how to prepare for and conduct both methods during the rest of this week. Thank you. [MUSIC]