I mentioned that one thing we want to do whenever we can is design experiments where we get nice, specific data about either usability or motivation, so we have better hypotheses and a stronger foundation going into field analytics, where we can really only observe the what of what customers actually do and don't have a strong sense of the why unless we have this foundation. So let's quickly recap how we do that with usability: how we should precede our UX analytics with qualitative testing, where we isolate usability and build a strong foundation in how users behave. We iterate quickly from a lot of possibilities to something we think is pretty good for the 1,000 people who will actually use it, instead of the 10 we're testing with in this controlled setting.

So the basic idea is that we're transitioning from these things, we need this foundation, and we have usability hypotheses where we say: if we put a certain set of patterns in front of the user, those patterns are going to make it really easy for them to act, get our proposition, and realize individual rewards from progressing through our user experience. So really what we're asking is: how do we shift this curve? How do we improve it specifically with regard to usability?

The way we do that is we start with really great user stories, and those are really the centerpiece of our testing. So for example, with Trent, we have this general idea, and we've parsed this out with our customer analytics: this problem, this alternative. We're going to deliver this proposition, and then we slice up the user experience that we're going to deliver in software with nice, strong, agile user stories. So for example, this is an epic story which unpacks into a series of child stories. The stories identify the persona, say what they want to achieve, what their individual atomic goal or reward is, and then we have testability: how would we assess whether the user got this or not?

One thing you should do before you lean on your field analytics, where you have a lot of distance from the user, is run qualitative usability testing. You unpack the steps that a given user will take, and you control for motivation and just test usability. How do you do that? Well, if you joined me for Hypothesis-Driven Development, you've seen this seven-step model. If you didn't, there are course resources where you can read about it. The basic idea is that a user goes through these steps for every given interaction. They have some goal, which in our terms here is really equivalent to the job to be done or the problem scenario. In our previous example, Trent is trying to get a replacement part or parts to a job site. They have this need whether they use our software, call the office, or do whatever it is they do today; that's their goal. In a usability test, we're going to have a prototype or working software, and we're trying to control for motivation and just test usability.

So in this reflective layer, our users decide among all the alternatives available to them. What we're hoping is that our proposition is enough better than the alternatives that they're motivated to use it. So really, primarily, this step deals with motivation, which we're trying to isolate. So in a usability test setting, we have a subject, and we don't ask them, "Hey, would you like to use this software to do this?" We're telling them, "Hey, would you show me how you would look up this part that has this part number on it, using the software that I'm going to put in front of you?"
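Before we go deeper into the behavioral layer, a quick aside: if it helps to see the story structure I mentioned a moment ago written down concretely, here's a minimal sketch in Python of an epic unpacking into child stories, each with a persona, an action, a reward, and a testability criterion. The field names and the Trent values here are just illustrative, my own shorthand rather than a prescribed format:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class UserStory:
    # One atomic child story: who it's for, what they want, why, and how we'd test it.
    persona: str       # e.g. "Trent, the field technician"
    action: str        # what they want to be able to do
    reward: str        # the individual atomic reward ("so that ...")
    testability: str   # how we'd assess whether the user actually got this

@dataclass
class Epic:
    # An epic story that unpacks into a series of child stories.
    summary: str
    children: List[UserStory] = field(default_factory=list)

# Illustrative values only, based on the Trent example above.
replacement_parts = Epic(
    summary="Trent gets a replacement part to the job site",
    children=[
        UserStory(
            persona="Trent, the field technician",
            action="find a part by its part number",
            reward="see its pricing and availability so I know my next steps on the repair",
            testability="Can the subject find the part and say what they'd tell the customer?",
        ),
    ],
)
```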
In this behavioral layer, the user mostly unconsciously specifies the set of steps, because they've used software before and they have certain learned behaviors. What we're asking ourselves is whether we've put a pattern in front of them where they already have a mental model of how this form is going to work, this drop-down box, this search progression, so that it's as easy as possible for them to achieve their goal. Then they're going to perform various actions; here they're acting on our software or a prototype, so the "world" in this model is our software or prototype. Then they're going to perceive that something happened. So for example, they went in and pressed search, and they're going to get something back. Do they understand that transition? If it's something more subtle, do they realize that what they did produced some feedback, that they've gone from point A to point B in the journey?

Then they're going to interpret these results. We have a usability hypothesis, and we're isolating usability in this clinical setting; this is what we're testing and what we're assessing. This is where we decide whether the test gives us a positive or a negative result, yes or no. To test this, we ask them, for example, "Hey, what are you seeing right now?" If they were searching for a part and a part comes back, we're trying to assess whether they understand that they're seeing the part they just searched for. It's a simplified example, but that's basically what you're doing. Whatever it is that we asked them to try to do, do they understand what happened? Let's say it's an error. Do they understand what the error is, how they should act on it, and how to loop through this process again? The compare step is where, in the real world, our user decides: did I get what I wanted here, or do I need to go to one of the alternatives? Again, we're not worried about this in our usability test because we're controlling for motivation.

Let's take a quick look at an individual test item from a usability test that we might run. Now, again, if you joined me for Hypothesis-Driven Development, this is going to look very familiar to you. If you didn't, there are course resources you can reference for a tutorial and examples of how to create such a usability test. The great thing is that for the research objective, particularly in early-stage exploratory or assessment testing, the primary research items are just: hey, how are we doing on this user story? Your usability hypothesis really is contained in your individual user stories, which is one of the things I love about user stories and their pairing with Agile.

So for example, we might say: how are we doing on this story? Here we're going to look at one of the individual child stories from the previous example, where Trent, the technician, knows the part number and wants to find it and see its pricing and availability. "I know the part number, and I want to find it in the system so I can figure out my next steps on the repair," or, if we factor this out, "so I can see its pricing and availability." That, in general, is the goal. The trigger is: I need a replacement part. The goal is: I know my next steps, because I know the pricing and availability of this part, like how soon I can get it and how much it will cost, and I can go talk to the customer and tell them this. That's in the epic, and that's how we're going to assess this: hey, what would you go tell the customer now?
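To make that concrete, here's a rough sketch, again in Python and again just illustrative rather than any official template, of how that individual test item might be captured: the research objective is just "how are we doing on this story?", the trigger is the motivation we supply, and the assessment is the question that lets us score the result:

```python
from dataclasses import dataclass

@dataclass
class UsabilityTestItem:
    # One item in the moderated test, tied directly back to a single user story.
    research_objective: str  # usually just "How are we doing on this story?"
    story: str               # the child story under test
    trigger: str             # the motivation we supply, so we only test usability
    assessment: str          # the yes/no question we score at the end

find_by_part_number = UsabilityTestItem(
    research_objective="How are we doing on the 'find part by part number' story?",
    story="As Trent, I know the part number and want to see the part's pricing and availability.",
    trigger="I need a replacement part (we hand the subject the part number).",
    assessment="Does the subject understand they're seeing the part they searched for, "
               "and can they say what they'd tell the customer about price and availability?",
)
```

The point is just that the story, the supplied motivation, and the yes-or-no assessment all travel together, so the moderator isn't improvising on the spot.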
Then we're seeing whether they actually find the pricing and availability, and that's at the end of the whole usability test, the entire epic. On this particular step, we have a moderator guide. This is where we organize ourselves to think about how we're supplying and controlling for motivation. So this is where we say: all right, hand them the part and ask them how they would look this up on the prototype that they're seeing. We're going to give them a cheater, which is a printed-out version of the prototype of the software where they write it in, if it's a static prototype in, let's say, Balsamiq; if it's working software, they can just go ahead and type it in. This is where we deal with the nuances and specifics of controlling for motivation, which is worth doing but takes a little bit of work to do properly in these settings. So a job of the moderator guide is, for the goal that's implied in the third clause, to make sure we're supplying motivation properly and not leaving ambiguity about, hey, do you want to look up this part, or what part would you want to look up? That's not what we're trying to do. We're trying to control for motivation.

Then the output is our ability to assess and assign a positive or a negative to this result. Can the subject do this, and do they understand the results of the search process? In other words, do they understand that they're seeing the part that they searched for?

So this is how we execute qualitative usability testing, where we're able to control for motivation and test usability. This is a really good way both to iterate on your designs and to strengthen your intuition, so that as you're observing your field analytics, where you're just getting the what of user behavior, you have some foundational understanding, some intuition about what might be going on, so you can iterate intelligently and purposefully. The reverse is also true. If you're mired in your analytics and you're like, "Oh my God, I just do not know why the users are doing this," and believe me, I've been there many, many times, then that's a sign that maybe you should unpack and do some qualitative testing of usability, or maybe even go out and talk to some subjects. So that is how qualitative usability testing fits into the larger picture of UX analytics.
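One last quick illustration before we move on: when you're assigning those positives and negatives across a handful of subjects, the output is simple to tally, and a low pass rate on even five or ten subjects is your cue to iterate on the design now, before the 1,000 users you can only watch through field analytics. The subject results below are made up, just to show the shape:

```python
from collections import Counter

# Made-up results from a small panel, one call per subject for the item above.
results = {
    "subject_1": "pass",
    "subject_2": "fail",    # didn't realize the search had returned the right part
    "subject_3": "pass",
    "subject_4": "unsure",  # found the part but couldn't state pricing or availability
    "subject_5": "pass",
}

tally = Counter(results.values())
pass_rate = tally["pass"] / len(results)

print(dict(tally))         # e.g. {'pass': 3, 'fail': 1, 'unsure': 1}
print(f"{pass_rate:.0%}")  # 60%
```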