This week we're going to dive into the usability hypothesis. Of all our hypothesis areas, this is probably the one you're most used to thinking about, the one teams test most often. Is it the most important, though? I would say no. They're all of equal importance, and it's really important to heed the signals when you've got weakness in one of the other areas before you go forward and build software. Let's talk a little bit about what that specifically means and how you might identify such an issue.

Here we are: the usability hypothesis sequentially comes after its friends that we've been dealing with in the previous lessons. In this continuous design process, we have a really structured view of how they relate to each other, which is that the demand hypothesis is our pivot point. Do we have a proposition that's better enough than the current alternatives at doing this particular job that it's investable to go off and actually build something? We want to make sure of that because, if we're not sure of that or we're wrong, then everything we do over here in this right diamond on the solution is going to add up to zero, or multiply out to zero, however we want to look at it. Also, while you're testing your usability you need to control for and isolate motivation.

The reality is that teams generally, systematically under-invest in testing motivation, because Lean Startup is messy, because an MVP doesn't look and feel and present as real or as tangible as a prototype. I think those are some of the reasons, and they systematically over-invest in testing usability. If, while you're testing usability, you see that you have gaps in your understanding of the value hypothesis, or that really that gap is in your understanding of the persona or the job to be done, it's important to heed those signals and make time to test those other hypothesis areas. As you think about this and apply this material to your work, it's important to think about it in Agile terms: iterate, focus on the right thing at the right time, and make time for what's actually important versus just the gravity of how exciting it is to make software and put it out the door.

All that said, what we're going to look at here is this: just like we learned with the Lean Startup and the demand hypothesis material how to isolate motivation and specifically test for it, we're going to look at how you isolate usability and specifically test for it. Why? Because if we try to test for these things at the same time, we're going to be using a method that's probably wrong for one or the other of them. We're not going to get a little bit of both; we're just going to get a bunch of indeterminate junk, and that's really just wasteful.

So let's look at how this unpacks in a real-world example. We have our friend Trent, the technician. We've gone out and talked to him, learned about him. He has this certain job to be done that we're looking at. We know about this alternative, and we've used Lean Startup, we've used an MVP, to validate this value hypothesis in a way that we feel is satisfactory and investable for moving forward and thinking about how we might build a solution. So what do we do next? Critically, we write good Agile user stories. Good doesn't mean that they're permanent. Just like a good persona is part of the conversation and constantly changing, the user stories that you write should be the same. They should be available to everybody to edit and change.
They should be a focal point for your conversations if they're useful, if they're doing this job of serving the usability hypothesis well. This is an example of such a user story, and storyboards, I think, really help unpack those user stories.

User stories, at least in our specialization and in general, often have this format: as a persona, I want to do something so that I can realize a reward. Now, declaring the persona is a good idea for all the reasons you've already learned about. Nobody generally seems to have trouble with the "I want to do something" part; software is going to do something. This third clause is often neglected, and yet it's really the pivot point, the focus of our usability testing. This idea of a testable reward is really critical to writing good user stories and using hypothesis-driven development with your team to think about outcomes over output.

What I mean by reward here, or what this generally means with user stories, is that we have an element of testability. So for example, if I'm a salesperson and I'm supposed to enter all the sales calls I go on, rightly or wrongly, I would go to, let's say, Salesforce or my CRM on my phone, put in who I just visited, and press Save or Submit or whatever. The reward would be that I get clear feedback: "Hey, this is saved, you're all set, move on with your life," or, "Hey, no, there's some problem, and this is what it is and how you deal with it." So it's not a winning-the-lottery kind of reward. That's what we mean by reward here.

Having good user stories with testable rewards is the centerpiece of everything that we're going to execute here this week with the usability hypothesis. It is central to the formulation of usability testing, and in fact it's the primary input for exploratory and assessment usability testing. That's one of the most promising, most useful opportunities for you to test usability well, test it often, and make a habit out of it. It is absolutely the centerpiece for your prototyping, and it is also the centerpiece of your app analytics, any field analytics you're going to pair with your qualitative testing. And so that's what we're going to look at: how we declare nice, clear hypotheses with good user stories this week, and then how we carry those forward into qualitative usability testing and how we pair them with strong analytics out in the field, so that we have a quantitative and a qualitative picture that helps us constantly test our usability hypothesis and ensure good usability for our users.
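To make that pairing of user stories and field analytics a bit more concrete, here's a minimal sketch in TypeScript of what instrumenting a testable reward might look like for the salesperson example above. The endpoint, the story name, and the trackRewardEvent helper are all hypothetical stand-ins for whatever CRM backend and analytics client your team actually uses; the idea is simply that the event you log mirrors the reward clause of the user story, so your quantitative data out in the field answers the same question your qualitative usability testing asks.

```typescript
// A minimal, hypothetical sketch: the endpoint and event names here do not
// come from a real CRM or analytics product. The point is that the analytics
// event mirrors the user story's testable reward (clear save/failure feedback).

type SalesCall = { contactId: string; visitedAt: string; notes?: string };

type RewardEvent = {
  story: string;                // which user story this reward belongs to
  outcome: "saved" | "failed";  // did the user get the reward?
  elapsedMs: number;            // time from submit to feedback, a usability signal
};

// Stand-in for whatever analytics client the team actually uses.
function trackRewardEvent(event: RewardEvent): void {
  console.log("analytics:", JSON.stringify(event));
}

// Save the call and deliver the story's reward: clear, immediate feedback.
async function submitSalesCall(call: SalesCall): Promise<string> {
  const started = Date.now();
  try {
    // Hypothetical endpoint standing in for Salesforce or another CRM API.
    const response = await fetch("/api/sales-calls", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(call),
    });
    if (!response.ok) {
      throw new Error(`Save failed with status ${response.status}`);
    }
    trackRewardEvent({
      story: "log-sales-call",
      outcome: "saved",
      elapsedMs: Date.now() - started,
    });
    return "Saved. You're all set; move on with your life.";
  } catch (err) {
    trackRewardEvent({
      story: "log-sales-call",
      outcome: "failed",
      elapsedMs: Date.now() - started,
    });
    return "Something went wrong saving this call. Please try again.";
  }
}
```

In a usability session you'd watch whether the tester notices and trusts that feedback; in the field, the saved and failed events and the time to feedback give you the quantitative half of the same picture.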