User stories are going to be a really big part of how you observe where the design process is for your team. Are the user stories vivid and specific, and yet do they avoid prescribing the implementation? Are you getting a lot of really great questions back about the user stories? These are the things to look for when you're observing the quality of the user stories.

In this video, we're going to look at how we can take some of the work we've done on finding the right problem, on going out and talking to users, and apply it to writing user stories, and also use our user stories as a place where we can observe: do we really know enough about this? Because if we don't, we want to pause, go do just enough research to sort that out, and then come back, so that we don't bring software that's not meant to be, or not well thought out, into the product pipeline and create waste.

Now, you've seen what this persona and job-to-be-done hypothesis pairing might look like. And then over here, we're looking at how we test whether our proposition is enough better than the alternatives. The big question is, how can we use all of that to move over here and write better user stories, since the user story is really about the means by which we deliver on the proposition?

The reason this is so important is that these user stories should be relatively specific. Your options, in general, are narrowing as you go through this process: you're picking specific users, you're picking specific jobs to be done, and you're winnowing the propositions you're going to put into the product pipeline further with your demand hypothesis testing. So even while we're considering a bunch of possibilities for some of the items within these areas, overall we should be getting a lot more specific. And at the same time, our cost is escalating. It's very inexpensive to go out and talk to a few subjects. It's more expensive to run an MVP, though probably not as expensive, in most cases, as going in and doing a detailed design. And ultimately, for a lot of digital companies and digital teams, the costs are going to be in product development: coding the software, testing and deploying it, keeping it healthy. So we want to make sure that as we go through this, we're A, doing good work, and B, detecting when we have a question that we really ought to go answer.

A counterexample of where we don't want to be is writing user stories like this one, which I call the red button story: as a shopper, I want to click a red button so I can complete my purchase. Clearly this is something that happens all the time. People are busy; they just slam these into whatever system they're using and call the design part done. Obviously, this user story doesn't say much other than that the button should be red, and even if that's worth saying, it's probably better said with a prototype, a sketch, or just a simple note. Our objective with these user stories is to get at an underlying job to be done with a specific solution, so we can test whether or not that's working, and to zero in on what's really important and prioritize those things. A stronger story would name who the shopper is, what job they're trying to get done, and an outcome we could actually observe, rather than dictating the button color.

With these techniques, you can avoid a couple of failure modes I see quite a lot. On one end of the spectrum is just doing what your user says, very literally, and assuming that's user-centric. It's not; users don't know how to design the product for you. They have their own jobs and perspectives.
On the other end of the spectrum is just saying, hey, we're the experts, we know what's best, we'll build it and they'll like it. Both of those are wrong. The real answer is to go out and observe what's going on with users, because they can tell you what's on their A-list; to test the propositions, which we'll learn about later in the specialization (or you can read the resources on lean startup); and then to observe how well they can use the product, with these user stories as our anchor for testability. We want to watch what they do, not just what they say, when it comes to specific usability.

We looked at the relationship between personas, jobs to be done, and alternatives, where we make sure we've got a real alternative out there that we can factor out and test our job to be done against. Another good thing to do with these jobs to be done is to think about whether they're too big and abstract, or too small and overly specific, more of a feature than a product, or part of a product, that a team is going to work on.

This one is probably about the right size, I would say, for Enable Quiz. I think the interesting question is what the parent of that would be, so let's take a look at this first. This is probably about right: we know what the alternatives are, in this case, for both of the personas involved in this transaction, Hector the HR manager and Francine the hiring manager. The parent of this job to be done would probably be something like hiring technical talent in general. It's good to know that, but for the startup that's probably too big a topic to go after, too expansive and general a job to be done. And if we look at individual jobs to be done that we know are part of this screening of technical talent, the HR manager preparing a new quiz, testing it, exchanging notes, these are all things you wouldn't build as their own standalone product. They're more child jobs to be done, really more means to an end of getting at this job to be done that the Enable Quiz team is focused on.

Here's an example for our other example company, HVAC In a Hurry. The team right now is focused on helping technicians complete HVAC repairs, along with the dispatchers. Executing HVAC service contracts is kind of the whole business of the company, but that could mean a lot of different things, and it's probably too big for an individual product team. Down here we have things that exist, but they're probably too specific even for a team charter for, say, an individual agile team developing some internal software.

So here's the habit you want to get into. We all go to each other and say, hey, you've been using the app, or we're working on this, wouldn't it be cool if we did X? The person implicitly does have some kind of idea about why it might be good and who it might be good for. The important thing isn't to just give an answer like yes it would or no it wouldn't, but to ask: hey, let's unpack that. What problem would we be solving, and for whom? How would we know if that's working out for them? This way you can do two things. If the person has a good idea, you can get it all textured out. If the person has an idea that maybe isn't quite the right idea right now, you can help them think through why it isn't, rather than just telling them, no, we don't want to do that.
So that's a way to both find out what really matters to the customer and to make sure that your user stories are A, good, and B, a signal to you when you need to go back, do just enough research to find out what really matters to the customer, and avoid waste.