Let's do a little recap of the Lean Startup. We learned about this in course two, and here we're going to go into much more depth about how to instrument experiments. You know your practice of Lean Startup is working when the outcomes you're trying to achieve with your product and promotion are tightly linked to testable assumptions, or hypotheses. We talked about how everything you produce in these processes is treated as a hypothesis: your personas, your problem scenarios, these are all just working assumptions that you're continually improving. Most centrally in the practice of Lean Startup, we're working on the value hypothesis: the idea, generally speaking, that we have something that is better enough than the alternatives at solving our users' problem that they're going to want to use it, buy it, whatever we're trying to achieve with them. You may remember that with the Lean Startup we're essentially applying the scientific method to our practice of product design and product development. We start with an idea, we tease out hypotheses and experimental designs, we execute those, and then we try to reach a conclusion. That gives us a process for saying either, okay, look, we minimized the expenditure on this but it was wrong and we need to try something else, or, this looks good, and we can have the confidence that it's investable and worth scaling up a bit.

We talked about how minimum viable products are the vehicle we use to execute these experiments. Ideally they're not products at all, but rather product proxies where we're testing the value proposition with a minimum of time and expense. Then, of course, we eventually build actual product and test that too. So there's a before-the-fact phase where we're learning why, and whether this stuff is generally valuable, and then we continue to focus on what: what's actually happening in the field with our product, because we're always running this loop of experimentation. We'll talk more about that, and about some related work from Lean UX, as we continue through this. That's the difference between asking why and asking what.

When we actually execute this, we've said it's a loop that's continually happening, but for any given topic you've got to start someplace. In the loop I've been showing you, we've implied that hypothesize is at the top, and I think that's good because it pushes you to ask what you're really focused on. But as you learn to practice this stuff, you've got to ask yourself where you are. Do you need to go learn more before you can even get there? A lot of the time that's the case, and that's fine: go out, talk to customers, find out what's going on with them, ask why. Don't go build something if you don't feel you understand the user well enough to make it plausible that the experiment is even worth running, because experiments are expensive; they still take work to set up. In fact, this is often a point of confusion: in the traditional Lean Startup framing you've probably heard of the build, measure, learn loop, and that's how this process is often presented. It kind of implies that you should build first, but in fact most practitioners will tell you, no, you should really start the loop at the learn step.
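To make the "everything is a testable hypothesis" idea a little more concrete, here is a minimal, purely illustrative sketch in Python (none of these names or numbers come from the course) of how a team might record an assumption with an explicit metric and pass/fail threshold, so that the "reach a conclusion" step is forced to be a binary call rather than a matter of opinion:

```python
from dataclasses import dataclass

@dataclass
class Hypothesis:
    """A testable assumption: what we believe, how we'll measure it,
    and what observed result would count as validation."""
    belief: str                    # the assumption in plain language
    metric: str                    # the observable we will instrument
    threshold: float               # minimum value that counts as validated
    observed: float | None = None  # filled in after the experiment runs

    def conclusion(self) -> str:
        if self.observed is None:
            return "experiment not yet run"
        if self.observed >= self.threshold:
            return "validated -- investable, consider scaling"
        return "invalidated -- loop back and ask why"

# Example usage with made-up numbers:
h = Hypothesis(
    belief="Technicians will use the parts lookup at least weekly",
    metric="weekly active technicians / total technicians",
    threshold=0.5,
)
h.observed = 0.62
print(h.conclusion())  # validated -- investable, consider scaling
```

The point of the structure is only that the threshold is written down before the experiment runs, which keeps the team honest at the conclusion step.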
So one of the things you'll learn as we go through this in more depth, and as you enhance your own practice, is to ask: where are we in this loop for a given assumption or experiment? Think about that as you approach your own particular problems.

I mentioned that we're trying to create a culture of experimentation here, so we're trying to use this technique not just for really big ideas and really big questions, but also for tactical things that are nevertheless very important, and that will, over time, occupy most of the hours and investment we put into the product. Let's look at a really small example for HVAC in a Hurry. What was the product, and why were they working on this? Why did they think it made sense? Let's talk about this facet of the results page: when the technicians search for a part, they see a set of results. If you remember, we had this idea that it might be good to sort the results by the number of times each item was ordered. So, for example, here you saw that these were ordered 87, 12, and 2 times, so they appear in descending order, and the idea is that the one ordered most often is the most likely to be the right result (a small sketch of this sorting logic appears just below). That was to deliver on these narratives here.

Okay, so why did we build this, and how will we know whether it was really a good idea? Well, we had the assumption that if we sort these things, and generally create this feature, then the HVAC technicians will use it, and it will increase their performance. That was our overall hypothesis and assumption in this general area. Now let's look at how we got there. The HVAC in a Hurry team started out with a very general envelope they were operating within: let's build an app, let's invest in software to increase the performance of the business. Looking around, they decided that since it's really the technicians out in the field who drive most of the revenue for the company, and since the advice from Vince, who runs that part of the business, was that the technicians struggle a lot in the field with the support they get, they would start with the technicians. Then, if you remember, they had this idea that we should create a way for the technicians to get parts documentation. But when they went out and asked the technicians, what's on your A-list, what's hard? they found that documentation wasn't on the A-list at all; in fact, one of the most important things was just finding the pricing and availability for a part while out in the field with a customer, so that they know what their next step is and can inform the customer. And so they arrived at the assumption I mentioned a minute ago: if we build something where the technicians can look up parts availability and pricing, then they'll use it, find it useful, and it will increase their performance. They created the product proxy, the concierge MVP we talked about earlier, and then they looped through and created the user stories and prototypes we just saw to test this.

So how did they structure their experiments and assumptions? Well, if you remember, they had a day-zero assumption about usability. They created an interactive prototype, and they tested it with the technicians, anchored against those user stories. That was how they started, before they built or certainly released anything.
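As promised above, here is the sorting logic as a minimal sketch in Python. The field names and data model are hypothetical (the course doesn't show HVAC in a Hurry's actual schema); the sketch just shows sorting search results in descending order of how often each part was previously ordered, using the 87/12/2 counts from the example:

```python
# Hypothetical search results: each part carries a count of past orders.
results = [
    {"part": "Blower motor",   "times_ordered": 12},
    {"part": "Capacitor",      "times_ordered": 87},
    {"part": "Ignition board", "times_ordered": 2},
]

# Sort descending by order count, so the most-ordered part -- per the
# hypothesis, the one most likely to be the right result -- comes first.
results.sort(key=lambda r: r["times_ordered"], reverse=True)

for r in results:
    print(r["times_ordered"], r["part"])
# 87 Capacitor
# 12 Blower motor
# 2 Ignition board
```

The mechanics are trivial; the interesting part is the hypothesis attached to them, which is why even a tiny feature like this gets instrumented and tested.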
And then on a 30-day basis, if you remember, their metric was: we want to make sure we've instrumented this, at least into the logs, so that, even if it's a manual process, we can look after the fact at whether they're using it. Here they're asking, what happened once we released this? One of our most primary questions is, are they using it or not? Because if they're not using it, all the other downstream stuff about how it might affect the business or how it might change the life of the technicians is moot. It's just not on their radar for some reason, and we need to loop back before this point and ask why. Then, on a 90-day basis, they asked: are they still using it, so is this still relevant, and are we increasing their performance? Are their billable hours per week going up? This was something they knew they could track in the system, comparing cohorts of technicians who were using this versus not using it. Jobs completed per week was another good metric for looking at the outcome (a rough sketch of this instrumentation appears at the end of this section).

So they moved from usability questions to relevance questions: are they motivated enough to use it, is it usable enough to be used, and let's just look at what happens there. And then, even if they are using it, is it actually achieving the overall outcomes we thought were important? Because that's really the primary envelope we're operating within: not just to sort the order of products and create software, but to increase the performance of the firm.

So that's an example of how we apply Lean Startup on a very tactical basis, to a relatively small piece of functionality, and how we attach a question like, gee, should we sort by relevance? to the larger picture of our experimentation. You've also recapped what the practice of Lean Startup is and how it really works, and if you want a refresher on those fundamentals, you can go back and find that material in course two, in the module on motivation springs. So now we're going to move on, and we're going to talk about how to actually create experiments and apply this stuff to different situations.
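As noted above, here is a rough sketch in Python of the 30- and 90-day checks. All event names, fields, and numbers are hypothetical, not from HVAC in a Hurry's actual system; the sketch only shows the shape of the two questions: counting usage events from a log, and comparing billable hours per week between technicians who use the feature and those who don't:

```python
from statistics import mean

# --- 30-day question: are they using it at all? ---
# Hypothetical usage log: one record per part lookup.
usage_log = [
    {"tech_id": "t1"}, {"tech_id": "t1"}, {"tech_id": "t2"},
]
active_techs = {event["tech_id"] for event in usage_log}
print(f"{len(active_techs)} technicians used the lookup this period")

# --- 90-day question: is it increasing their performance? ---
# Hypothetical weekly billable hours for four technicians; t1 and t2
# use the feature (per the log above), t3 and t4 do not.
billable_hours = {"t1": 31.5, "t2": 29.0, "t3": 26.0, "t4": 27.5}
users     = [h for t, h in billable_hours.items() if t in active_techs]
non_users = [h for t, h in billable_hours.items() if t not in active_techs]
print(f"users avg: {mean(users):.1f} h/wk, "
      f"non-users avg: {mean(non_users):.1f} h/wk")
```

A real analysis would need more care: enough technicians per cohort, a long enough window, and some control for job mix. But the shape of the 90-day question is exactly this: cohort using the feature versus cohort not using it, compared on an outcome metric the business already cares about.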