All right, we've talked about creating good inputs, driving good collaboration, and the cyclical nature of this job of learning. And really, the punchline there is that there's a limit to how far good inputs will get you. Think about your work with this bullseye metaphor. We talked previously about how the hallmark of a good team is that they want to hit the bullseye: they want to be specifically right or specifically wrong, not just generally trying to hit the target. The hallmark of a low-functioning team is that they're trying to be just sort of not wrong about the features they create rather than specifically right. So if you don't have a good narrative, you're probably out here on the outer rings. Sometimes you'll land closer by luck, but you're probably not hitting the bullseye very often. If you have a great narrative and great influence, you're probably in this inner zone. You're getting close; you can get pretty good. But it's only when you create this cycle of really driving at the bullseye and seeing whether you're right or not that you'll be able to consistently hit it. And the bonus is that you can discard all the darts you found out weren't hitting the mark.

So there is a limit to good inputs. It has to do with building good software and linking the jobs of proposition design and product design to the job of software development. You learned a lot about that in courses one and two, but ultimately it's very cyclical. It's always important to ask why we think this is going to be valuable to the user. We are not perfect, and we will never know that for sure until the user actually uses the product, and they will use it in weird, crazy ways that we can't possibly predict, because that is just the nature of things. We've designed observations, whatever those are; hopefully there are specific metrics tied to them, but that doesn't always have to be the case. We've designed a way to look at this new feature, at every single feature we're doing, as an experiment and figure out whether it was specifically right or specifically wrong. And we've tightened our implementation to the point where we're going to know that.

So let's go back to one of the specific examples we used. We have a screen where a user needs to input something, and let's say that when we brainstorm, there are ten things that seem plausible to put on that screen. The low-functioning team just goes ahead and puts in all ten, because we don't want the customer to yell at us for missing something. The hallmark of a high-functioning team is that they say: look, there are these three things that we know absolutely have to be there, there seems to be no way to do this without them, and we have good evidence they belong. So they put those in, and then they look at whether they were right about that. Then they instrument observation of whether there are other things users may have needed. That could be quantitative, or it could be sitting with users and watching how they use the product over time. That's their experiment, and then they figure out whether any of the other seven things need to go in. They'll do that on the next cycle, because they're running nice, tight cycles where they're learning a lot, they're putting out things that deliver a tight, focused user experience, and they're aggressively learning.
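To make that concrete, here is a minimal sketch of the quantitative side of that experiment, assuming a hypothetical event shape: each session logs which of the three shipped fields users actually filled in, plus an optional free-text "other" note that hints at anything we left out. The field names, event structure, and sample data are all illustrative assumptions, not something prescribed by the course.

```python
# Hypothetical instrumentation for the "three fields now, seven maybe later" experiment:
# count how often each shipped field is used and how often users signal a missing one.

from collections import Counter

# Pretend these came from your analytics pipeline (shape is an assumption).
events = [
    {"user": "u1", "filled": ["name", "date", "amount"], "other_text": ""},
    {"user": "u2", "filled": ["name", "amount"], "other_text": "wanted to add a note"},
    {"user": "u3", "filled": ["name", "date", "amount"], "other_text": ""},
]

fill_counts = Counter()
missing_hints = []

for e in events:
    fill_counts.update(e["filled"])       # which shipped fields were actually used
    if e["other_text"]:
        missing_hints.append(e["other_text"])  # evidence we may need a fourth field

total = len(events)
for field, count in fill_counts.items():
    print(f"{field}: filled in {count}/{total} sessions")

print(f"sessions hinting at a missing field: {len(missing_hints)}/{total}")
```

The point of the sketch is only that the decision about the other seven fields becomes an observed result of the next cycle rather than a guess made up front.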
Another way of looking at this learning loop is to remember that we can think about having zero-, 30-, and 90-day success criteria. Zero day is before we may have even released this thing, and remember, releasing is different from finishing software at the end of an iteration. You can decide when you want to release your software to the public, but at the end of each iteration you have potentially shippable software, so you can do your user testing as soon as you have that working, potentially shippable thing. Your release decision, how many iterations you bundle into a release and which features you release, is a separate decision you have to make, and we'll talk about that in the next item, on deciding.

So the zero-day criterion is: is it usable? We give users a goal; can they achieve it? The 30-day criterion is: are they still using it after 30 days? Is it important enough to become a habit for them, or did they just try it because we mentioned it and then stop caring, so it's meaningless to them? These criteria are cumulative, of course. And then after 90 days, or it might take longer in some cases, what outcome is it achieving?

Let's say we're making an app for health and wellness. We tell users what the goal is and see whether they can accomplish it on day zero, probably before we even release. At 30 days we check whether the people who started using the app on the first of January are still using it over the course of the month to implement their New Year's resolutions. And at 90 days, or maybe 120 or even 180, are they actually healthier, if that was the outcome it was supposed to produce? Are they losing weight? Do they feel better? Are they fitter? Are they exercising more? And how will we know that? These two things interrelate: even if they start using it and make a habit of it, if it doesn't help them achieve what they're supposed to achieve, they'll stop using it. So all of these things play into the job of learning, and into the role of inputs as a cyclical thing you keep coming back to as you improve your narrative and your understanding of the user through working software that's released and users that are observed.
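As a concrete illustration, here is a minimal sketch of how the 30-day and 90-day checks might be computed from usage logs for the health-and-wellness example. The data shapes, the choice of "exercise sessions per week" as the outcome metric, and the sample values are all assumptions made for the sake of the example.

```python
# Hypothetical 30/90-day checks: was the user still active 30 days after first use,
# and did their tracked outcome improve between day 0 and day 90?

from datetime import date, timedelta

first_use = {"u1": date(2024, 1, 1), "u2": date(2024, 1, 1)}
activity = {  # days each user opened the app (made-up data)
    "u1": [date(2024, 1, 1) + timedelta(days=d) for d in range(0, 120, 3)],
    "u2": [date(2024, 1, 1), date(2024, 1, 2)],
}
outcome = {  # exercise sessions per week at day 0 vs day 90 (assumed metric)
    "u1": (1, 4),
    "u2": (1, 1),
}

def still_active_after(user, days):
    # Any activity on or after the cutoff counts as retained.
    cutoff = first_use[user] + timedelta(days=days)
    return any(d >= cutoff for d in activity[user])

for user in first_use:
    retained_30 = still_active_after(user, 30)
    improved_90 = outcome[user][1] > outcome[user][0]
    print(f"{user}: retained at 30 days={retained_30}, outcome improved by 90 days={improved_90}")
```

The zero-day criterion usually comes from direct user testing rather than logs, which is why it doesn't appear in the sketch.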