You now know how to zoom out and think about the beginning and the end of your relationship with the customer. We've looked at a good persona and job-to-be-done hypothesis and its relationship to other areas. Now we're going to look at how you dial in the analytics so that, as you're iterating and creating these hypotheses, and even before you go and start building software and instrumenting observation into it, you have a foundation for the big picture that's specific and measurable. You start thinking about what degrees of freedom you might want to test to see how to iterate to good value and good interaction with the customer. I think the first place to start is with your job to be done. If we look at it here, we've got this job to be done, and the first thing we want is a metric that speaks to this activity and is equally applicable to both the alternative and our value proposition. Since we're talking about this job to be done, this problem scenario that we're going to address, I think it might be something like: how many replacement parts does this technician order? Whether he or she is ordering them the old way, calling the office and sorting it all out by phone, or using our hypothetical awesome new tool, ultimately the core thing here is: how many replacement parts are they ordering? Then, as we unpack our proposition, we might want to look at how many parts they order the old way versus the new way. This is a way of having the big picture but also the specifics in the foreground. If you're using a Google Doc or something like that to look at your personas and jobs to be done, this is something you might want to sketch out in that general area. There are templates to do that in the course resources if that's the way you want to go. I've added a series of steps here, and they map to the funnels that we've been looking at generally. Your particular focal points may be different, and that's okay.
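As a rough sketch of that top-level metric, here's one way you might count replacement parts per technician split by channel, so the same number covers both the old phone process and the new tool. The names, data, and log format below are hypothetical, purely for illustration:

```python
from collections import Counter

# Hypothetical order log: one (technician, channel) entry per replacement
# part ordered. "phone" is the existing alternative; "tool" is the new one.
orders = [
    ("trent", "phone"), ("trent", "tool"), ("trent", "tool"),
    ("dana", "phone"), ("dana", "phone"),
]

def parts_ordered(orders, technician):
    """Total parts ordered by a technician, plus the split by channel."""
    by_channel = Counter(ch for tech, ch in orders if tech == technician)
    return sum(by_channel.values()), dict(by_channel)

total, split = parts_ordered(orders, "trent")
```

The point is that the metric itself (parts ordered) is channel-neutral; the channel split is what lets you compare the old way versus the new way later.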
There's nothing uniquely right or perfect or super-duper special about these particular words or framings, but you do want to have some key focal points, some events in the customer journey that you focus on, and you have to unpack how customers go from not knowing your product exists to being happy, habitual users. What about the headings here: what does this step mean? What's the interval? What are the metrics? What are the independent variables that we should test? Those, I think, are relatively durable. You may reframe them, but those are the things I think you should identify in early customer analytics like this, where you think about the journey.

What's an IV? If you're an analyst, you're probably pretty familiar with this. IV stands for independent variable, and it relates to a dependent variable, which is the thing we measure that is the consequence of the independent variable. So how much I eat is an independent variable, and how much I weigh is the dependent variable. How much I eat and how much I exercise are two independent variables that both contribute to the dependent variable of how much I weigh. We'll continue to look at these; if you're an analyst, or certainly a data scientist, I'm sure you are intimately familiar with them. This is a really obvious but important tool to think about as you're instrumenting analytics into your agile cadences. If you're a product manager or product designer, this is a great concept to internalize, because as we get into the details, we'll see that how you frame these variables, and how you marry them to your overall picture of the customer experience, is going to make it a lot easier to do good analytics. The framing of these independent and dependent variables turns out to be very important to the actual construction and use of good, strong analytics.

All right. So, all that said, let's get into the details here. We have this step of acquisition. In this case, that means: how do we ask or require a given technician to use the tool?
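To make the IV/DV idea concrete, here's a toy model in the spirit of the eating-and-exercise example. The coefficients and baseline are invented purely for illustration, not a real nutrition model:

```python
def predicted_weight_change(calories_per_day, exercise_hours_per_week):
    """Toy model: two independent variables (eating, exercise) driving one
    dependent variable (weight change, in arbitrary units per week).
    All coefficients are made up for illustration."""
    baseline = 2000  # assumed maintenance calories
    return 0.01 * (calories_per_day - baseline) - 0.5 * exercise_hours_per_week
```

The independent variables are the inputs you manipulate; the dependent variable is the output you measure. In the product context, the acquisition recipe or reminder policy plays the role of the IV, and the funnel metric plays the role of the DV.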
So this is an internal software project, and therefore we have the prerogative to mandate or request that people use this tool, and we might test both of those. Certainly, before we mandate that everybody use this tool, we're hopefully going to do a little bit of testing about how it works out for certain cohorts, certain subsets of the technicians. We think that between getting one of these notices and seeing some activity, it's going to take five days for them to find a moment to go and act on this thing. What are the metrics? Well, we might want to test a few different ways of making people aware of the tool and obliging them, not obliging them, or somewhat obliging them to use it. So for example, we might want to sit down and deliver one-on-ones, and then mandate or not mandate the use of the tool. We may want to hold classes with them, or just little workshops, and do the same thing. Or in some cases, we might want to just have a certain manager mandate use of the tool and see what happens. Those are all different acquisition recipes that we can consider to start bringing people in to test the effectiveness of this tool we're thinking of building. So basically, the independent variables we're going to test are various versions of pulling versus pushing the use of the tool. In this formulation, I don't think it's necessarily important that you identify all the specifics yet, and I don't think it's even important that you identify all the specific metrics. Sometimes, getting into that level of detail too early can obfuscate what you're actually trying to get out of the framing.

Onboarding is the step of: how do we get somebody to get some initial reward from our proposition for this job to be done? In our case, we think that's just signing up, because they're going to have to sign up to use this tool on our system, and then ordering one part, which we think will happen over one day. What are the metrics we want to look at?
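One way to sketch those acquisition recipes as testable cohorts, using the five-day interval from the acquisition step above. The recipe names, dates, and log format are all hypothetical:

```python
from datetime import date

# Hypothetical acquisition log: which recipe each technician got, when they
# were notified, and when they first did anything in the tool (None = never).
technicians = [
    {"recipe": "one_on_one", "noticed": date(2024, 3, 1), "first_activity": date(2024, 3, 4)},
    {"recipe": "workshop",   "noticed": date(2024, 3, 1), "first_activity": date(2024, 3, 9)},
    {"recipe": "mandate",    "noticed": date(2024, 3, 1), "first_activity": None},
]

def activated_within(techs, recipe, days=5):
    """Share of a recipe's cohort that acted on the notice within `days` days."""
    cohort = [t for t in techs if t["recipe"] == recipe]
    hits = [t for t in cohort
            if t["first_activity"] and (t["first_activity"] - t["noticed"]).days <= days]
    return len(hits) / len(cohort) if cohort else 0.0
```

Here the recipe is the independent variable and the activation rate is the dependent variable, which keeps the push-versus-pull comparison honest across cohorts.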
How many sign-ups are there with greater than zero orders? In other words, how many times did somebody sign up and order something, versus sign up and then nothing happened? If we see a lot of those nothing-happened cases, we probably want to look at them. What do they have in common? Why are they happening? If we're using nice small cohorts, as we probably should in the beginning, we can go talk to those technicians and find out what happened. And what independent variables should we test? Well, we're going to organize some cohorts over here where we onboard them in different ways, and we probably just want to observe their behavior in the onboarding step relative to whether we mandated or didn't mandate the tool and how we helped them understand how to use it.

Engagement is a step I added here. We didn't have this explicit term in any of the funnels we looked at, although it's pretty similar to the 30-day metric we looked at for the enterprise software projects. What I mean by this, in this case, is: how do we know if somebody's making a habit out of using this tool or not? How do they go from trying it out once to either standardizing on it or not? We think that will happen within 30 days, and really what we're saying here is: when they needed to order a part, were they using the tool or not? That's what we're looking at when we say engagement. The metric we're going to look at here is relatively specific, which I think is okay in this case. We're looking at: how many parts does this technician order in total, which we'll assume we have in one of our IT systems, versus how many parts do they order online through our tool? We think that if that's over 80 percent, it essentially means they've standardized on using our tool. What we may want to test here is the addition of reminders versus no reminders.
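A minimal sketch of the onboarding and engagement checks just described: the share of sign-ups with at least one order, and the 80-percent rule for deciding whether a technician has standardized on the tool. The function names and data shapes are assumptions for illustration:

```python
def onboarding_conversion(order_counts):
    """order_counts: one order count per signed-up technician.
    Returns the share of sign-ups with at least one order."""
    if not order_counts:
        return 0.0
    return sum(1 for n in order_counts if n > 0) / len(order_counts)

def is_engaged(parts_total, parts_via_tool, threshold=0.8):
    """Engagement rule from the text: a technician has standardized on the
    tool if at least 80% of all parts they ordered went through it."""
    return parts_total > 0 and parts_via_tool / parts_total >= threshold
```

The sign-ups with zero orders are exactly the cases the text says to go investigate in person, so it helps that this framing surfaces them directly.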
So maybe there are some people who are wobbling and have forgotten about the tool. You've probably signed up for a SaaS product or internet service of some kind and gotten reminders like, "Hey, we haven't seen you in a while, come use our thing." Whether that's effective or not effective is something we might want to test here.

Retention, again, is a catch-all for everything we want to have happen with the customer or user relationship afterwards, after this engagement step in this case. What we're really looking at is: are we reducing the overhead to complete a job? Are we increasing customer satisfaction? Retention has a lot to do with outcomes. For example, to continue on this whole fitness metaphor, let's say I start using a Fitbit. Well, onboarding is: do I set up my account and activate it? Engagement is: am I putting it on every day for n days, let's say 30 days? But then if at 90 days it's not helping me lower my blood pressure, or lose weight, or whatever it is I'm trying to have happen with my health, then I may stop using it. That's leakage in our funnel that we want to plug before we worry about trying to jam more people into the funnel, whether it's enterprise software, an internal thing, or a product we're selling to the public. So we think that 90 days is enough time to observe whether their performance actually increases, because they may need a little time to get used to this tool, and we probably want a few observations to be able to pare away variations in their work patterns. Then what we're going to look at is hours per job: on average, how many hours does it take them to complete a given repair versus the baseline? And turnaround time per job, meaning from when the job got initiated to when it got done, does that interval decrease?
Because hopefully we're using this tool to make them more efficient and make it easier for them to figure out how to close this repair out with the customer. Then finally, customer satisfaction per job, which we might, for example, measure by emailing customers after a job is finished: "Hey, on a scale of 1-10, how good was this for you?" This is one example of where it's probably okay, I would say, to use a survey. Then, what independent variables should we test? We've probably got enough in these prior steps, at least that's the view of the team, we'll say, for right now, and we just want to observe, for the different cohorts that we've onboarded in different ways, and maybe done some different things with, which ones get good outcomes, durable outcomes, and are retained customers or users versus not. All right. So that was a lot of stuff, but this is something I think you might find really useful as you go from observations about the customer journey to specific things you want to think about instrumenting and analyzing as you're prioritizing the work for your team.
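Pulling the retention metrics together, here's one way to sketch the 90-day comparison against pre-tool baselines: hours per job, turnaround time per job, and customer satisfaction. The field names, baseline values, and sample data are hypothetical:

```python
from statistics import mean

def retention_outcomes(jobs, baseline_hours, baseline_turnaround_days):
    """jobs: dicts with 'hours', 'turnaround_days', and 'csat' (1-10 survey
    score) for each job in the 90-day observation window. Returns each
    average's delta against the pre-tool baseline (negative = improvement),
    plus the average satisfaction score."""
    return {
        "hours_vs_baseline": mean(j["hours"] for j in jobs) - baseline_hours,
        "turnaround_vs_baseline": (mean(j["turnaround_days"] for j in jobs)
                                   - baseline_turnaround_days),
        "avg_csat": mean(j["csat"] for j in jobs),
    }

# Two observed jobs versus an assumed baseline of 5 hours and 4 days per job.
jobs = [
    {"hours": 3, "turnaround_days": 2, "csat": 9},
    {"hours": 5, "turnaround_days": 4, "csat": 7},
]
outcomes = retention_outcomes(jobs, baseline_hours=5.0, baseline_turnaround_days=4.0)
```

Averaging over several jobs is what lets you pare away ordinary variation in work patterns before concluding the tool did or didn't help.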