A team that's running a great UX analytics program is usually a team that's made a habit of thinking about user experience all the time. For example, their user stories aren't static assets; they're constantly part of the conversation, they're getting edited all the time, and the team is actively engaged with them. Now let's take a look at what that looks like across the larger process, from customer analytics to demand analytics to UX analytics.

A team that's running a strong program will have nice, clear hypotheses about who their users are, captured in personas. When they feel like they don't know who they're talking to, building for, and watching with their analytics, they make time: they run a design sprint, they go out, and they talk to those people. If you don't do that periodically, your material is going to get stale. There's no such thing as a permanent persona, and likewise with the jobs to be done, scenarios, and alternatives. Habits change, and the alternatives will change faster than properly factored jobs to be done, but the point is that it's important to keep this material fresh. You'll have new people coming onto the team, and it's important to get them out there and make sure they have exposure to the customer. Without that, it's really hard to do good work here and to have empathy and interest in your user.

Finally, we have our value propositions. Everybody's got an idea about what's going to be valuable, but if they don't link it to this preceding material, and they don't have a testable idea, a demand hypothesis that they've tested, then they're very likely to over-invest in software prematurely and end up with results that add up to no value for the user, no money for the company, and a demoralized team. You don't want that.

Once we validate all this, then we're looking at our UX questions, and a good set of user stories really is the centerpiece of this work. Strong user stories are well written, they have all three clauses, they're testable, and they're small. If you joined me for Course 1, Agile Meets Design Thinking, you'll remember we talked a lot about small user stories. Since we're in an analytics course, I'll use a ratio: the ratio of stories to lines of code, and we want this ratio to be high (there's a small sketch of this computation just below). The reason is that if you have big stories, you're creating more space for your team, your designers, and your developers to build software with no narrative, no testability, no usability hypothesis of the kind created in the third clause of a user story. That's bad, because you want them asking questions. You want them thinking about the why of what they're doing rather than just cranking out a lot of output. And it's not that only a bad developer or a bad designer does that; we're all like that when we're acting in those capacities. I do it too. You have to dial in and attend to these details, and without good experience design practice around this stuff, it's really hard to have everybody pivot at the right times between the little picture and the big picture.

So a team that's running a strong analytics and design program is going to have great, small user stories, and even better than that, they're going to have analytics paired with those stories. Not once they go into code, not once they release code and think about what they want to look at after the fact, but as they're concepting their work, as they're designing things. So, for example, we have this story about Trent the technician.
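Before we dig into Trent's story, here's that stories-to-code ratio as a minimal sketch. The numbers are made up for illustration; in practice you'd pull story counts from your tracker and line counts from version control.

```python
# Made-up numbers for one iteration: stories completed and lines of
# code added over the same period.
stories_completed = 24
lines_of_code_added = 6000

# Stories per 1,000 lines of code: higher suggests small, testable
# stories; lower suggests big stories that leave room to crank out
# code without a narrative or a usability hypothesis behind it.
stories_per_kloc = stories_completed / (lines_of_code_added / 1000)
print(f"{stories_per_kloc:.1f} stories per 1,000 lines of code")
```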
Here's the epic, and these are the first two child stories. What I'd like to do is pose the dumb questions with the analytics before I worry about specific metrics. For example, here it might be something like: of these three search types, how often is each one used per transaction relative to the alternatives? Maybe one of these isn't being used very much and we want to get rid of it, or we want to change it because we think it should be important but it isn't. Then, probably more pertinent, more immediately important: how often does this search lead to a part order? In other words, do users frequently try one type of search but end up on another type, the one that actually moves them forward to an order? If so, that's something we ought to look at.

You might think, "Oh, why not just go straight to the metrics, take it to the hoop?" It's a reasonable question. What I find is that even people who are great analysts or great developers will, in these situations, just start dumping in a long list of metrics, like it's a wheelbarrow full of apples and more is more. I think there are reasons for that: we all like to just go to the solution and start building stuff, and the idea, maybe, is that with 20 metrics, how can we not get a good answer? It's really easy to not get a good answer. Good metrics are a lot about quality and focus: getting the data that you want to have and making sure it's instrumented into your code, rather than just reacting to what happens to be available after the fact. That's a really frequent failure mode for agile analytics, Lean Analytics, using analytics in a purposeful way. So I think it's good to pair these questions with your user stories as you go along.

Then here are the metrics we might want in order to answer those questions: searches of this type relative to other search types; the sequence of this search relative to other search types, meaning, does one search type tend to get used after the others; and then how these convert to orders from a given type of search. So are we seeing people try to search one way, then go search another way, and that second one is what's leading to conversions? I would just take a moment and think about how this might work, or even try it out for one of the user stories you're working on.
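To show how those three metrics might come out of instrumented events, here's a minimal sketch in Python. The event names and the transaction log are hypothetical; the point is that each metric is a small, specific computation over events you deliberately instrumented.

```python
from collections import Counter

# Hypothetical event stream: each transaction is the ordered list of
# search events a tech fired, possibly ending in a part order.
SEARCH_TYPES = {"keyword_search", "model_search", "schematic_search"}
transactions = [
    ["keyword_search", "schematic_search", "order"],
    ["keyword_search", "order"],
    ["model_search", "keyword_search"],
    ["schematic_search", "order"],
]

# Metric 1: searches of each type relative to the other types.
usage = Counter(e for t in transactions for e in t if e in SEARCH_TYPES)
total_searches = sum(usage.values())
for search_type, count in usage.items():
    print(f"{search_type}: {count / total_searches:.0%} of all searches")

# Metric 2: sequence -- how often each type is used right after another
# type, i.e., as a fallback when the first search didn't move the tech
# forward.
followups = Counter(
    t[i] for t in transactions for i in range(1, len(t))
    if t[i] in SEARCH_TYPES and t[i - 1] in SEARCH_TYPES
)
print(f"used as a follow-up search: {dict(followups)}")

# Metric 3: conversion -- credit the last search type tried before an
# order was placed.
conversions = Counter()
for t in transactions:
    if "order" in t:
        prior = [e for e in t[: t.index("order")] if e in SEARCH_TYPES]
        if prior:
            conversions[prior[-1]] += 1
for search_type in usage:
    print(f"{search_type}: {conversions[search_type]} orders")
```

Notice that the conversion metric credits the search that actually preceded the order, which is exactly the "do they try one search but end up converting from another" question.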
So let's go to the end of this epic and look at another example. We have this last child story: I want to see the pricing and availability of parts so I can decide on my next steps and get agreement from the customer. Well, how well do the techs that do this perform relative to the others? Maybe the team is going to roll out a version of this where the tech doesn't actually place the order yet; once they can place the order with the software, we'd probably instead be asking how many of these lead to actually placing the order. So what we'd want to look at is, in the future, conversion to order, but also customer satisfaction per job in a cohort of techs that are using this versus the baseline. That's probably more of a summary metric, and it also gets at whether this is making the experience better for the customers who are getting their HVAC fixed. The last metric is billable hours per week for techs in this cohort versus the baseline. We would expect that to go up in the cohort of techs that are actually using this, because they're spending less time on the overhead of trying to get part orders done, or to get pricing and availability, and they're able to spend more time just getting the job done. I'll sketch that cohort-versus-baseline comparison in code at the end.

So those are some ideas about how to do this. I would encourage you to try pairing your user stories with general questions about the analytics, the questions you want to answer, and then make a list of the metrics that would answer them. Another pro tip: don't worry about whether those specific metrics are available or not. Bring them into your conversation with your analyst or your coder, whoever is going to get these metrics, and you may have to adjust them, or you may have to interpolate between two things to get to the right answer. But start from what you want to have, and then be ready to iterate your way to the answers you want. In my experience, it's always a mistake to just react to the analytics that happen to be out there. You should always do the best with what you have, but don't be afraid to try to make your analytics better; they're really important. In a good UX analytics program, you're going to be constantly iterating on them and improving them.
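And here's that billable-hours comparison as a minimal sketch. The numbers, and the idea that you can pull weekly billable hours per tech from your job-tracking system, are assumptions for illustration; with real data you'd also want enough techs in each group, and probably a significance test, before trusting the difference.

```python
# Hypothetical weekly billable hours per tech: the cohort using the new
# pricing/availability feature versus a baseline of techs who aren't.
cohort_hours = [31.5, 29.0, 33.2, 30.8]      # made-up values
baseline_hours = [27.1, 28.4, 26.0, 27.9]    # made-up values

def mean(values):
    return sum(values) / len(values)

lift = mean(cohort_hours) - mean(baseline_hours)
print(f"cohort:   {mean(cohort_hours):.1f} billable hours/week")
print(f"baseline: {mean(baseline_hours):.1f} billable hours/week")
print(f"lift:     {lift:+.1f} hours/week")
```

If the lift is positive and holds up, that's evidence the story is delivering the value you hypothesized; if it's flat, that's your cue to revisit the demand hypothesis rather than build more.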