If doing things with waterfall, where we're working in big sequential batches, isn't the optimal way to do things in digital product management, and we want to use these agile, iterative, adaptive methods instead, how does that specifically look and work? It's a good question, given how general agile is. Well, the way I like to think about it, measure it, and work with teams on it is this product pipeline. Explicitly or implicitly, most teams go from a certain set of priorities to released product that's going out to the users. In between those steps, they have a design phase where they're going from idea to design. I generally call this continuous design, since the idea is that we're working iteratively: it isn't waterfall's big design-up-front exercise. We're asking whether we know enough to build a certain thing, and if we don't, we're saying this is something we've got to invest time in. Then we go and we code stuff. This is application development, where people pay a lot of attention, and indeed, code has to get written for something cool to happen. We test, we deploy. It's really important that this is automated and standardized so that the team is able to release a lot and that release and test process isn't sapping everyone's energy and distracting them. Then we release things, and hopefully we've instrumented observation, we have an experimental design, and we're getting the observations we need to make inferences. That way, from Week 1 to Week 2, we're not just saying, okay, well, we built what was on the to-do list in Week 1, and now we're just going to move on to the stuff that's on the to-do list in Week 2. We're saying, here's how things worked out, here's whether we're done or not with these things. Now we're going to figure out, based on the evidence, based on user behavior, what we really should focus on to maximize value for the user and for the company.

A good way of looking at this, of asking whether we're getting better or worse at these activities, is through the metrics for each stage. Generally, with continuous design, I would say things are going well, or getting better, if the ratio of features you release that are successful, that get the engagement you want from users, to all the features you're releasing is improving. With application development, this does have a lot to do with velocity, if you've heard that term before: how much release content are we able to create during a given sprint or iteration, a week, let's say? Then finally, how fast can we release? Amazon releases every 11.6 seconds. When I was first in the software business almost 25 years ago, releasing once a quarter was pretty good; now that would be terribly, incredibly slow. How do you get to releasing faster and making that release process less stressful, with less overhead for your team? It is possible, and there's an excellent body of work in this area of continuous delivery. We release, we're observing, and then we're going back through to these product priorities. The metrics for hypothesis testing are basically the same as for continuous design, because we're trying to learn whether we're releasing things that matter to the user or not. Now, these individual metrics are very good, and I think they're essential, because it's not so much important where you're at exactly, but you do want to know whether things are getting better or getting worse in a given area.
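To make that concrete, here is a minimal sketch of how a team might track these stage-level metrics week over week. The field names and numbers are illustrative assumptions for this example, not a prescribed schema or tool.

```python
# Illustrative sketch: tracking the pipeline metrics week over week.
# All field names and values here are assumptions for this example.

from dataclasses import dataclass

@dataclass
class WeekMetrics:
    features_released: int       # release content that went out to users
    features_successful: int     # features that hit their success metric
    story_points: int            # velocity proxy for application development
    releases: int                # how many times we deployed
    manual_release_hours: float  # time spent on manual test/deploy work
    total_hours: float           # total team hours for the week

    @property
    def design_success_ratio(self) -> float:
        """Continuous design / hypothesis testing: successful features over all features."""
        return self.features_successful / self.features_released if self.features_released else 0.0

    @property
    def release_overhead(self) -> float:
        """Continuous delivery: share of team time lost to manual release work."""
        return self.manual_release_hours / self.total_hours if self.total_hours else 0.0

week1 = WeekMetrics(10, 3, 100, 4, 8.0, 200.0)
week2 = WeekMetrics(12, 5, 110, 9, 4.0, 200.0)

# The point is the trend from week to week, not the absolute numbers.
print(week1.design_success_ratio, week2.design_success_ratio)  # 0.30 -> ~0.42
print(week1.release_overhead, week2.release_overhead)          # 0.04 -> 0.02
```

The absolute values matter less than whether each ratio is moving in the right direction from one week to the next.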
It's also useful to have a composite metric that looks at where you're investing across these different areas as a team, as a product. For that, I like this equation. It looks like something a business school professor would make up because equations feel credible, and that's a fair observation. But I really have found that there are very practical ways to calculate this, and that, while it's notional and heuristic, a directional indicator, it's pretty helpful for the team to think about, all in all, how are we doing and where should we focus to improve next week? The basic idea is to ask, how much does it cost us to release a successful feature, a feature that does at least as well as the metric we defined, the line in the sand for saying, hey, this feature was a success. In other words, how much does it cost us to release a feature, which is the numerator here, and what portion of those features are successful? The way we calculate this is that we have our team cost, little c for money, which is expenses directly related to the team itself: salaries, loading, things like that. Little g is gear, that's the idea behind the name anyway: assets that you're going to depreciate, or in today's world more likely SaaS fees, AWS fees, things like that, that are directly related to the team's work. Then we have release content, that's f_e, and it's discounted by the amount of release overhead the team incurs. For example, say in one week r_f was 0: they didn't have any manual testing, everything was automated, and they didn't have any manual deploying to do, it was either all automated or for some reason it wasn't relevant, and they released 100 story points' worth of features. Then in a different week, let's say r_f was 20 percent; well, then we only count 80. The idea here is just to capture the amount of manual work that has to be done by the team in the continuous delivery section of our pipeline and think about that explicitly in the context of this question of F. Then finally, and the whole equation is of course very sensitive to this, we have s_d. The idea here is, what portion of our release content is successful? You can imagine that if this is 1 versus 0.5 versus 0.25, you're going to get a doubling or quadrupling of the cost to release a successful feature. A lot of money is actually at stake here in practice for most teams. Basically, this is a good way to think about how all those performance metrics across the pipeline, whether our features are successful, whether we're releasing a lot, whether we're releasing and testing well, gel together in an economic sense. It may seem a little esoteric, and certainly, if you don't like equations and calculations, they're not critical to understanding the material in the course. It's just another lens on things that we'll periodically reference as a way to very specifically consider whether a given thing is important or not and how to assess that, because the only thing that reliably works for teams in digital is adaptively seeing what happens and making good, crisp, purposeful decisions on that basis.
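Putting those terms together as described, the composite works out to something like F = (c + g) / (f_e × (1 − r_f) × s_d), the cost of a successful feature. Here's a small sketch of that calculation; the formula is my reading of the terms above, and the dollar figures are made up purely to show how sensitive F is to s_d.

```python
# Sketch of the cost-per-successful-feature composite, assuming
# F = (c + g) / (f_e * (1 - r_f) * s_d) based on the terms described above.
# All input values are illustrative, not real team data.

def cost_per_successful_feature(c, g, f_e, r_f, s_d):
    """
    c   : team cost for the period (salaries, loading, etc.)
    g   : gear (depreciated assets, SaaS/AWS fees tied to the team's work)
    f_e : release content for the period (story points, say)
    r_f : release overhead, the fraction of effort lost to manual test/deploy work
    s_d : portion of released features that hit their success metric
    """
    effective_release_content = f_e * (1 - r_f)  # e.g., 100 points at r_f=0.2 -> 80
    return (c + g) / (effective_release_content * s_d)

c, g, f_e, r_f = 50_000, 5_000, 100, 0.2

# Halving s_d doubles the cost of a successful feature; quartering it quadruples it.
for s_d in (1.0, 0.5, 0.25):
    print(s_d, round(cost_per_successful_feature(c, g, f_e, r_f, s_d), 2))
# 1.0  -> 687.5
# 0.5  -> 1375.0
# 0.25 -> 2750.0
```

The exact values don't matter; the point is that improving the success ratio s_d moves the economics at least as much as raw output does.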
Just creating a lot of output doesn't work, and just doing whatever you heard Amazon or Google is doing doesn't work either, because there are many teams there, they're all doing slightly different things, and your situation will be different. The only thing that works is adaptively looking at what's happening and then figuring out what makes sense for a given team. In terms of the inputs you'd need to calculate this, most of them are things teams normally produce anyway: most teams have some notional measurement of f_e, which is how much release content they're generating. More and more teams are looking at s_d, their innovation accounting, the success ratio of how many of these features worked. From there, it's pretty easy to calculate F itself. There is a spreadsheet calculator in the course resources that'll help you map the things that you may be collecting, or can easily collect, to get a value for that. Here are the other terms laid out against the different metrics that we talked about earlier. Those are some ideas about what it means to be agile and how we figure out if things are getting better or worse. The most important thing is to baseline where you are; the absolute value doesn't really matter a whole lot. Then we look at whether things are getting better or worse, whether we're investing in the right places, and whether we're doing the right things as a team to make that happen. That's how teams succeed in digital.