In this lecture, we're going to talk a little bit more about METS and how you can use it for time budgeting. Most of the time when we're testing an application, we're not testing it from scratch. We're testing it because we've added some new feature or fixed a bug in a previously existing feature. And what we find when we add a feature is that no feature is an island: each feature we add to a website actually affects other features within that website. As a hypothetical example, suppose Coursera is adding degree programs, so you can earn a Master's degree from Coursera instead of just taking individual courses (and this is something they actually offer). By adding degree programs, we have to change how search works, because now a user should be able to search for degree programs as well as everything else in the system. We also have to change how user management works, because now we have to track, for each user, which degree programs they're part of and which degrees they've earned. So by adding a degree program, we have to change how several other aspects of the website work. What we want to think about is how much of our testing budget to spend on the things directly applicable to the new feature, and how much on its proximate effects, that is, on the parts of the rest of the website we also need to test in relation to this new feature.

We're going to split the testing effort into three buckets. First, there's direct testing, which is testing the feature that was changed or fixed. In our case, adding a degree program creates a bunch of test aspects that are directly related to it: defining it, modifying it, determining whether or not a course is part of a degree program. So we need to direct a lot of our testing budget toward the feature itself. But we also have to direct testing effort toward proximate features. We have a search feature on our Coursera page, so now we have to look at how search works in relation to the degree-program feature we've added. And finally, we may not completely understand how this new feature will interact with the entirety of the website, so there's also a regression bucket: tests we want to run every time we do a release. Whatever we do with degree programs, we don't want to break the ability of students to sign up for a course, or we'll be running out of money very quickly. That's a capability of the website that's always critical, so we want to test it every time we make a change.

METS provides a way of splitting up your time budget, and it's something you adjust over time. You could start out with the estimates METS provides, where you spend 25% of your testing effort on direct testing, 50% on related testing, and 25% on regression testing. But the idea is that, based on your experience, you adjust these numbers to maximize the effectiveness of your testing process. Then you extend your METS grids so that each entry carries a description, for the current feature, of whether that aspect of the system is a direct testing aspect, a related testing aspect, or something we always want to do as part of regression. And then you can assign a time budget to each of those testing activities, as sketched below.
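To make this concrete, here is a minimal sketch of what a budgeted METS grid might look like. The row format, the aspects listed, and the even split of a bucket's budget across its rows are all assumptions made for illustration; METS itself is a checklist technique and doesn't prescribe any particular tooling or code.

```python
from dataclasses import dataclass

# Starting split suggested in the lecture (25/50/25); adjust from experience.
BUCKET_SHARES = {"direct": 0.25, "related": 0.50, "regression": 0.25}

@dataclass
class GridRow:
    aspect: str       # test aspect from the METS grid
    bucket: str       # "direct", "related", or "regression"
    criticality: int  # 1 = most critical

# Hypothetical rows for the degree-program release.
grid = [
    GridRow("Define/modify a degree program", "direct", 1),
    GridRow("Course membership in a degree program", "direct", 2),
    GridRow("Search returns degree programs", "related", 1),
    GridRow("User profile tracks earned degrees", "related", 2),
    GridRow("Student can sign up for a course", "regression", 1),
]

def budget_per_row(total_hours: float) -> dict:
    """Split the total budget across buckets by BUCKET_SHARES,
    then evenly across the rows within each bucket."""
    counts = {b: sum(1 for r in grid if r.bucket == b) for b in BUCKET_SHARES}
    return {
        r.aspect: total_hours * BUCKET_SHARES[r.bucket] / counts[r.bucket]
        for r in grid
    }

for aspect, hours in budget_per_row(40.0).items():
    print(f"{hours:4.1f}h  {aspect}")
```

With a 40-hour budget, each direct row here gets 5 hours, each related row 10 hours, and the single regression row 10 hours; changing the shares or adding rows reallocates automatically.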
From those budgets, you can get an idea of how much time you have to test the system for this new release. The idea is that you walk through the features in order of criticality and determine which things you have time to test, so that the most critical things get tested first (the sketch at the end of this section shows one way to do that triage).

To recap, METS is a technique for organizing your testing process. It's very lightweight; there's not too much to it. The key idea is that by using checklists and grids, we make sure that none of the critical testing obligations are missed. It's not a testing tool, and it's not automation; the point is really just to pin down what the test targets are. And the way METS is structured, the expectation is that some of the testing is substantially manual: when you're looking at the physical layout of a page, that's something that's still very difficult to automate. But it covers both functionality and presentation, and it gives you a handy checklist. The physical testing has to do with presentation and low-level user interactions, while the functional testing has to do with the features the business actually cares about. And once you've built these grids, there's a simple approach to test-time budgeting that lets you do triage.
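Here is a small sketch of that triage pass, under the same assumptions as before: walk the grid rows in criticality order and keep whatever fits in the remaining time. The tuple format and the per-row hour estimates are hypothetical, not part of METS.

```python
def triage(rows, hours_available):
    """rows: list of (aspect, criticality, estimated_hours).
    Returns the aspects we have time to test, most critical first."""
    planned, remaining = [], hours_available
    for aspect, _, est in sorted(rows, key=lambda r: r[1]):
        if est <= remaining:
            planned.append(aspect)
            remaining -= est
    return planned

rows = [
    ("Student can sign up for a course", 1, 4.0),
    ("Define/modify a degree program", 1, 6.0),
    ("Search returns degree programs", 2, 3.0),
    ("Page layout on degree pages", 3, 5.0),
]
print(triage(rows, hours_available=10.0))
# With only 10 hours, the two criticality-1 rows fill the budget
# and the lower-priority rows are deferred.
```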