Now that we have an understanding of what characteristics we care about for a reliable service, how exactly do we measure them? Unfortunately, we won't be pulling out a protractor and ruler for this exercise, but the same concept applies: if you want to know how long something is, you count centimeters or kilometers. So for services, let's dig a little deeper with an example.

Let's take Netflix. What are some characteristics it has when we consider the service working, or good enough? How about: once you select a title you want to watch, the time it takes to start playing should be fairly quick. Or, if you're already watching a movie, playback should be uninterrupted and free of issues.

So what metric could we use for the first point, the time from selecting a title to playback starting? We want that to be quick. For this, we can measure request latency, which is how long it takes for a request to return a response. We refer to this metric, and other metrics that measure the level of service provided, as Service Level Indicators, or SLIs. SLIs, like request latency, are quantitative measurements of the user experience.

What about the second one, that a movie you're watching should have no playback issues? There are a few SLIs you can use in addition to request latency. You could measure the error rate, which is the ratio of failed requests to the total number of requests. Or you could measure throughput, the amount of data transmitted per second.

Whichever one you choose, think about the tradeoffs between different ways of measuring a specific metric. Take latency as an example: you could measure it as the time to first byte at the application server, or as the time to playback as seen on the client.
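To make these ideas concrete, here is a minimal Python sketch of computing two of the SLIs just mentioned, error rate and a latency SLI, from a batch of request records. The request data and the 300 millisecond threshold are made up for illustration; real services would pull these numbers from their monitoring system.

```python
from dataclasses import dataclass

@dataclass
class Request:
    latency_ms: float  # time for the request to return a response
    ok: bool           # True if the request was served successfully

# Hypothetical sample of request logs, purely illustrative.
requests = [
    Request(120, True), Request(250, True), Request(900, True),
    Request(180, False), Request(310, True),
]

total = len(requests)

# Error rate: failed requests divided by total requests.
error_rate = sum(1 for r in requests if not r.ok) / total

# Latency SLI, expressed as a proportion: the fraction of requests
# served successfully within an assumed 300 ms threshold.
fast = sum(1 for r in requests if r.ok and r.latency_ms <= 300) / total

print(f"error rate: {error_rate:.0%}")            # 20%
print(f"served within 300 ms: {fast:.0%}")        # 40%
```

Note that both metrics come out as proportions of all requests, which makes them easy to compare against a target later.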
When you're choosing a way to measure your SLI, think through the pros and cons of each approach. For example, perhaps you're already exporting the data for your SLI to your monitoring system, which is a big plus, but it doesn't retain enough history. Or perhaps you've come up with a perfect measure of user experience, but actually implementing that measurement is far too complex. These tradeoffs are important to account for. Later in this course, we'll talk about some common ways to measure SLIs, but for now, just keep that in mind.

Finally, the last thing we want to mention about SLIs is that they are best expressed as the proportion of all valid events that were good: for example, the proportion of requests served successfully, or the proportion of requests served within x milliseconds.

Which brings us to our next topic: how do you set SLOs for your SLIs? An SLO is simply a target that you get to pick. Once you've decided on that target, you measure the performance of the SLI against it over a period of time, such as the last 28 days or the last quarter. Given a target SLO, our SLI immediately tells us whether the service was good or bad over that window. For example, we could say our target SLO is that 99% of requests will be served within 300 milliseconds over the last 4 weeks. Then, when we measure our SLI, we might find that only 95% of requests were served within 300 milliseconds in the past 4 weeks, thereby missing our target SLO.

We'll talk more about what makes up a good SLI and how to set target SLOs in the following lessons, but hopefully you now have a better understanding of using metrics to measure your service's reliability.
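The SLO check described above boils down to comparing a measured proportion against a chosen target. Here is a small Python sketch of that comparison; the function name and the request counts are hypothetical, chosen to mirror the 99% target and 95% measured example:

```python
def slo_met(good: int, valid: int, target: float) -> bool:
    """Return True if the measured SLI (good/valid) meets or exceeds the SLO target."""
    return (good / valid) >= target

# Target SLO: 99% of requests served within 300 ms over the window.
target = 0.99

# Hypothetical measurement over the past 4 weeks: only 95% were fast
# enough, so the target SLO is missed.
print(slo_met(good=950_000, valid=1_000_000, target=target))  # False

# Had 99.5% of requests been fast enough, the SLO would have been met.
print(slo_met(good=995_000, valid=1_000_000, target=target))  # True
```

Because the SLI is a proportion of good events over valid events, the same one-line comparison works for any SLI expressed that way, whether it measures latency, errors, or throughput.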