There are a number of items to review on this slide. Two of them are related and important for you to know. The first is the concept of an error budget. Some companies arbitrarily set a goal of 100 percent, like 100 percent uptime, but that isn't realistic. We exist in a quantum mechanical universe where random events happen, so 100 percent isn't really physically possible most of the time. At Google, we understand that some amount of time will be spent in error; what we do is identify and manage that time. So, if you have 99 percent uptime, then you have an error budget of one percent. If you find that you haven't used all of that one percent during the period, then it's time to do some things that are potentially disruptive.

That leads me to the second point, which is about backup and disaster recovery. We often say people don't care about backup, they care about restore. But how do you know the restore process works if you never try it? If you set up backups last year and have been consistently generating them ever since, but have never attempted a restore, what are the chances the restore process will actually work? Testing a restore is a good use of that extra error budget.

It's handy to have a testing checklist in mind to help you consider all the options. Consider the questions you're trying to answer with testing. Will the solution support the number of users? Will it handle peak traffic? Is latency acceptable? And so forth. The test environment should resemble production as closely as possible. If you can, test on a part of the production service during a low-use time, such as at night; that's called a dark launch. If you can't do that, test in pre-production using a synthetic workload that closely resembles the real workload. The results could be misleading if the workload is not designed well.

The pricing calculator is very handy for comparing different configurations and identifying cost-effective alternatives. It can also be used with BigQuery to estimate the cost of a query before you submit it. The basic advice for optimizing VM cost is to use the right-sized VM with the right resources, and to customize the machine type if necessary. You can use what-if scenarios to see how changing the design influences cost. For example, which is more cost-effective: four machines with eight CPUs, eight machines with four CPUs, or 32 machines with one CPU? The GCP console gives you price estimates in the interface when you're configuring an instance.

Preemptible VMs can be a great way to scale out. The important thing to remember is that the application has to be designed to handle the loss of any of the preemptible workers at any time. There are also committed use discounts and sustained use discounts. Sustained use is when you use the same kind of instances in the same location and an automatic discount kicks in. Committed use discounts are where you reserve resources and commit to using them in advance at a discounted rate. Discounting algorithms are subject to change, so please see the current discounting details in the online documentation.

Optimizing disk cost has to do with two factors: size and performance. If you over-allocate disk, you'll be paying for storage capacity that you're not using. It's a much better idea to offload data to Cloud Storage so you're paying for what you use, rather than holding disk capacity that might never be used. Disk performance can be complicated, but there are four factors to consider: the frequency of reads, the size of reads, the frequency of writes, and the size of writes.
Generally, smaller, more frequent reads and writes are less performant than larger, less frequent ones. Remember that read and write performance are usually not symmetric. Also, consider using a cache if the usage pattern involves a lot of repeated reads.

Ingress is free; networking costs are similar across GCP products but are billed per product, so you need to check the pricing documentation for each product for the details. For example, Cloud Storage has standard egress costs, but there are also separate charges for data migration and for Cloud Storage operations. Here's an example: egress between regions might be one cent per gigabyte, while egress to the internet, for the first terabyte per month to worldwide destinations, might be 12 cents per gigabyte. For the exam, I wouldn't expect you to need to know the exact cost of egress from zone to zone or from a zone to the internet. But I would anticipate needing to know which activities are charged for and, generally, which actions are more or less expensive than others. For example, in a disaster recovery scenario, you'd want to recognize that the improved isolation of storing data in a separate region will be more expensive than just storing the data in a different zone of the same region.
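To make the relative egress costs concrete, here's a minimal back-of-the-envelope sketch using the illustrative rates from the example above. The rates and the 500 GB monthly volume are placeholders, not current GCP prices; check the pricing documentation for real numbers.

```python
# Back-of-the-envelope egress cost comparison using the illustrative rates above.
# These rates are placeholders, not current GCP prices.

INTER_REGION_RATE_PER_GB = 0.01  # example rate: 1 cent/GB between regions
INTERNET_RATE_PER_GB = 0.12      # example rate: 12 cents/GB, first TB to worldwide destinations


def egress_cost(gigabytes: float, rate_per_gb: float) -> float:
    """Return the egress cost in dollars for a given volume and per-GB rate."""
    return gigabytes * rate_per_gb


monthly_gb = 500  # hypothetical: 500 GB transferred per month

print(f"Inter-region egress for {monthly_gb} GB: "
      f"${egress_cost(monthly_gb, INTER_REGION_RATE_PER_GB):.2f}")
print(f"Internet egress for {monthly_gb} GB: "
      f"${egress_cost(monthly_gb, INTERNET_RATE_PER_GB):.2f}")
```

The point isn't the exact numbers; it's recognizing which paths are charged and that, roughly, zone-to-zone is cheaper than region-to-region, which is cheaper than egress to the internet.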
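Going back to the error budget idea from the start of this section: a 99 percent availability target leaves a one percent error budget. Here's a quick sketch of what that means in minutes. The 99 percent SLO and the 30-day window are just assumptions for illustration.

```python
# Translate an availability SLO into an error budget in minutes.
# The 99% SLO and the 30-day window are illustrative assumptions.

slo = 0.99                     # availability target (99% uptime)
window_minutes = 30 * 24 * 60  # minutes in a 30-day period

error_budget = 1.0 - slo                 # fraction of time allowed to be "in error"
budget_minutes = error_budget * window_minutes

print(f"Error budget: {error_budget:.0%} of the period")
print(f"That's about {budget_minutes:.0f} minutes over 30 days")
# -> Error budget: 1% of the period
# -> That's about 432 minutes over 30 days
```

If monitoring shows you've burned well under those 432 minutes, that unused budget is exactly the room you have for deliberately disruptive work, like testing a restore.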
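I also mentioned estimating the cost of a BigQuery query before you submit it. One way to get the input for the pricing calculator is a dry run, which reports the bytes the query would scan. Here's a minimal sketch using the BigQuery Python client; the project ID, the query, and the per-TiB rate are assumptions for illustration, so take the current on-demand rate from the pricing documentation.

```python
# Estimate a BigQuery query's cost before running it, via a dry run.
# The project ID, SQL, and per-TiB rate are illustrative assumptions.
from google.cloud import bigquery

client = bigquery.Client(project="my-example-project")  # hypothetical project

job_config = bigquery.QueryJobConfig(dry_run=True, use_query_cache=False)
query_job = client.query(
    "SELECT name, COUNT(*) AS n "
    "FROM `bigquery-public-data.usa_names.usa_1910_2013` GROUP BY name",
    job_config=job_config,
)

bytes_scanned = query_job.total_bytes_processed
tib_scanned = bytes_scanned / 2**40
assumed_rate_per_tib = 6.25  # assumed on-demand $/TiB; check current pricing

print(f"Dry run estimate: {bytes_scanned} bytes (~{tib_scanned:.4f} TiB)")
print(f"Estimated cost at ${assumed_rate_per_tib}/TiB: "
      f"${tib_scanned * assumed_rate_per_tib:.4f}")
```

A dry run doesn't execute the query or incur query charges, so it's a cheap way to sanity-check a query's cost before it goes into a scheduled job.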
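Finally, on the what-if question of four machines with eight CPUs versus eight machines with four CPUs versus 32 machines with one CPU: the pricing calculator is the tool to use, but a rough sketch shows the kind of comparison it's doing. The hourly rates and the 3.75 GB-per-vCPU memory ratio below are made-up placeholders, not real GCP prices.

```python
# Rough what-if comparison of VM shapes that all deliver 32 vCPUs in total.
# The per-vCPU and per-GB-of-memory hourly rates are made-up placeholders;
# use the pricing calculator or the console's estimates for real numbers.

ASSUMED_VCPU_RATE_PER_HOUR = 0.03     # hypothetical $/vCPU/hour
ASSUMED_GB_RAM_RATE_PER_HOUR = 0.004  # hypothetical $/GB/hour
HOURS_PER_MONTH = 730

configs = [
    {"machines": 4,  "vcpus": 8, "ram_gb": 30},
    {"machines": 8,  "vcpus": 4, "ram_gb": 15},
    {"machines": 32, "vcpus": 1, "ram_gb": 3.75},
]

for c in configs:
    hourly = c["machines"] * (
        c["vcpus"] * ASSUMED_VCPU_RATE_PER_HOUR
        + c["ram_gb"] * ASSUMED_GB_RAM_RATE_PER_HOUR
    )
    print(f'{c["machines"]:>2} x {c["vcpus"]} vCPU / {c["ram_gb"]} GB RAM: '
          f"${hourly:.2f}/hour, ~${hourly * HOURS_PER_MONTH:.0f}/month")
```

Under this simple linear model all three shapes come out the same, which is why in a real what-if the interesting differences are often about failure domains, scheduling overhead, and how well the workload scales out, rather than the raw per-vCPU price.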