So let's talk about modern solutions. We talked a lot about the obstacles and the wish lists; let's talk about how we can solve these problems at a high level. First of all, we have the compute layer: how do we compute our workloads? This is where we really utilize Kubernetes and containers, which have become very, very popular. Hopefully all of you are familiar with containers and Kubernetes. We have Google Kubernetes Engine, GKE, which is a very mature product that has been generally available since August 2015. So it's been GA for a long time now, and Google has been running Kubernetes and containers privately inside our data centers for a long time, so we have a lot of experience in doing this. Google Kubernetes Engine gives you, in the cloud, the abstraction you need to use Kubernetes as a service: you don't have to manage the master, and you don't have to think about the underlying infrastructure. You get very complex and very interesting features, like highly available regional clusters and autoscaling, and we can provide you with more virtual machines if you need them. There are a lot of really cool things that happen with GKE in the cloud, and now you can extend that compute layer into your on-premise environment with GKE On-Prem. So you can have the same environment consistency across the different places, and a dashboard where you can manage both of these environments in the same way, from one centralized location. There are a lot of benefits to that. What we are really talking about here is the orchestrator rather than the containers: the thing that manages and automates the environment the containers run on top of. The developer creates containers and maybe configures a few YAML files; the environment underneath is what we're really concerned with, and that is GKE and GKE On-Prem working together, side by side, across a connection from your on-premise environment to the cloud.
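To picture the developer-facing piece described above (creating containers and configuring a few YAML files, while GKE handles the environment underneath), here is a minimal Kubernetes Deployment sketch; the name, labels, and image are hypothetical placeholders, not anything from the talk:

```yaml
# Minimal Deployment sketch: three replicas of a containerized web app.
# "hello-web" and the image path are illustrative placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello-web
  template:
    metadata:
      labels:
        app: hello-web
    spec:
      containers:
      - name: hello-web
        image: gcr.io/example-project/hello-web:1.0   # placeholder image
        ports:
        - containerPort: 8080
```

Because this is plain Kubernetes configuration, the same manifest can be applied unchanged to a GKE cluster in the cloud or a GKE On-Prem cluster, which is the environment consistency being described.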
The other solution is service management: a service mesh. When you create a solution on top of Kubernetes, we like to think of it as a service-centric architecture. We think about a service, and we think about the pods that are part of that service, or the deployment; we have replica sets, we have [inaudible], we have canaries; a lot of things are centered on the service. Therefore, once you have a lot of services, you need to find a way to manage them. You have to find a way to secure the connections between them, handle identity, telemetry; there is a lot you have to think about now that you have a service-oriented infrastructure, which is almost a by-product of using Kubernetes, and that is what Istio is here to help us do. Istio is an open-source technology for a service mesh. You can run the same Istio open source yourself, or you can use the add-on in the cloud, which just helps you get started with it quicker, and the two can communicate and collaborate across that environment, which is really, really nice. You can have a self-managed open-source version of Istio and create seamless interoperability between the environments, across the Internet, or even across a private connection if you choose to do so. The next bit is when you have more than one cluster. A cluster is a sovereign unit in Kubernetes; this is where you have a bit of a boundary as an admin. Inside of a cluster you have namespaces, which are logical, but at the end of the day, the sovereign unit you have with Kubernetes is the cluster. You can have multiple clusters: multiple GKE clusters in the same project, multiple projects in GKE, or multiple locations. So you can have one project, and you can have "Ubernetes", which is by far my favorite term to come out of Kubernetes so far. So you can have a lot of clusters around the world.
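To make the canary idea mentioned above concrete, here is a sketch of how Istio splits traffic between two versions of a service using a VirtualService and a DestinationRule; the service name, subset labels, and 90/10 weights are all assumptions for illustration, not values from the talk:

```yaml
# Route 90% of traffic to v1 and 10% to a canary v2 of a service.
# "hello-web" and the version labels are illustrative placeholders.
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: hello-web
spec:
  host: hello-web
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: hello-web
spec:
  hosts:
  - hello-web
  http:
  - route:
    - destination:
        host: hello-web
        subset: v1
      weight: 90
    - destination:
        host: hello-web
        subset: v2
      weight: 10
```

The mesh's sidecar proxies enforce this split for every caller, which is also where the security, identity, and telemetry concerns mentioned above get handled, without any change to the application code.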
They can be as close as possible to the users. But again, they are clusters, and they are sovereign; they don't really collaborate unless you use some higher level of abstraction. Therefore, you need a way to enforce policy on them and to make sure that the configuration is in sync across all of them. This is another problem that you have to solve, especially when it comes to on-premise and the cloud: you want to make sure that your configuration is in sync, that policies are enforced, and that you have resource management and consistency across them. Lastly, we have observability. Observability is something we want to be consistent across all of our environments, hopefully with a single pane of glass, so you can see your entire environment in one place and draw correlations: how does that workload behave on-premise versus in the cloud? How can I shift the traffic, optimize my workloads, and so on? That is something you can do with a centralized observability mechanism.
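One common way to keep configuration in sync and policies enforced across many sovereign clusters, as described above, is to keep declarative config in a Git repository and have a sync agent on every cluster (for example, Anthos Config Management on GKE and GKE On-Prem) apply it continuously. A sketch of one such synced resource follows; the repo path, namespace, and quota values are hypothetical:

```yaml
# e.g. namespaces/team-a/quota.yaml in a central policy Git repo.
# A sync agent applies this to every enrolled cluster, cloud or on-prem,
# so "team-a" gets the same resource limits everywhere.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-quota
  namespace: team-a
spec:
  hard:
    pods: "20"
    requests.cpu: "10"
    requests.memory: 16Gi
```

Because the Git repository is the source of truth, a drifted or hand-edited cluster is reverted to the declared state, which is what gives you the consistency across environments that the talk is asking for.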