Now on to our second environment, Google Kubernetes Engine. Google Kubernetes Engine, or GKE, is a managed service designed to run containerized applications. We'll start the section by introducing Docker containers and the Kubernetes container orchestration system. A concise definition of a container is that it provides lightweight application virtualization. We're familiar with virtual machines, which virtualize the computer, providing CPU, memory, disk, and networking resources to the operating system. This level of virtualization is resource intensive: it's common to allocate multiple cores, gigabytes of RAM, and disk space to run a single virtual machine. If we installed every part of our application onto a single virtual machine, we could potentially run into the problem called "works on my machine," where each part's dependencies are uncontrolled. Containers are a response to the desire in modern application development to keep each component part of the application in a controlled and isolated environment. With containers, each piece of software is packaged up into a standalone image, but it's not a full operating system. Only the libraries and settings needed to ensure the application can run are included, so it's easy to ensure a lightweight, self-contained, and reproducible deployment for your application. The most popular software for containerization is Docker, although it's not the only format. While containers make it easy to package software for deployment, we also need to deploy and run our containerized applications. Kubernetes is an open-source container orchestration system created by Google; it enables you to manage containerized applications.
An in-depth discussion of Kubernetes is outside the scope of this class, but just be aware that you'll be able to exploit the same types of features that we've seen for creating resilient workloads on Google Cloud. For example, Kubernetes supports autoscaling and load balancing using a declarative format. Google Cloud contains several resources to support containerized software. Firstly, GKE enables you to create a cluster to run your Kubernetes applications. Secondly, Container Registry gives you a secure location to store the Docker images that will be deployed to the GKE cluster. So what does that mean for ASP.NET Core applications? Well, first we need to publish our application, and we do this with the dotnet command-line tool's publish command. This builds an output DLL file that's used as the entry point for our application. Next, we can turn our attention to packaging up our application as a Docker image. We do this by writing a Dockerfile, which is a specification that enables Docker to build the image. This slide shows the entire contents of the Dockerfile. Microsoft maintains an image for .NET Core, which we use as a starting point for ASP.NET Core web applications. Recall that we said in a containerized environment, the image contains only the libraries and the configuration needed to run the application. So, starting with the Microsoft .NET base image, we copy over the files from the publish step into our new application image. The next statements configure the network environment. Remember that by default, ASP.NET Core applications listen on port 5000. We could configure GKE to map this port to make it available, but later on we'll be using this Docker image on App Engine, and App Engine requires that our web application listen for requests on port 8080. So we expose port 8080 from the container, and then set the ASPNETCORE_URLS environment variable to configure the protocol, IP address, and port.
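Since the slide itself isn't reproduced here, the steps just described can be sketched as follows. The application name MyApp, the out output folder, and the base image tag are assumptions for illustration, not taken from the slide. First, the publish command:

```shell
dotnet publish -c Release -o out
```

Then a minimal Dockerfile along the lines described:

```dockerfile
# Start from Microsoft's ASP.NET Core runtime image (the tag is an assumption)
FROM microsoft/aspnetcore:2.0
WORKDIR /app
# Copy the output of the publish step into the image
COPY out .
# App Engine (and our GKE setup) expects the app to listen on port 8080
EXPOSE 8080
ENV ASPNETCORE_URLS=http://*:8080
# Run the published DLL when the container starts
ENTRYPOINT ["dotnet", "MyApp.dll"]
```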
The final statement is the command to run when the container starts. Notice how we're running the dotnet command and providing the DLL generated by the publish step as the argument to the command. This will start the web server. Now that we have our Dockerfile, we'll use it to build our Docker image. Remember that the Docker image is a complete, isolated application that can be executed on a container runtime such as Docker's runtime or Google Kubernetes Engine. This slide contains the complete set of commands to run on the command line to create a Docker image, store it in Container Registry, and then run the application on GKE. The first statement builds the Docker image using the docker command-line utility. The -t switch provides a tag for the image, and the tag must follow the format shown on the slide: gcr.io, slash, the project ID, slash, then your choice of application name. The command on the slide contains a placeholder for your project ID, so you need to replace it. Also, it's really easy to miss the final period, which means the current folder; this command is run in the folder that contains the Dockerfile we read previously. Next, we can use the gcloud docker command to push the image to Container Registry. Again, you must use the same format string you used with the docker build command. Finally, we use the kubectl command-line utility to deploy our application. There are lots of ways of doing this, and we've picked a simple one, as we only have one container to deploy to our cluster. Using kubectl run, we specify our choice of application name, the image to use (making use of the format string for the image in the registry), and the port number where the application is listening for requests. A containerized application is deployed, running, and listening for requests. However, we need to expose the deployment to allow clients to connect to it. We do this with a final kubectl command.
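The command sequence described above might look like this. The application name myapp is an assumption for illustration, [PROJECT_ID] is a placeholder you must replace with your own project ID, and the gcloud docker push syntax is the form used at the time of this course:

```shell
# Build the image, tagging it in the gcr.io/[PROJECT_ID]/name format
# (note the trailing period: build from the current folder)
docker build -t gcr.io/[PROJECT_ID]/myapp .

# Push the image to Container Registry using the same format string
gcloud docker -- push gcr.io/[PROJECT_ID]/myapp

# Deploy to the GKE cluster, pointing at the image in the registry
# and the port where the application listens
kubectl run myapp --image=gcr.io/[PROJECT_ID]/myapp --port=8080
```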
You can see that with kubectl expose deployment, we can easily set up a load balancer and specify that requests from clients on port 80 should be routed to the containerized application on port 8080.
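That final command might look like this, where the deployment name myapp is an assumption for illustration:

```shell
# Create a load-balancer service mapping external port 80
# to the container's port 8080
kubectl expose deployment myapp --type=LoadBalancer --port=80 --target-port=8080
```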