In this demonstration, I'll create a Kubernetes Engine cluster and deploy a load-balanced service to it. Then I'll scale the service and we'll see what happens.

First, let's use the GCP console to confirm that the APIs we need are enabled. In the Products and Services menu, we scroll down to APIs and Services. We're looking for the Kubernetes Engine API and the Container Registry API. There's the Kubernetes Engine API; it's enabled. There's Container Registry; it's also enabled.

Now we're ready to start a cluster. For this activity, we'll use the command line in Cloud Shell. For convenience, I'll define an environment variable that contains my preferred GCP zone. Then I'll launch a Kubernetes cluster in that zone, with two nodes. Once the cluster is ready, let's confirm which version of Kubernetes it's running. It's version 1.8. When you launch a Kubernetes cluster, you may see a newer version.

Remember that Kubernetes cluster nodes are Compute Engine virtual machines. Let's go back to the GCP console and view them. In the Products and Services menu, scroll to Compute Engine and click VM instances. There are our cluster nodes. We can also view the cluster in the Kubernetes Engine console, which reports the cluster's name, its location, and its size.

Now let's return to Cloud Shell and run a web server in our cluster. We've created a Kubernetes deployment called nginx, consisting of a single pod. Let's confirm that it's running. There's our pod. Next, let's expose the deployment we created so that clients from outside Kubernetes can access it, and then view the new service. It takes a moment for an external IP address to be assigned. Once one has been assigned, let's visit that IP address using a web browser. We see the nginx home page: our web server, running inside a Kubernetes deployment, is accessible from the Internet.

Now, let's scale up our deployment.
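The cluster-creation and deployment steps narrated so far can be sketched with commands like the following. The zone, the cluster name `webfrontend`, and the use of `kubectl create deployment` (rather than the older `kubectl run`, which created deployments on the Kubernetes 1.8-era clusters this lab was recorded on) are assumptions, not taken verbatim from the transcript:

```shell
# Store a preferred GCP zone in an environment variable for convenience
# (the zone shown is illustrative).
export MY_ZONE=us-central1-a

# Launch a two-node Kubernetes Engine cluster in that zone
# (the cluster name "webfrontend" is an assumption).
gcloud container clusters create webfrontend --zone $MY_ZONE --num-nodes 2

# Confirm which version of Kubernetes the cluster is running.
kubectl version

# Run a web server: a deployment named nginx consisting of a single pod.
kubectl create deployment nginx --image=nginx
kubectl get pods

# Expose the deployment so clients outside the cluster can reach it.
kubectl expose deployment nginx --port 80 --type LoadBalancer

# View the new service; repeat until an external IP address is assigned.
kubectl get services
```

Once `kubectl get services` shows an external IP, visiting it in a browser should return the nginx home page.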
We would do this if load were rising. We named the deployment and specified the new number of replicas. Now let's look at the pods: there are additional pods, and they're now in the Running state. Let's also confirm that the external IP address did not change. It's the same. Refreshing the home page confirms that the web server deployment continues to work.

In this lab, I created a Kubernetes Engine cluster, deployed a load-balanced service to it, and tried a scaling operation, and we saw how seamlessly it worked.
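The scaling step above can be sketched as follows; the replica count of 3 is illustrative, since the transcript doesn't state the number:

```shell
# Scale the nginx deployment by naming it and specifying the new
# number of replicas (the count shown is an assumption).
kubectl scale deployment nginx --replicas 3

# Watch the additional pods come up and reach the Running state.
kubectl get pods

# Confirm the service's external IP address has not changed.
kubectl get services
```

Because the Service load-balances across whatever pods match its selector, the external IP stays stable while the deployment scales behind it.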