Hello and welcome to this demo on setting up cluster access. In the previous demo, we saw how to create an OKE cluster using the Quick Create workflow. Now in this demo, in order to access the cluster using kubectl, we will have to set up a Kubernetes configuration file known as a kubeconfig. Having set up the kubeconfig file, we will be able to use kubectl to manage the cluster. As we studied in the lessons, you can use two methods to set up access to the cluster: one is using Cloud Shell, and the other is using your local CLI. Let's get started. I've logged in to my OCI Console. Remember, when a cluster's Kubernetes API endpoint has a public IP address, you can access the cluster in Cloud Shell by setting up a kubeconfig file. But if your Kubernetes API endpoint is private and you wish to access it using Cloud Shell, you need to configure a bastion using the Oracle Cloud Infrastructure Bastion service. Let's go ahead and set up the kubeconfig file using Cloud Shell. Let us first go to our Kubernetes cluster dashboard. I'll click on the hamburger menu, go to "Developer Services", and click on "Kubernetes Clusters". Make sure you are in the right compartment, the one where your cluster was created. Click on the cluster name. The cluster page shows you the details of the cluster. This is the same cluster that we created in our previous demo. Now you can click on the "Access Cluster" button to display the "Access Your Cluster" dialog box. As you can see, you have two options to access your cluster: Cloud Shell access and local access. For now, we will go ahead with Cloud Shell access. Click on the "Launch Cloud Shell" button to launch your Cloud Shell window. I now have terminal access in Cloud Shell. Let me quickly clear the screen. Now, I simply need to copy this command and paste it into the Cloud Shell terminal.
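For reference, the command copied from the dialog box looks roughly like the following sketch. The cluster OCID and region shown here are placeholders, not the actual values from the demo; always use the exact command the Console generates for your cluster.

```shell
# Create (or merge) a kubeconfig entry for the cluster.
# The OCID and region below are placeholders -- copy the exact
# command shown in your own "Access Your Cluster" dialog box.
oci ce cluster create-kubeconfig \
  --cluster-id ocid1.cluster.oc1.iad.aaaaexample \
  --file $HOME/.kube/config \
  --region us-ashburn-1 \
  --token-version 2.0.0 \
  --kube-endpoint PUBLIC_ENDPOINT
```

The `--kube-endpoint PUBLIC_ENDPOINT` flag is what tells the CLI to write an entry for the cluster's public API endpoint rather than the private one.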
As we can understand from the command, it creates an entry in our kubeconfig file, specifying the OCID of the cluster that we just created and the region in which the cluster is created, and it tells the CLI to create an entry for the public API endpoint of the Kubernetes cluster. The default location for your kubeconfig file is the .kube directory within your home directory, and the kubeconfig file name is config. Just note that if a kubeconfig file already exists in that location and you are setting up cluster access for a new cluster, the details of the new cluster will be added as a new context in the existing kubeconfig file, and the current-context element in the kubeconfig file will be set to point to the newly added context. Let me go ahead and hit "Enter". For me, the kubeconfig file already existed, so the command added a new config and merged it with the existing contexts. Now that this new config has been merged into my existing config file, the current context is pointing to this cluster. Let me quickly verify that kubectl can access the cluster. In order to do so, I will use kubectl commands to interact with my cluster. Let me check the nodes on the cluster using kubectl get nodes. As you can see, my cluster is running three nodes within the node pool, and all three nodes can be seen over here. With this, we confirm that cluster access has been set up using Cloud Shell. Now I can use kubectl commands to manage deployments, services, nodes, and other components of the Kubernetes cluster from the command line. Now that we're done with setting up Cloud Shell access to our cluster, we'll go ahead and set up local access. In order to do so, and in the interest of time, I've already created a compute instance which I'll be using as my developer machine to set up local access. Let me quickly take you to the compute instances.
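The verification above can be sketched with these two commands. They must be run against a live cluster, so the output will vary; with a three-node pool you would expect three entries in the Ready state.

```shell
# Confirm which context the merged kubeconfig now points to
kubectl config current-context

# List the worker nodes in the cluster; with a three-node
# pool, three entries should appear in the Ready state
kubectl get nodes
```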
I've created a machine named Developer-Mahendra, which I'll be accessing remotely and using to set up local access to the cluster. Let's move into the SSH session of my developer machine. To remind you, to configure local access you need to first generate an API signing key pair, then upload the public key of your API signing key pair to your OCI account, and then install and configure the Oracle Cloud Infrastructure CLI. In the interest of time, I've already done all of these steps. You can always refer to the Oracle documentation to perform the same. You also need to install the kubectl tool on your developer machine, so let's go ahead and do that now. Let me first check whether kubectl is configured on the machine or not. You can do that using the kubectl version command, and it says command not found, which means we'll have to install kubectl in order to interact with our Kubernetes cluster. But before we go on to install kubectl, let me tell you that the kubectl version must always be within one minor version of your cluster's version. As you can see, our Kubernetes version is 1.21.5, which means our kubectl version can be anywhere between 1.20 and 1.22. Let's go ahead and install the kubectl tool on our local machine. To install kubectl, we first run a curl command to download the kubectl binary. As you can see from the URL, I'm downloading version 1.21.0. Let me hit "Enter". The kubectl binary is downloaded. Now to install the kubectl tool, we will run this command. kubectl has been installed. Let me go and check whether kubectl is installed correctly. As you can see, kubectl has been installed. On our cluster page, I'll again select the "Access Cluster" button, and this time we'll use the local access option. Let's follow the steps given in this dialog box to set up cluster access.
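On a Linux x86_64 developer machine, the download-and-install steps described above look roughly like this (version 1.21.0, as in the demo; the download needs network access, and the install step needs sudo):

```shell
# Download the kubectl 1.21.0 binary for Linux x86_64
curl -LO https://dl.k8s.io/release/v1.21.0/bin/linux/amd64/kubectl

# Install it into /usr/local/bin with standard ownership and permissions
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl

# Verify the installed client version
kubectl version --client
```

Pinning the download URL to a specific version is what keeps the client within the one-minor-version skew window of the 1.21.5 cluster.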
First, I will ensure that the OCI command line is set up correctly by using the oci -v command to check the version. Let me copy this; the oci -v command displays the version of the OCI CLI as its output. The next step is to create a directory to contain the kubeconfig file. Let me just copy this command and run it on my local machine. The directory is created. The next command is to access the kubeconfig for the cluster via the VCN-native public endpoint. Since we created a public endpoint, we will be using this command; if you're using a private endpoint, you would go for the other command. Let's copy this command and paste it over here. Hit "Enter". As it says, a new config has been written into the kubeconfig file at this location. Now we need to set the KUBECONFIG environment variable to point to the file for this cluster, so we'll use the export command and run it in the terminal. That's it. We're done with configuring local access to our cluster. Let's quickly check whether the access has been set up properly using the kubectl command. To do that, I'll use the kubectl get nodes command. There you go. We can see the nodes that are available on our cluster. The three worker nodes are displayed on the screen. This is similar to what we saw in Cloud Shell. With this, we come to the end of this demo on setting up cluster access. In this demo, we saw how to set up access to your cluster using Cloud Shell and a local machine. I hope you liked the demo. In the next demo, we will see how to deploy an application to the resources that we have created so far. See you in that demo. Thanks for watching.
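Taken together, the local-access steps amount to something like the following sketch. As before, the cluster OCID and region are placeholders; use the exact commands from your own "Access Your Cluster" dialog box.

```shell
# 1. Confirm the OCI CLI is installed and on the PATH
oci -v

# 2. Create a directory to hold the kubeconfig file
mkdir -p $HOME/.kube

# 3. Write a kubeconfig entry for the cluster's VCN-native
#    public endpoint (OCID and region are placeholders)
oci ce cluster create-kubeconfig \
  --cluster-id ocid1.cluster.oc1.iad.aaaaexample \
  --file $HOME/.kube/config \
  --region us-ashburn-1 \
  --token-version 2.0.0 \
  --kube-endpoint PUBLIC_ENDPOINT

# 4. Point kubectl at that kubeconfig file
export KUBECONFIG=$HOME/.kube/config

# 5. Verify access by listing the worker nodes
kubectl get nodes
```

Note that the export only affects the current shell session; to make it permanent, you would add the export line to your shell profile.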