Hello and welcome to this lesson on Prerequisites to Create an OKE Cluster. My name is Mahendra Mehra, and I am a senior training lead and evangelist with Oracle University.

Let's have a look at the prerequisites to create an OKE cluster. The first prerequisite is that you should have access to an Oracle Cloud Infrastructure tenancy. The tenancy must be subscribed to one or more regions in which Container Engine for Kubernetes is available.

The next prerequisite is to ensure that you have sufficient quota on the resources required to create your OKE cluster. Those resources include compute instance quota, block volume quota, and load balancer quota, depending on the type of cluster you wish to create.

Within your tenancy, there must already be a compartment to contain the necessary network resources, such as a VCN, subnets, an internet gateway, a route table, and a security list. Within that compartment, you must ensure that all of these network resources are appropriately configured in each region in which you want to create and deploy clusters.

To create and manage clusters, you must either belong to the tenancy's Administrators group, or to a group to which a policy grants the appropriate Container Engine for Kubernetes permissions.

To perform Kubernetes operations on the cluster, you must be able to run the Kubernetes command-line tool, kubectl. You can use the kubectl installation included in Cloud Shell, or you can use a local installation of kubectl. Either way, you must set up your own copy of the cluster's kubeconfig configuration file.

Now let's take a quick look at some networking configuration. At the time of cluster creation, especially when you're using the API or the custom create workflow, you can provide custom CIDR blocks for the pods and services. Let's look at what this is and how it impacts the cluster.

Kubernetes pod networking ensures that every pod in the cluster gets an IP address. With OKE, you have the ability to provide the CIDR blocks from which these IP addresses are allocated to the pods and services. Here on the screen, we can see that the pod CIDR block is set to 10.244.0.0/16, which gives us a total of 65,536 IP addresses. We can also see from the image that OKE has assigned a /25 block to each of the nodes. So we are dividing the /16 pod CIDR into /25 blocks, one for each node. A /16 yields up to 2^(25-16) = 512 such /25 blocks, which means we can provision up to 512 nodes in our cluster before exhausting the pod CIDR. That sets a limit of 512 nodes in a cluster when we use the default configuration. To support more nodes, we need to choose a larger pod CIDR block when we are creating the cluster. Remember that the pod CIDR block cannot be changed once the cluster is created, so we need to plan for this ahead of time.

While we're on the topic of networking, let's also look at some design considerations, or best practices in general, for our VCN and subnet design. The design of the VCN itself is important to OKE, and really to any workload that you deploy on a cloud provider. You should ask yourself what needs to be on this VCN besides the cluster: other VM-based workloads, databases, or maybe multiple clusters that need to communicate with each other. These are all elements that factor into that design. Before we dive into subnet layouts, here are two quick sketches to make the earlier points concrete.
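First, the kubectl prerequisite. Setting up the cluster's kubeconfig is typically a single OCI CLI command, whether you run it in Cloud Shell (where the CLI comes preconfigured) or from a local installation. This is a minimal sketch, assuming an existing cluster; the cluster OCID and region shown are placeholders:

```bash
# Minimal sketch: generate a kubeconfig file for an existing OKE cluster.
# The cluster OCID and region below are placeholders, not real values.
oci ce cluster create-kubeconfig \
  --cluster-id "ocid1.cluster.oc1..exampleuniqueid" \
  --file "$HOME/.kube/config" \
  --region "us-ashburn-1" \
  --token-version 2.0.0 \
  --kube-endpoint PUBLIC_ENDPOINT

# Confirm that kubectl can reach the cluster.
kubectl get nodes
```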
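Second, the pod CIDR arithmetic. This is plain shell arithmetic, with no OCI calls, just confirming the block counts we derived a moment ago:

```bash
# Counting per-node /25 blocks inside the default 10.244.0.0/16 pod CIDR.
echo $(( 1 << (32 - 16) ))   # 2^16 = 65536 addresses in the /16
echo $(( 1 << (25 - 16) ))   # 2^9  = 512 per-node /25 blocks -> up to 512 nodes
echo $(( 1 << (32 - 25) ))   # 2^7  = 128 addresses in each node's /25
```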
An OKE cluster needs at least two subnets to function: one subnet for the worker nodes, and another for any load balancer that you might create, for instance when you deploy a service of type LoadBalancer. However, as you can see on the screen, we recommend one more subnet: the Kubernetes API endpoint subnet. This is where the Kubernetes API endpoint will be placed. Remember that each subnet can have a security list that controls its ingress and egress traffic. Putting the Kubernetes API endpoint in its own subnet ensures that it can be protected with a dedicated security list to restrict access. The Kubernetes API endpoint subnet only requires a small CIDR block, since the cluster only requires one IP address in that subnet. You can also segregate the worker nodes into multiple subnets, if you want to design it that way.

You'll notice that in this example on the screen right now, all three subnets we have created are public. This means a few things. Firstly, the API endpoint is reachable from the internet; a public IP is attached to it. That means you can communicate with the cluster through its API endpoint using clients like kubectl, regardless of where you are. Kubernetes will still authenticate you, but the endpoint is reachable from anywhere, by anyone. You can modify the ingress rules in the security list on that subnet to restrict access to specific CIDRs. Secondly, this type of design means the load balancer is also reachable from the internet, or from specific CIDRs. This is useful when you're deploying an internet-facing web application, or one that is exposed to users outside the VCN in general. The worker nodes are also visible in this use case, and you can attach public IPs to them. This is again useful when you want to SSH directly into those nodes, for example to perform troubleshooting or maintenance. This configuration is one of the simplest and might be good for several use cases. However, having everything public might expose more of your cluster than you want or need to.

Let's look at a small variation on this. Here, in this type of design, the subnets for the worker nodes are private, so the nodes are not reachable from outside the VCN. This is useful when you want to restrict the access and visibility of your worker nodes. In this configuration, your VCN should also have a service gateway and a NAT gateway, so that the worker nodes can reach external locations, such as image repositories to pull down Docker images, and can access other OCI services efficiently. To SSH into the nodes, you will also need a bastion host that you can jump through. The security rules on the worker node subnets should still ensure that the worker nodes can access the control plane, and that the load balancers can access the worker nodes hosting the pods.

Let's look at one more variation. In addition to the worker nodes, the Kubernetes endpoint is now also private. This means the Kubernetes API server is only available within the VCN, so external sources cannot communicate with it. This is typical in cases where you want to limit the accessibility of the API server itself and manage that access through a bastion, or through some mechanism like a CI/CD pipeline that is internal to the VCN.
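We'll continue with this private design in a moment. First, here is a quick sketch of the service of type LoadBalancer mentioned at the start of this discussion; deploying one is what prompts OKE to provision an OCI load balancer in the load balancer subnet. The application name and ports here are hypothetical:

```bash
# Hypothetical example: exposing an application through a LoadBalancer service,
# which causes OKE to provision an OCI load balancer in the load balancer subnet.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: my-app          # hypothetical application name
spec:
  type: LoadBalancer
  selector:
    app: my-app         # matches the pods backing this service
  ports:
  - port: 80            # port the load balancer listens on
    targetPort: 8080    # port the pods serve on
EOF
```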
As before, the route tables will now need to include routes to the service gateway and NAT gateway for the Kubernetes API endpoint subnet. Now we are not simply relying on the API server authenticating requests; we are also limiting who can see the server itself. Inherently, this lends itself to better security practices, since we are reducing our attack surface. Since the load balancers are still exposed, they can still expose applications to the internet.

Now finally, let's take a look at the design where all the subnets are private. In this configuration, the cluster can only be accessed from within the VCN, and the applications deployed are also restricted to within the VCN. This is suitable for workloads that are essentially internal and private. It is also suitable when applications run in a private VCN and select services are exposed to the outside by peering that VCN with another VCN that might act as a broker, or through more complex network topologies.

Let's take a look at the policies required to work with OKE. To create, update, and delete clusters and node pools, users that are not members of the Administrators group must have permissions to work with cluster-related resources. To give users the necessary access, you must create a policy with several required policy statements for the groups to which those users belong. The policy statements shown on the screen are required to enable users to use Container Engine for Kubernetes to create, update, and delete clusters and node pools. In these policy statements, you should replace the location with either tenancy, if you are creating the policy in the tenancy's root compartment, or with compartment followed by the compartment name, if you are creating the policy in an individual compartment. Also make a note of the statement Allow service OKE to manage all-resources in tenancy; this particular policy must be set in the root compartment of your tenancy. These policies on the screen are required when you're using the custom create option while creating Oracle Container Engine for Kubernetes clusters. The policies required to create a cluster using the quick create workflow are also given on the screen; again, note that Allow service OKE to manage all-resources in tenancy must be set in the root compartment of your tenancy. A representative sketch of such statements appears just after the wrap-up below.

To wrap up, we looked at the various networking considerations, and we talked about OKE pod and service CIDR blocks and how they can impact the total cluster size. We also looked at a set of subnet configurations that one can choose from while creating OKE clusters. Finally, we talked about the policies required to create and manage OKE clusters. I hope you found this lesson useful. See you in the next one. Thanks for watching.
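For reference, here is a hedged sketch of the kind of policy statements this lesson refers to. The exact set shown on the slides may differ, and the group name and location below are placeholders, so treat this as illustrative rather than exhaustive:

```
Allow group <group-name> to manage cluster-family in <location>
Allow group <group-name> to manage instance-family in <location>
Allow group <group-name> to use subnets in <location>
Allow group <group-name> to use vnics in <location>
Allow group <group-name> to inspect compartments in <location>
Allow service OKE to manage all-resources in tenancy
```

As noted above, the last statement must be created in the root compartment of the tenancy.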