Hello and welcome to this lesson on introduction to OKE. My name is Mahendra Mehra, and I am a senior training lead and evangelist with Oracle University. Oracle Cloud Infrastructure Container Engine for Kubernetes is a fully managed, scalable, and highly available service that you can use to deploy your containerized applications to the cloud. Use Container Engine for Kubernetes, often abbreviated as OKE, when your development teams want to reliably build, deploy, and manage cloud-native applications. You specify the compute resources that your applications require, and Container Engine for Kubernetes provisions them on OCI in your existing OCI tenancy. Container Engine for Kubernetes makes use of the open-source system Kubernetes. You can access Container Engine for Kubernetes to define and create Kubernetes clusters using the console and the REST API. You can access the clusters you create using the Kubernetes command-line tool, kubectl, or using the Kubernetes dashboard and the Kubernetes API.

When and why should you use Container Engine for Kubernetes? Use OKE when it is too complex, costly, and time-consuming to build and maintain a Kubernetes environment yourself. Managing the components of your Kubernetes control plane, like the API server, scheduler, and etcd, managing the components of your data plane, performing in-place upgrades, deploying parallel clusters, and managing your container networking and storage can all be difficult at times. Use OKE when it is hard to integrate Kubernetes with a registry and other CI/CD tools for container life-cycle management, and when it is difficult to manage and control your team's access to production clusters.

Let's understand the key benefits of OKE. The key benefit that OKE offers is that it enables developers to get started and deploy containers quickly, without worrying about the underlying complexities. It gives DevOps teams visibility and control for Kubernetes management.
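To make the kubectl access path above concrete, here is a minimal sketch of fetching a kubeconfig for an OKE cluster with the OCI CLI and then talking to the managed control plane. The cluster OCID and region below are placeholders, not values from this lesson:

```shell
# Generate a kubeconfig entry for an existing OKE cluster.
# The cluster OCID and region are placeholders -- substitute your own.
oci ce cluster create-kubeconfig \
  --cluster-id ocid1.cluster.oc1..exampleuniqueID \
  --file ~/.kube/config \
  --region us-ashburn-1 \
  --token-version 2.0.0

# Verify access with kubectl against the Oracle-managed control plane.
kubectl get nodes
kubectl cluster-info
```

This requires the OCI CLI to be installed and authenticated against your tenancy; kubectl then uses the generated kubeconfig to reach the cluster's API server.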
You should also go for OKE because it combines the production-grade container orchestration of open Kubernetes with the control, security, IAM, and highly predictable performance of Oracle's next-generation cloud infrastructure. When it comes to managing the components of a cluster, all the control plane node components, like the kube-controller-manager, kube-apiserver, kube-scheduler, etcd, and the cloud controller manager, are managed by Oracle. We make sure that multiple copies of these control plane components are created across different availability domains. We also manage the Kubernetes dashboard, as well as the self-healing mechanisms of the cluster and the worker nodes. These are all created and managed within the Oracle tenancy. On the customer side, customers manage the worker nodes they create using different compute shapes, as these are created and managed in the user tenancy.

Let's look at node pools in a little more detail. We saw that a node pool is a subset of worker nodes within a cluster that all have the same configuration. This effectively becomes the unit of scaling infrastructure in OKE clusters. Compute capacity in a cluster is managed by scaling the node pools or by creating new node pools. Remember, a node pool can be scaled to zero, that is, a pool with no nodes in it. You can also create and manage these node pools to meet the varying demands of different workload types. Clusters can have multiple node pools, and they can be heterogeneous: for example, a node pool with four-core VMs, another one with 16-core VMs, and maybe a third one that is completely different and uses a bare metal machine. On the screen here we have two node pools: one with standard x86 E3 Flex shapes, which are AMD CPU-based machines on which you can configure the number of CPUs per VM, and another node pool that is Arm-based and uses A1 bare metal machines.
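The scale-to-zero behavior described above can be driven from the OCI CLI. A hedged sketch, with a placeholder node pool OCID rather than one from this lesson:

```shell
# Scale an existing node pool down to zero worker nodes.
# The node pool OCID is a placeholder -- substitute your own.
oci ce node-pool update \
  --node-pool-id ocid1.nodepool.oc1..exampleuniqueID \
  --size 0

# Later, scale the same pool back up when capacity is needed again.
oci ce node-pool update \
  --node-pool-id ocid1.nodepool.oc1..exampleuniqueID \
  --size 3
```

Scaling to zero keeps the node pool definition (shape, image, placement) intact while releasing all of its compute, which is useful for dev or batch pools that do not need to run continuously.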
The placement configuration also differs in the image, with the Arm-based node pool being deployed only in availability domains 1 and 2, whereas the standard x86 E3 node pool uses all three availability domains. A configuration like this is perfectly valid. Workloads can also be targeted to specific nodes using standard Kubernetes methods like label selectors.

Let's now review some of the shapes and operating systems that are supported by OKE worker nodes. Most shapes are supported, with a couple of exceptions, like bare metal shapes with RDMA or the micro shapes provided in the Always Free tier. Likewise, most versions of Oracle Linux are supported, as well as custom images based on supported Oracle Linux versions. Custom images can be based on official images only. There are some considerations here, though. Container Engine for Kubernetes installs Kubernetes on top of a custom image, and Kubernetes or the installation software might change certain configurations. Custom images must have access to a YUM repository, which can be a public or a private repository. Custom images must not use a customized cloud-init. You can perform post-provisioning customizations, either using SSH or DaemonSets. As a best practice, ensure that you create a custom image from the most up-to-date base image. Note that the console currently does not let you use a custom image; you have to use the API or CLI to create node pools that use custom images. We're always improving and expanding our support. Use the OCI command given on the screen to identify the latest supported images and shapes, as we continually expand the available options.

Let's now consider Kubernetes version support in OKE. Kubernetes is a fast-evolving technology, and frequent upgrade cycles are part of the Kubernetes lifecycle. The open-source Kubernetes project itself supports three minor versions, and Container Engine for Kubernetes likewise supports three versions of Kubernetes for new clusters.
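The exact command shown on the slide is not captured in this transcript, but an OCI CLI call that lists the images, shapes, and Kubernetes versions currently available for node pools is along these lines; take it as a sketch rather than the slide's literal text:

```shell
# List the node pool options (supported images, shapes, and
# Kubernetes versions) available in your tenancy.
# "all" queries the options without scoping to a specific cluster.
oci ce node-pool-options get --node-pool-option-id all
```

The JSON response includes the currently supported worker node images and shapes, which is how you can confirm whether a given shape or Oracle Linux image is usable before creating a node pool.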
That means you usually have three choices of Kubernetes version when you create a new cluster. Version support moves like a rolling window: when OKE adds support for a new version of Kubernetes, the oldest supported version gets at least 30 more days of continued support. The console will warn you if your cluster is on a version that will soon be unsupported. OKE sticks to the Kubernetes project's version skew policy, which basically allows worker nodes to be one or two minor versions behind the control plane's Kubernetes version. The Kubernetes version on the worker nodes cannot be more than two minor versions behind, and it also cannot be ahead of the control plane version.

To wrap up, we saw that OKE provides a highly available control plane that is managed by Oracle. Nodes are grouped into scalable groups called node pools. OKE supports a wide and expanding set of shapes and operating systems. Three versions of Kubernetes are always supported in a rolling fashion, and OKE sticks to the version skew policy of the Kubernetes project itself. We will dive into more details about OKE in our subsequent sessions. I hope you liked the video. Thanks for watching.