Hello and welcome to this lesson on creating an OKE cluster on OCI. My name is Mahindra Mehta and I am a Senior Training Lead and Evangelist with Oracle University. In this lesson, we will look at cluster creation and management processes. Let's get started. We will start with the cluster creation process. There are primarily three paths to create a cluster: the quick create and custom create workflows that are available through the console, and creating clusters through the API directly or using tools that leverage the API, like the OCI CLI or automation tools like Terraform. The quick create workflow assumes a set of defaults and makes the cluster creation process quick and simple. This is great for a user starting out with OKE and experimenting with it. It exposes a limited set of options so that it does not overwhelm a new user. The custom create workflow, on the other hand, gives the user more control over the creation process. Importantly, it brings the ability to use your own pre-created networking components. This is very common, because when you start using an OKE cluster, you might already want that cluster to be placed in an existing VCN. The other controls exposed by the custom create workflow include encryption options for Kubernetes secrets, the choice to enable pod security policies, using custom CIDR blocks for pods and services, and customizing node placement. This is in addition to all the options exposed by the quick create workflow, like the ability to choose the Kubernetes version, the endpoint visibility, and so on. Lastly, we have the API-driven or SDK-based approach, which offers the maximum control over the creation process. This is the preferred method for large-scale, production-quality deployments, as the automation makes the long-term management of the infrastructure easier and much more predictable and reproducible.
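To make the API-driven path concrete, here is a minimal sketch of assembling the kind of request body the OKE CreateCluster operation takes. This is illustrative only: the field names follow the public API's naming style but should be treated as assumptions, the OCIDs are placeholders, and a real deployment would go through the OCI SDK, the CLI, or Terraform rather than a plain dictionary.

```python
# Illustrative sketch: build a CreateCluster-style request body.
# Field names (name, compartmentId, vcnId, kubernetesVersion,
# endpointConfig) are assumptions modeled on the OKE API; the OCIDs
# below are placeholders, not real resources.

def build_create_cluster_request(name, compartment_id, vcn_id,
                                 kubernetes_version,
                                 endpoint_subnet_id=None,
                                 public_endpoint=False):
    """Assemble a CreateCluster-style payload (illustrative field names)."""
    request = {
        "name": name,
        "compartmentId": compartment_id,
        # Custom create and the API let you bring your own pre-created VCN.
        "vcnId": vcn_id,
        "kubernetesVersion": kubernetes_version,
    }
    if endpoint_subnet_id is not None:
        # Endpoint visibility: private unless a public IP is requested.
        request["endpointConfig"] = {
            "subnetId": endpoint_subnet_id,
            "isPublicIpEnabled": public_endpoint,
        }
    return request

req = build_create_cluster_request(
    name="demo-cluster",
    compartment_id="ocid1.compartment.oc1..exampleplaceholder",
    vcn_id="ocid1.vcn.oc1..exampleplaceholder",
    kubernetes_version="v1.28.2",
)
print(req["name"], req["kubernetesVersion"])
```

The value of scripting this rather than clicking through the console is exactly the point made above: the same payload can be checked into version control and replayed, which makes the environment reproducible.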
This is also the only method in which you can provide custom images for your worker nodes. And obviously, all the options exposed by the custom create and quick create workflows are available here as well. Let's look at some cluster design best practices. Creating an OKE cluster is easy, as we saw, but designing one that meets your needs now and in the future is a balancing act. There are several choices you might make, and several criteria that will influence your design: the node types, the characteristics of the workload, the operational considerations, the security posture, and the cost optimization. These are all vectors in our design. For instance, consider node types. If you want 96 cores in your cluster, how will you provision those 96 cores? Will you use 48 instances with two cores each? Or would you use four instances with 24 cores each, or something in between? The trade-off is that larger nodes offer easier management because you have fewer nodes to manage, but a failure on one node wipes out a large chunk of your application footprint, so the failure surface is larger. On the other hand, with really small nodes, you might end up with too many nodes, and the components that run on each node also require resources, so you end up spending a large chunk of each node's resources running the cluster itself rather than your workload. These are all vectors that play a part in the design process. When you use certain node types, you will want to manage those workloads differently: for instance, nodes that have GPUs attached to them, nodes with ARM-based processors, or HPC shapes. There are a lot of workload characteristics that you need to consider when designing a cluster, and there are operational considerations as well, because you need to think about how you upgrade the cluster and how many applications you are deploying onto it.
For instance, think about two applications on a single cluster, built by two different teams. You want to upgrade the cluster, so you need to make sure both applications are good with the upgrade. Maybe one team does their testing and they're good, but the other application is not ready to move to a newer version of Kubernetes, and that is going to block your upgrade. These are operational considerations that you should keep in mind when you design your cluster as well. Obviously, there are implications for cost optimization and scaling too. For instance, suppose you use a small number of really large nodes and you have just exceeded your scaling threshold, so a new instance needs to be created. But the instance being created is going to add far more capacity than you actually need. Since the unit of scaling is essentially the configuration of a node pool, the shape you use for your node pool is very important. These are just a few of the design vectors. Real applications and real cluster design are more of an art, and you really need to consider all these angles when you decide the sizing for your cluster. To wrap up: in this lesson, we looked at cluster creation and automation for cluster creation. We generally recommend that teams embrace the infrastructure-as-code practice to ensure that they have resilient, reproducible, and version-controlled environments. Finally, we touched on some of the best practices and design vectors for real-world cluster creation. I hope you found this lesson useful. Thanks for watching.