Welcome to our first guided exercise. In this guided exercise, we want to deploy a Kubernetes cluster on our local machine. There are actually two paths that you can choose. If you have enough resources on your computer, I would definitely encourage you to install minikube on your local machine. This makes things a little easier for you, because you have full administrator access to everything on your Kubernetes cluster. However, if you have difficulties installing minikube, if you don't have enough resources on your machine to run a full Kubernetes cluster, or for whatever other reason, you can use the Developer Sandbox for Red Hat OpenShift, which is essentially Red Hat OpenShift in the cloud. We will take a look at that as well. We'll use Red Hat OpenShift as a Kubernetes distribution in this and a following Kubernetes course, and that's perfectly valid: we can explore pure Kubernetes functionality on Red Hat OpenShift, because Red Hat OpenShift is Kubernetes at its core; it builds on top of Kubernetes.

We start by installing minikube. I would definitely encourage you to take a look at the minikube documentation as well. Minikube is essentially a single binary which you put on your PATH and execute; the minikube binary spawns a Kubernetes cluster. It supports the Linux, macOS, and Windows operating systems, and we provide instructions on how to install minikube on all of the supported operating systems. But in case you run into difficulties, you might find the official documentation helpful as well.

On the Linux operating system, which is what I'm using, specifically Fedora Linux, installing minikube is extremely simple. All we have to do is install a couple of packages, which I have already done on my system. We install the @virtualization group of packages.
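On Fedora, the installation steps described here might look roughly like the following sketch. The exact package names for the KVM stack are an assumption on my part (the course only says "qemu, libvirt, kvm, etc."), so check the guided exercise text or the minikube documentation for the authoritative list:

```shell
# Install the virtualization package group on Fedora
sudo dnf install @virtualization

# Install the KVM/libvirt stack (package names are illustrative)
sudo dnf install qemu-kvm libvirt

# Start the libvirtd service now and enable it on every boot
sudo systemctl enable --now libvirtd

# Install the minikube package itself
sudo dnf install minikube
```

On Debian or Ubuntu the package manager and package names differ, so consult the official minikube installation page there.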
Then we install a set of KVM-related libraries: qemu, libvirt, KVM, and so on. Then we enable libvirtd; this is done with systemd, starting and enabling the libvirtd service. Then we simply install the minikube package. I'm using the dnf package manager; if you are using something like Debian or Ubuntu, you might have to go to the GitHub page or the official documentation and take a look at how to install minikube on the other Linux distributions.

Let me switch to my terminal application and verify that I have successfully installed minikube. I can execute minikube -h and see that I get some output, which is exactly what we want. That is the end state of installing minikube: in the terminal, we want to be able to execute minikube, for example with -h, which is the help flag, and get some output. If you get an error message, you might have to return to the previous steps.

Then we can take a look at actually starting minikube. I don't have to set the driver because I already have the driver set, so the command is simply minikube start. It seems that our minikube Kubernetes cluster has started successfully, and kubectl is configured to use the minikube cluster and default namespace by default. This looks quite good; we have no error messages shown to us. We can also take a look at minikube status, and we see that the host is running, the kubelet is running, the API server is running, and kubeconfig is configured, so this is all well and good.

Returning to our guided exercise, we also have steps for installing and configuring minikube on macOS systems as well as on Windows systems. If you are using one of these operating systems, take a look at those steps. Essentially, all we want is to get minikube into a state where minikube start deploys our Kubernetes cluster.
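The verification sequence I just ran in the terminal can be summarized as:

```shell
# Verify the binary is installed and on the PATH; this should print help text
minikube -h

# Create the local Kubernetes cluster (uses whichever driver is configured;
# this also points kubectl at the new cluster's default namespace)
minikube start

# Confirm host, kubelet, apiserver, and kubeconfig all report healthy
minikube status
```

If minikube -h errors out, revisit the installation steps before attempting minikube start.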
If you're having difficulties with these steps, take a look at the official documentation. Again, the end state with minikube is that minikube start works; depending on the virtualization driver, we pass, for example, --driver hyperv or --driver virtualbox. The end state should be a provisioned cluster, and we can verify the status of the minikube cluster with minikube status.

What we definitely need is to configure our minikube cluster by enabling the Ingress extension, so let's enable the ingress add-on. There we go, the ingress add-on is enabled, so this all seems to be well and good. We could also enable the dashboard add-on, but it is not required for this course; if you want to, you can explore it on your own. I will not enable the dashboard add-on, because we will not use it throughout this course.

That was our minikube installation, and if you got to this point, that's all well and good. Actually, let me skip forward and take a look at the last step of the minikube configuration: enabling external access to Ingress. Essentially, we want to configure our operating system to resolve the hostname hello.example.com to the IP address of the minikube virtual machine. We can do that quite easily: all of the supported operating systems have a hosts file. On Unix systems, this file is located at /etc/hosts, and on Windows, it is located at C:\Windows\System32\drivers\etc\hosts. We have to edit this file with administrative privileges and add our entry.

First, I have to discover the IP address of my minikube machine. Let me clean this up a little bit; I can do that by issuing minikube ip, and this is the IP of my minikube machine.
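The add-on and IP steps above look like this on the command line:

```shell
# Enable the Ingress controller add-on (required for this course)
minikube addons enable ingress

# Optional: the dashboard add-on is not used in this course, but you
# can explore it on your own if you like
# minikube addons enable dashboard

# Print the IP address of the minikube virtual machine; we will map
# hello.example.com to this address in the hosts file
minikube ip
```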
Then I can edit the hosts file. I'm using Linux, so on my machine it is /etc/hosts, and remember, you have to edit the file with administrative privileges; on Unix machines, this is done with sudo. We edit this file by adding our IP address at the end, followed by our hostname, which for us is hello.example.com, and that's it. That's all the modification we need; then we can save and quit the file. The same approach applies to macOS as well as Windows. This is all we need for minikube, and if you use minikube, you are done at this point.

If you want to use the Developer Sandbox instead, whether because you have difficulties with minikube or simply because you want to explore it, take a look at the developers.redhat.com address; we provide the exact URL in the first sub-step. Here we have our Developer Sandbox, and on this URL, we need to log in. If you don't have a Red Hat Developer account, which is a free account, clicking this button will take you to a login page where you can register. I'm already logged in, so clicking this button takes me to my OpenShift Sandbox. In the sandbox, I log into the Red Hat OpenShift Dedicated environment, and I'm presented with the Red Hat OpenShift user interface; this is the console. Essentially, at this point, we are done. If we see this environment, that's it; all we need is this environment, because Red Hat is running it for us. We provide steps on how to register, log in, and essentially get to this page.
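The resulting hosts file entry is a single line. The IP address below is only an illustrative value; use whatever minikube ip printed on your machine:

```shell
# Appended to /etc/hosts (or C:\Windows\System32\drivers\etc\hosts on
# Windows); 192.168.39.102 stands in for your actual `minikube ip` output
192.168.39.102  hello.example.com
```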
The one thing that you should be aware of is that, of course, you are not an administrator in this environment, so you cannot create more projects than the ones pre-created for you. We only have two projects available, the -dev and -stage projects, and whatever resources we create in these projects, we will have to delete ourselves. This is one of the disadvantages of using this cloud environment: we don't have administrator access. Last but not least, enabling external access to Ingress: while we configured that for minikube, for OpenShift Dedicated we will get to it at a later stage, when we use our Ingress.

At this point, hopefully, we have installed our Kubernetes cluster, and we are ready to take a look at kubectl and how we can control our Kubernetes cluster. I'll see you in the next video.
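Because the sandbox projects are pre-created and you cannot make new ones, cleanup means deleting the resources inside them. A rough sketch of what that cleanup could look like, where the project name is hypothetical (replace it with your actual sandbox project):

```shell
# Hypothetical example: "myuser-dev" stands in for your real sandbox
# project name. This removes the common workload resources you created
# in that project, since you cannot simply delete and recreate projects.
kubectl delete all --all --namespace myuser-dev
```

Note that "delete all" covers common resource types such as pods, deployments, and services, but not every kind of object; the course will cover resource management in more detail later.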