Welcome to our next video. In this video, we are going to be talking about troubleshooting and fixing the applications that we try to deploy onto Red Hat OpenShift. Troubleshooting an application is definitely something that every developer spends a lot of time on. Every developer makes a lot of mistakes when deploying an application, whether it be typos or something not working as they would expect. This is definitely something to be expected. We thought that we would take a look at some of the techniques that might help you hone in on where the issue is, and at some of the common issues that you can encounter while deploying your applications on Red Hat OpenShift.

For example, a very common way of deploying your application is by using the S2I process. In this course, we have used S2I almost exclusively. There's nothing wrong with S2I; it is a very simple way to take your source code and deploy it onto OpenShift. So if you are using S2I, and we are going to take a look at S2I in some depth in the future, it is useful to understand that there are actually two steps going on in S2I, and we have seen that in the course so far: there is the build step and there is the deployment step. The build step takes your source code and compiles it. It downloads all of the dependencies and packages, and pushes the resulting compiled image into the internal Red Hat OpenShift Container Platform registry. This is something that the BuildConfig resource does for us, so this is the build process, or build step. Then we have the deployment step. The deployment step takes the resource that the build step created, which is the already compiled application with all of its dependencies packaged, and it tries to start the application pod. It tries to get the application running.

This is quite useful to understand, because it means that there are two places where you can encounter an issue. For example, did you make a typo in the package.json file? If you did, where would your S2I process fail? Well, it would fail in the build step, because the build step would try to download all of the dependencies, the package.json file would not be readable by, for example, npm, and therefore the build step would fail. On the other hand, what if you make a typo in the start script of the package.json file? In that case, your package.json file is syntactically correct, but the start script does not start your application. Well, in that case the build step completes, but the deployment step will fail, because the deployment step will actually try to issue npm start, and npm start will not execute because you made a typo in the start script. It is useful to understand which step your application failed at, so you know where to hone in and which type of error to look for.

We can inspect these errors in both the web console and with the CLI, with the oc command-line utility. In the web console, for example, if the S2I process fails in the first phase, the build phase, we see that there is this type of an error. If the build process succeeds, we see this kind of a green checkmark. In the web console, this signifies the success or failure of the build itself. This is the first step of the S2I process. Then there's the second step, the deployment step, where we take the artifact from the build process and try to deploy it if the build process was successful.
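As a minimal sketch, here is how you might check each of the two steps from the command line with the oc utility; the application name myapp is just a placeholder for whatever your BuildConfig and Deployment are called:

    # List the builds and see whether the build step completed or failed
    oc get builds

    # Inspect the build log, for example when npm cannot parse package.json
    oc logs bc/myapp

    # If the build succeeded, check whether the deployment step produced a running pod
    oc get pods

    # Inspect the application log, for example when npm start fails because of a typo in the start script
    oc logs deployment/myapp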
So, notice that both of these have green check marks. The first step was successful, and then we try to deploy the artifact. If the artifact can be deployed, then we have this blue ring around our application. If it cannot be deployed, then we have this red ring around our application. Note that if we scale down our application, and you could have seen that in our previous guided exercise, this ring is just white; it's just empty, to signify that there are no pods. This tells you very basic information on where the issue occurred, in which step of the S2I process, and then you can actually click the application. You can click over here, and we can hone in. For example, if we have a failed build, we can see that over here; we see that the build has failed and we can view the logs.

The logs are extremely important in any debugging, because they provide us with the error message, and the error message usually tells us, for example, that it could not parse the package.json file. Then we know there is something wrong with the package.json file, and I as the developer have to go and check the package.json file. Or, vice versa, it could not download the dependencies. That tells you that there is something wrong, for example, with the internet connectivity; the package.json might be fine, but there might be some transient issue with the internet connectivity, and if I retry the build, it might succeed. Or, for example, the dependency that I want to download is not in the registry that my OpenShift cluster is trying to reach. This is the build log.

Then we have our successful build. This is the second case: the build was successful, but our deployment phase is not a success. The deployment phase fails, and so, in the same manner, we can click our application, the pod is in the error state, and we can take a look at the logs of the pod as well. Again, it is very important to see why the pod is in the error state. For example, we might be accessing something that we don't have permission to access, or we might be expecting something that is not there, for example, some variable or something else.

What happens when our S2I process finishes as it should, but our application does not behave as it should? So, our application builds and deploys, our pod runs, but the application has some bugs in it. Well, we can actually access the running pod. We have done that in the past. We can open a terminal window into that pod. This helps us to debug some issues; for example, we can see whether the application has everything that it needs, and this could be one cause of bugs. A different thing that you can do is connect a debugger. You can connect a remote debugger to your application; for example, Node.js and Java support remote debuggers, where you can actually debug your application that runs in the cloud while using your local IDE. This is definitely outside of the scope of this course, but it's good to know that this possibility exists in case you can't deploy the application locally.

Now, returning to the terminal that you can open in the pod, this is also extremely useful, for example, for debugging network connectivity. For example, your OpenShift cluster might experience a network partition, where one node of your OpenShift cluster is not accessible from a different node. Or, for example, in your organization there might be some security constraint where one application can't reach the other application, and so on.
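For example, a minimal sketch of opening that terminal from the command line and checking connectivity from inside the pod; the pod name, the service name, and the port are placeholders, and a tool such as curl is only available if the container image ships it:

    # Open a remote shell inside a running pod
    oc rsh myapp-7b9c5d4f6-x2kqw

    # Inside the pod, check the environment the application actually sees
    env | sort

    # Check whether another application is reachable from this pod
    curl -v http://other-service:8080/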
You might be able to verify that one application can reach a different application by using the terminal window. You might be able to see what the OpenShift environment looks like from the application's perspective. This terminal window can lead to somewhat advanced debugging, but again, it's very useful to know that it exists. You do not need the OpenShift web console for the terminal window; you can actually use the oc command-line utility to connect to your pod. You can also copy files to and from your pods in the OpenShift cluster. This is also very useful to know. You can use the oc command-line utility, for example, to download a log file from your pod, and then you can explore it in your local environment.

So we can access our running pod, that's perfectly fine, and we can debug our running application on OpenShift by attaching a remote debugger. Of course, when we identify an issue that needs a fix to the source code, we have to redeploy the application. If you modify the source code, you have to rebuild the application. If you deployed the application to OpenShift by using S2I, then you have to start a new build, wait for the application to redeploy, and then you can test your fix. Remember that we configured webhooks in the past. A webhook can be quite helpful to configure temporarily: while you are doing some debugging, you might want to configure a webhook until you are satisfied with the fix, and then you take the webhook down, for example.

OpenShift events are actually one of my favorite ways of seeing, at a high level, what's happening in the OpenShift cluster. Let me open up my OpenShift web console. We can see that this is a kind of high-level log of what's happening within your OpenShift cluster. This is in the Administrator perspective; you select Home and then Events. Just like almost everything else, you can also get the events by using the oc command-line utility. For example, when something is failing and you don't know why, the events might provide a hint as to what's going on. For example, when you have a failing application and everything looks perfectly fine within that pod, you might want to look at the events, and you see that there is some catastrophic failure: for example, a machine has died, or network connectivity has been lost, or something similar. Events zoom out and give you a more high-level overview of what's going on, and especially when there are some high-level failures, events are really good at showing you what the failures are within your OpenShift cluster. Those are events.

Then, definitely, environment variables are an extremely common way of making your application fail, and this is not me arguing against using environment variables. Environment variables are extremely useful, and we have used environment variables in order to decouple the configuration from our application. But it's very easy to make a mistake when configuring the environment variables. Mistake number 1 is simply not including the environment variables. Our application might expect a certain environment variable and we do not provide it. In that case, the application might simply fail, and we have to see in the logs why it fails, depending on the application. Some applications log these failures better than others.
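A hedged sketch of what those command-line equivalents might look like; the application name, pod name, and file paths are placeholders:

    # Copy a log file out of a pod to the local machine
    oc cp myapp-7b9c5d4f6-x2kqw:/tmp/app.log ./app.log

    # Start a new build after fixing the source code, and follow its log
    oc start-build myapp --follow

    # List the events in the current project, most recent last
    oc get events --sort-by='.lastTimestamp'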
A very common issue with missing environment variables, or rather with environment variables in general, is that an application expects a certain environment variable while you configure it in a different way. For example, an application could expect an environment variable like ENV_PROD, whereas you configure it as ENV-PROD. Suddenly, this does not work, because these two variables are of course not the same. For example, ENV_PROD could lead to some address, or, for example, a database value or something similar. Suddenly, our application does not know where PROD is. This is an extremely common typo. Usually it happens once or twice, and then you will be quite careful. These typos are quite easy to prevent if you have automation for deploying your applications. When you don't have automation for deploying applications, you have to be careful in configuring the environment variables. For example, when you have something like Helm, or when you have something like OpenShift templates, or even an operator that deploys your application, then you can test these quite easily and you know that your application works. When you are deploying your application pod by pod, these typos are bound to occur.

Another quite common typo, or issue, is that you misconfigure the actual value. For example, for your database you set the password to be x, so this is the DB, and for your actual application you also set the password environment variable, but you misconfigure it to be y. Both of these applications expect, for example, an environment variable that is called password, like PW, and that is all correct. However, you set the database password environment variable to x and the application password environment variable to y, and therefore the application does not authenticate to the database. Let me clean this up. That's why we recommend setting the value once, in something like a secret or a configuration map, and simply sharing this value with the database pod as well as your application pod; see the sketch at the end of this section.

How do we troubleshoot it? Well, again, as we have seen above: taking a look at the logs, taking a look at the behavior of the application, and taking a look at the configuration of the application. Basically, those are our three main ways of troubleshooting these problems. That's it for this lecture. Now let's take a look at how we can troubleshoot and deploy a failing application in Red Hat OpenShift.
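As mentioned above, a minimal sketch of setting a shared value once and injecting it into both pods; the secret name, the deployment names, and the password value are placeholders:

    # Store the database password once, in a secret
    oc create secret generic myapp-db --from-literal=PW=changeme

    # Inject the same value into both the database and the application deployments
    oc set env deployment/mydb --from=secret/myapp-db
    oc set env deployment/myapp --from=secret/myapp-db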