Let's go now to our last AutoML model training. But in this case, instead of using the web UI, I will show you how you can perform all the different tasks to train and create an AutoML model from the notebook using the Vertex AI SDK. To follow this lab, remember that in the setup I showed you how to create a notebook in Workbench under Vertex AI, and we cloned the GitHub repo called training-data-analyst. The lab that we will perform now is under this repo. Follow along with me on how to do this lab. If we go under Vertex AI, Workbench, you will see the notebook that we already created in the setup. If I click on Open JupyterLab, remember that in the setup I also showed you that we cloned a repo called training-data-analyst. If we go under this repo, we go under courses, then machine_learning, then deepdive2, and here, launching_into_ml. From here, we open the AutoML tabular classification model notebook. In this case, as I said, we will use the Vertex AI Model Builder SDK. All the things that we have done in the past from the console, we will now do programmatically using the Vertex AI Python library from the notebook. Let me restart the kernel and clear all the outputs to start from scratch with you.

First of all, we need to make some installations. Why? Because we need to be sure that we have the latest version of the Vertex AI client library and the Cloud Storage library installed. This is what we will do here: we will install these two latest libraries. Keep one thing in mind: once these two cells have run and finished, we need to restart the kernel for the installation to take effect, so this is what I am doing here. I am restarting the kernel, and here we will see the message to restart the kernel. Now we want to train and create an AutoML tabular classification model under Vertex AI using the notebook. We need to specify under which project we want to create this model, so we need to specify our project ID. To find our project ID, we have a command that will provide it to us, because this value is stored in an environment variable called PROJECT_ID. If we run this cell here, you see the name of my project ID. To show you that this is exactly my project ID, here you can see my project ID in the console, but we are obtaining this value programmatically using the gcloud command because it is stored in an environment variable. This is our project ID. Because we will create different folders to store all of our different assets, and we can run multiple training jobs or multiple deployment jobs, we always want to attach the timestamp to differentiate between different runs.

Now, we need to specify a Google Cloud Storage bucket under our project. Remember that in previous labs, when we worked with the AutoML Video Recognition model, we created a Google Cloud Storage bucket with the name of our project ID to store the batch prediction job results. Here, I will specify exactly the same bucket in this variable. This is why, if I run this make-bucket command here, it will give us an error. Why? Because this bucket is already created; we created it before. If we run the command to list all the different files that exist under this bucket, we only have the prediction results folder that contains the predicted results for our batch job from when we worked with the AutoML Video Recognition model in the previous lab. What will I do now? Remember, when we are working with AutoML, we first create a managed dataset.
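For reference, the setup cells described above look roughly like the following sketch. The exact package versions, the region, and the bucket name (here assumed to be named after the project ID, as in the earlier lab) are assumptions and may differ in your environment.

```python
from datetime import datetime

# Install the latest Vertex AI and Cloud Storage client libraries,
# then restart the kernel so the new versions are picked up.
!pip install --upgrade google-cloud-aiplatform
!pip install --upgrade google-cloud-storage

# Obtain the project ID programmatically with the gcloud command.
shell_output = !gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID:", PROJECT_ID)

# Timestamp appended to resource names to differentiate between runs.
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")

# Reuse the bucket named after the project ID created in the earlier lab;
# `gsutil mb` fails if the bucket already exists, which is expected here.
BUCKET_NAME = f"gs://{PROJECT_ID}"
REGION = "us-central1"  # assumed region
!gsutil mb -l $REGION $BUCKET_NAME
!gsutil ls $BUCKET_NAME
```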
In the past, we created the managed dataset using the web UI, but in this case, we want to create this managed dataset programmatically. The structured data will be in a CSV file. First, we need to be sure that we have this CSV file in a bucket under our project, and this is what I am doing here. In this case, we will create a classification model to predict, based on a lot of features related to a pet, whether this pet will be adopted or not. This data is already in a Google Cloud Storage bucket that we have access to, so I want to access that bucket and copy the CSV file into my own bucket. Good. Now let's run the job. For sure, we need to import the Vertex AI SDK. You can see here how we initialize Vertex AI under our project. Remember, before we created a managed dataset using the web UI; now we want to create a managed dataset programmatically. We are using the Vertex AI SDK, and here we want to create a managed dataset. Here, we assign the name that we want to give to this managed dataset. Here is the Google Cloud Storage bucket that contains the CSV file; that will be the data we want to import to create this managed dataset. Let's run this, okay, and here you can see that we are creating this tabular dataset. Now this is running, and now the job is finished. Let's see what happened. If we go under Vertex AI and then Datasets, these are the managed datasets that we have available in Vertex AI. You can see here that we have already created the PetFinder tabular dataset. As I said, in previous labs we did all of this with the web UI by clicking buttons; now we are doing it programmatically. Good.

Once we have the managed dataset created, now we need to launch our training job. Our training job will be a tabular training job. Remember, as when we did it with the web UI, we need to specify if it is a classification or a regression problem; in this case, classification. Here, we assign a name for this job. Okay? Here I specify which are the different features that we will use to train our model. These are all the different features that we will use to train our model. Once we have defined all of that, when we run our training job, we need to specify the target column, which is what we want to predict. In this case, it is the column called Adopted. Here we want to use 80 percent of our dataset for training, and the rest for validation and for testing. I want to call this model the adopted prediction model. I want to have early stopping enabled; this is why I set the disable-early-stopping option to False. I run this, and what is happening now is that a training job starts. If we go under Vertex AI and we go under Training, here we have a training job from 10 seconds ago, the PetFinder AutoML training job. Now we need to wait a few minutes until this training job is finished to then deploy our model. Again, what we will do in this lab is, instead of deploying the model using the web UI, we will deploy the model programmatically using the Vertex AI SDK, and you will see that it is super easy to do. We can also monitor the state of the training job from the notebook. Here we can see that the pipeline state is running; later we will see that the pipeline state has finished. Let's wait a few minutes until this training job is done. Great. Our training job already finished.
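Putting those steps together, the dataset creation and training cells look roughly like this sketch. The display names, the CSV path, and the validation and test fractions are illustrative assumptions; the classification setting, the Adopted target column, the 80 percent training split, and early stopping enabled come from the walkthrough above.

```python
from google.cloud import aiplatform

# Initialize the Vertex AI SDK with our project and staging bucket.
aiplatform.init(project=PROJECT_ID, staging_bucket=BUCKET_NAME)

# Create the managed tabular dataset from the CSV file in our bucket.
dataset = aiplatform.TabularDataset.create(
    display_name="petfinder-tabular-dataset",  # illustrative name
    gcs_source=f"{BUCKET_NAME}/petfinder-tabular-classification.csv",  # illustrative path
)

# Define an AutoML tabular training job as a classification problem.
job = aiplatform.AutoMLTabularTrainingJob(
    display_name="train-petfinder-automl",  # illustrative name
    optimization_prediction_type="classification",
)

# Launch training: set the target column, the data split, the model name,
# and keep early stopping enabled (disable_early_stopping=False).
model = job.run(
    dataset=dataset,
    target_column="Adopted",
    training_fraction_split=0.8,
    validation_fraction_split=0.1,  # assumed split
    test_fraction_split=0.1,        # assumed split
    model_display_name="adopted-prediction-model",  # illustrative name
    disable_early_stopping=False,
)
```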
If we return to our notebook here, we can see that this cell has already stopped because the training job finished, and now it's time to deploy the model from the SDK. Here we want to deploy the model that we just created, which is under this model object. We specify on which kind of machine we want to deploy this model; in this case, we select an n1-standard-4 machine. We run this cell, and what is happening now is that we are deploying this model to an endpoint. Once our model is deployed to an endpoint, we will be able to test it by making an online prediction. If we go to the Vertex AI console and then under Endpoints, here you see the adopted prediction model endpoint, because from the SDK we are creating this endpoint. Let's wait until the job finishes and the model is attached to this endpoint. Once this job is finished, we will be able to test our model by making an online prediction. Let's wait a few minutes while the deployment finishes, and then we'll test our model.

Great, our deployment job already finished. If we go under Vertex AI and then Endpoints: before, while the deployment was running, we had zero models here, but now we already have a model attached to this endpoint. Here we can see the model that is ready and attached to this endpoint. In this case, because we only have one model deployed, all the traffic will go to this model. Remember that now it is super easy to create some A/B testing or canary deployments: if we have more than one model deployed, we can split the traffic, for example 20 percent to one model and 80 percent to the other. Now that the model is deployed, let's predict with our model. Let's send our prediction request, in which we send these features and our model needs to predict the target. The target is to predict whether this pet, with these features, will be adopted or not. The model predicts a high score that the pet will be adopted. Just to mention that, to create this lab, we are working with a small sample of a bigger dataset, the PetFinder.my Adoption Prediction dataset that you have available on Kaggle. With that, we finish our lab of creating an AutoML tabular model using the Vertex AI SDK. See you in the next lab. Thank you.
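As a final reference, the deployment and online-prediction cells from this last part of the lab look roughly like the sketch below. The feature names and values in the prediction instance are illustrative; they have to match the columns the model was trained on.

```python
# Deploy the trained model to a new endpoint on an n1-standard-4 machine.
endpoint = model.deploy(machine_type="n1-standard-4")

# Send an online prediction request with one set of pet features.
# Feature names and values are illustrative and passed as strings here.
prediction = endpoint.predict(
    instances=[
        {
            "Type": "Cat",
            "Age": "3",
            "Breed1": "Tabby",
            "Gender": "Male",
            "Color1": "Black",
            "Color2": "White",
            "MaturitySize": "Small",
            "FurLength": "Short",
            "Vaccinated": "No",
            "Sterilized": "No",
            "Health": "Healthy",
            "Fee": "100",
            "PhotoAmt": "2",
        }
    ]
)
print(prediction)  # contains the predicted classes and their scores
```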