Now it is time to go over the main phases of a machine learning lifecycle and map them to the components or tasks within MLOps. When we look at machine learning projects, we identify three main phases: a discovery phase, a development phase, and a deployment phase.

In the discovery phase, identifying the business need and its use case allows for a clear plan of what a machine learning model will help us achieve. This phase is crucial because it establishes the problem or task that needs to be solved and how solving it will affect the business and the users consuming the product or solution augmented by machine learning. This phase is also when data exploration happens: recognizing which datasets are needed, whether the needed data is readily available and sufficient to train a model, and whether external datasets would be beneficial and how to acquire them. All of these considerations are part of the data exploration step. Then, depending on the tests to be performed, an algorithm is chosen by the data science team. The combination of data availability and algorithm choice, along with the decision of buying versus building the solution, becomes an important input to the feasibility assessment, where the team tries to uncover any problems that may arise during the development phase. One example: for a specific use case and question, the data may be available historically but not at inference time. That particular scenario might make the use case infeasible for ML, and a more thorough analysis may have to be performed before the use case can be pursued further. Another aspect of the discovery phase is prioritizing the different use cases the business has that could become potential ML projects, but that discussion is out of the scope of this course.

Now, for the development phase, you may ask, how does development start on this chart during data exploration? Shouldn't we wait for the result of the feasibility study? What happens in reality is that even for data exploration and algorithm selection, some proofs of concept need to be developed, and that is what we refer to here. After the feasibility assessment gives the go-ahead, the real development starts. All the data steps, such as cleaning, extracting, analyzing, and transforming, will be implemented during data pipeline creation. As the data pipeline evolves, it must ensure that every operation needed on the data, for both batch and streaming, for training and for inference, is performed consistently to avoid training-serving skew. A minimal sketch of that idea follows.
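To make that consistency concrete, here is a minimal sketch, assuming pandas and NumPy. The `preprocess` function, the `amount` column, and the sample values are hypothetical illustrations, not part of the course material; the point is simply that one shared transformation function serves both paths.

```python
import numpy as np
import pandas as pd


def preprocess(df: pd.DataFrame) -> pd.DataFrame:
    """One transformation function shared by the training and serving paths.

    Because batch (training) data and incoming (inference) requests both
    pass through this same code, features are prepared identically in both
    paths, which is what avoids training-serving skew.
    """
    out = df.copy()
    out["amount"] = out["amount"].fillna(0.0)    # cleaning: impute missing values
    out["log_amount"] = np.log1p(out["amount"])  # feature engineering
    return out


# Training path: a batch of historical records (hypothetical values).
train_features = preprocess(pd.DataFrame({"amount": [10.0, None, 250.0]}))

# Serving path: a single incoming request reuses the exact same function.
request_features = preprocess(pd.DataFrame([{"amount": 42.0}]))
```

In a production pipeline the shared logic would live in a library or a managed transformation step rather than a notebook, but the design choice is the same: never implement the preparation logic twice.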
After the data is ready, building and evaluating the model begins. I say begins because these steps may need a couple of iterations until the data scientist is happy with the results and ready to present them to the main stakeholders. Considerations include: the use case may need to be revisited because the learning algorithm isn't capable of identifying patterns in the data for that task; the data may need to be revisited because the model either needs more of it or needs additional aspects, new features, maybe derived from the existing data; some additional transformations may be needed to improve the model's quality; or a different algorithm may be perceived as a better choice. There are numerous possibilities. This iteration will happen as many times as needed until the model reaches the desired performance. After results are presented and stakeholders are satisfied with how the model is performing, it is time to plan for model deployment. This is when the following questions will likely arise: Which platform should host my model? Which service should I pick for model serving? How many nodes should the cluster have so it can scale and handle all the demand in a cost-effective manner?

Operationalizing and monitoring the model will allow for maintainability and avoid model decay, as we discussed. Having a strategy in place to detect concept or data drift will signal when the model should be retrained or when the data should be adjusted or augmented; a minimal sketch of such a check appears at the end of this section. Ensuring that your pipeline considers all the necessary tasks for health checks and alerts is the most effective way to avoid dissatisfaction from the users consuming your model's predictions.

Focusing on the development and deployment phases, we see that they have multiple steps. Data exploration, for example, involves data extraction, data analysis, and data preparation. Model building comprises training, evaluation, and validation. Deployment requires hosting the trained model, serving it, and having a prediction service ready to handle requests. Finally, monitoring allows for continuous evaluation and training based on the performance results at a given point.

The level of automation of these steps defines the maturity of the ML process, which reflects the velocity of training new models given new data or new implementations. Many ML professionals build and deploy their ML models manually; we call this maturity level 0. Other data scientists perform continuous training of their models by automating the ML pipeline; this is maturity level 1. Finally, the most mature approach completely automates and integrates the ML training, validation, and deployment phases; this is maturity level 2. You and your team have probably begun at, or are still at, maturity level 0, and that's nothing to worry about. Our goal here is to help you automate your processes and move up the automation ladder with the suite of tools and services available on Google Cloud. Stay tuned and have fun.
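As the promised closing illustration of the drift check mentioned earlier, here is a minimal sketch, assuming NumPy and SciPy. The two-sample Kolmogorov-Smirnov test, the `alpha` threshold, and the synthetic feature values are illustrative choices, not something prescribed by the course; real monitoring services compute statistics like this per feature on a schedule and route alerts.

```python
import numpy as np
from scipy.stats import ks_2samp


def feature_has_drifted(training_values, serving_values, alpha=0.01):
    """Return True when a feature's serving-time distribution looks
    different from its training-time distribution.

    A two-sample Kolmogorov-Smirnov test is one simple choice: a small
    p-value means the two samples are unlikely to share a distribution,
    a common signal that retraining or data adjustment is due.
    """
    _statistic, p_value = ks_2samp(training_values, serving_values)
    return p_value < alpha


rng = np.random.default_rng(seed=0)
baseline = rng.normal(loc=0.0, scale=1.0, size=5000)  # values seen at training time
recent = rng.normal(loc=0.5, scale=1.0, size=5000)    # shifted values seen in serving

if feature_has_drifted(baseline, recent):
    print("Drift detected: consider retraining or adjusting the data.")
else:
    print("No significant drift detected.")
```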