Next, I will talk about MLflow Models, a general-purpose model format that supports different production environments. The motivation for MLflow Models is very similar to the motivation for Projects. Models can be written using a wide variety of tools, but they can also be productionized and deployed in a wide variety of environments, and these environments can be different from the training environments. They include real-time serving tools such as Kubernetes or Amazon SageMaker, as well as tools for streaming and batch scoring like Apache Spark. Additionally, some organizations may want to stand up a model as a RESTful web server running on a pre-configured cloud instance.

For an organization that wants to deploy to both real-time and batch environments and is using several machine learning tools, it's tempting to write deployment pipelines from a particular tool to a particular environment. For example, a business might combine TensorFlow with Kubernetes, or a research organization might combine scikit-learn models with the batch scoring feature in Spark. What they find is that, as the number of tools they are using increases and as they begin to productionize in new ways, the resulting one-to-one mappings become very difficult to maintain. The solution to this problem of mapping ML frameworks to different deployment environments is a unified model abstraction called an MLflow Model, which can be produced using a variety of common ML tools and then deployed to a variety of machine learning environments, thus providing an intermediate layer and avoiding the one-to-one mapping problem.

What does an MLflow Model look like? Similar to a project, an MLflow Model is a directory structure. It contains a configuration file, and instead of containing training code, this time it contains a serialized model artifact. Like a project, it also contains a set of dependencies for reproducibility; this time we are talking about evaluation dependencies, in the form of a Conda environment. Additionally, MLflow provides model creation utilities for serializing models from a variety of popular frameworks in the MLflow format. Finally, MLflow introduces deployment APIs for productionizing and deploying an MLflow Model to a variety of services. These APIs are available in Python, Java, and R, as well as in CLI form.

Let's look at an example MLflow Model that could be produced using the convenience utility mlflow.tensorflow.log_model. By calling this function, we obtain a directory structure similar to a project. At the top level, we have the MLmodel configuration file. We also have, in this case, a serialized TensorFlow estimator containing a graph and a collection of variables. Focusing on that configuration file, we'll see that it contains some important metadata about the specific model: in this case, the run ID, which is a unique identifier for the training session that produced this model, and the time that it was created. Additionally, this configuration file contains an important field called flavors. A flavor is a language- and tool-specific representation of an MLflow Model. In this example, two flavors have been bundled with the model: the TensorFlow flavor and the Python function flavor. With the TensorFlow flavor, the MLflow Model can be loaded as a native TensorFlow object, for example a tf.estimator or a tf.Graph, and this makes it usable with any TensorFlow API for evaluation or continued training.
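To make this concrete, here is a minimal sketch of what that MLmodel configuration file might contain for a logged TensorFlow estimator. The run ID, timestamp, and paths are hypothetical, and the exact keys vary by MLflow version; the point is the flavors field bundling both a TensorFlow and a python_function representation.

```yaml
# MLmodel (illustrative sketch; field values are hypothetical)
artifact_path: model
run_id: 1a2b3c4d5e6f7890abcdef1234567890   # unique ID of the training session
utc_time_created: '2019-06-01 12:00:00.000000'
flavors:
  tensorflow:                 # native TensorFlow representation
    saved_model_dir: tfmodel
    meta_graph_tags:
    - serve
    signature_def_key: predict
  python_function:            # generic pyfunc representation
    loader_module: mlflow.tensorflow
    env: conda.yaml           # evaluation dependencies for reproducibility
```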
With the Python function flavor, MLflow introduces an additional layer of abstraction for loading and evaluating this model. Via the Python function flavor, an MLflow Model can be represented as a vanilla Python function accepting a Pandas DataFrame, which means that in order to load and evaluate this model, I no longer have to reason about it at the TensorFlow level.

To expand on how model flavors are used, let's walk through a hypothetical example where a user trains a model, logs it to the tracking service, and then some time later loads it and evaluates it. The first step is to use a framework like Keras to train a model. The next step is to persist it using the mlflow.keras.log_model utility. This produces the MLflow Model format with two flavors: the first is the Python function flavor, abbreviated pyfunc, which we discussed previously, and the second is a Keras-specific flavor. If I load and evaluate the pyfunc representation of an MLflow Model, the evaluation code is very simple: by invoking the mlflow.pyfunc load API, I represent this model as a vanilla Python function. Then, to evaluate it, I simply pass in input in the format of a Pandas DataFrame and receive a Pandas DataFrame as output. It's very simple code, just two lines, that completely abstracts away the details of Keras. Optionally, users can also load the Keras-specific flavor to obtain a native Keras object, in this case via mlflow.keras.load_model. This yields a Keras model object, which can be evaluated using the Keras-specific model.predict API, passing in yet another Keras-specific object. This highlights how MLflow flavors allow users to interact with their models at differing levels of abstraction to meet their particular use case.

I want to highlight that the pyfunc abstraction is extremely useful when we consider the number of model creation utilities that MLflow supports. For example, models trained in Keras, TensorFlow, Spark, scikit-learn, PyTorch, and several other frameworks are all automatically serialized with this Python function representation, which means that those same two lines of code that I used to load and evaluate that Keras model are compatible with any model in the MLflow ecosystem that contains this Python function flavor. For deployment engineers, this makes it very easy to write evaluation layers that are small and yet compatible with the wide set of models being developed within an organization.
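Here is a rough sketch of that load-and-evaluate workflow. It assumes a recent MLflow release, where the pyfunc loader is mlflow.pyfunc.load_model (earlier releases used load_pyfunc); the model URI, run ID, and feature names are hypothetical.

```python
import pandas as pd
import mlflow.pyfunc
import mlflow.keras

# Hypothetical URI of a model previously logged with mlflow.keras.log_model;
# runs:/<run_id>/<artifact_path> is MLflow's standard model URI scheme.
model_uri = "runs:/1a2b3c4d5e6f7890abcdef1234567890/model"
input_df = pd.DataFrame({"feature_1": [0.5], "feature_2": [1.2]})

# The two-line pyfunc evaluation: load the model as a generic Python
# function, then score a Pandas DataFrame without any Keras-specific code.
model = mlflow.pyfunc.load_model(model_uri)
predictions = model.predict(input_df)

# Optionally, load the Keras-specific flavor instead to obtain a native
# Keras model object, evaluated with the framework's own predict API
# (which takes a NumPy array rather than a DataFrame).
keras_model = mlflow.keras.load_model(model_uri)
keras_predictions = keras_model.predict(input_df.values)
```

Because any pyfunc-flavored model loads the same way, those first two lines would work unchanged for a scikit-learn, PyTorch, or Spark model logged with the corresponding MLflow utility.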