Let's take a look at the API hierarchy, which consists of a spectrum from low-level APIs that target hardware, all the way up to very abstract, high-level APIs for super-powerful tasks, like creating a 128-layer neural network with just a few lines of code written with the Keras API. Let's start at the bottom. The lowest layer of abstraction is the layer that's implemented to target the different hardware platforms, and unless your company makes hardware, it's unlikely that you'll do much at this level, but it does exist. The next level is the TensorFlow C++ API; this is how you can write a custom TensorFlow operation. You would implement the function that you want in C++ and register it as a TensorFlow operation. You can find more details in the TensorFlow documentation on extending an op; I'll provide the link. TensorFlow will give you a Python wrapper that you can use just like you would use an existing function. Assuming you're not an ML researcher, you don't normally have to do this. But if you ever needed to implement your own custom op, you would do it in C++, and it's not too hard; TensorFlow is extensible in that way. Now, the core Python API is what contains much of the numeric processing code: add, subtract, divide, matrix multiply, and so on. Creating variables and tensors, getting the right shape or dimension of your tensors and vectors, all of that is contained in the core Python API. Then there are sets of Python modules that have high-level representations of useful neural network components. Let's say, for example, that you're interested in creating a new layer of hidden neurons with a ReLU activation function. You can do that just by using tf.layers. If you want to compute the RMSE, or root mean squared error, as the data comes in, you can use tf.metrics. To compute cross-entropy with logits, for example, which is a common loss metric in classification problems, you can use tf.losses.
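As a concrete sketch of these layers in code: the snippet below is a minimal, illustrative example. The `tf.keras.layers`, `tf.keras.metrics`, and `tf.keras.losses` names are the TensorFlow 2 equivalents of the `tf.layers`, `tf.metrics`, and `tf.losses` modules mentioned above, and the specific values are placeholders, not from the lecture.

```python
import tensorflow as tf

# Core Python API: numeric processing on tensors.
a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
b = tf.add(a, a)       # element-wise addition
c = tf.matmul(a, a)    # matrix multiplication
shape = b.shape        # inspecting the shape of a tensor: (2, 2)

# Higher-level modules: useful neural network components.
# A hidden layer of neurons with a ReLU activation function.
hidden = tf.keras.layers.Dense(units=32, activation="relu")

# Root mean squared error, accumulated as the data comes in.
rmse = tf.keras.metrics.RootMeanSquaredError()
rmse.update_state(y_true=[0.0, 1.0], y_pred=[0.0, 0.0])

# Cross-entropy computed from logits, a common loss in classification.
loss_fn = tf.keras.losses.BinaryCrossentropy(from_logits=True)
```

The metric object is stateful: each call to `update_state` folds new batches into the running result, which is exactly the "as the data comes in" behavior described above.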
These modules provide components that are useful in building custom neural network models. Why are custom neural network models emphasized? Because you often don't need a custom neural network model. Many times you're quite happy to go with a relatively standard way of training, evaluating, and serving models. You don't need to customize the way you train; you're going to use one of the family of gradient-descent-based optimizers, and you're going to backpropagate the weights and do this iteratively. In that case, don't write the low-level session loop; just use an estimator or a high-level API such as Keras. Speaking of which, the high-level APIs allow you to easily do distributed training, data pre-processing, the model definition, compilation, and overall training. They know how to evaluate, how to create a checkpoint, how to save a model, how to set it up for TensorFlow Serving, and more. And they come with everything done in a sensible way that'll fit most of your ML models in production. Now, if you see example TensorFlow code on the internet that does not use the Estimator API, ignore that code and walk away; it's not worth it. You would have to write a lot of code to do device placement, memory management, and distribution; let the high-level API handle all of that for you. So those are the TensorFlow levels of abstraction. On the side here, Cloud AI Platform is orthogonal; it cuts across this hierarchy, meaning it spans everything from the low-level to the high-level APIs. Regardless of the abstraction level at which you're writing your TensorFlow code, using Cloud AI Platform, or CAIP, gives you a managed service; it's fully hosted TensorFlow. So you can run TensorFlow on the cloud, on a cluster of machines, without having to install any software or manage any servers. For the rest of this module, we'll be largely working with the top three APIs listed here.
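To give a sense of what the high-level API takes care of, here is a minimal Keras sketch; the architecture, optimizer choice, and random data are illustrative placeholders, not from the lecture.

```python
import numpy as np
import tensorflow as tf

# Illustrative placeholder data: 100 examples with 10 features each.
x = np.random.rand(100, 10).astype("float32")
y = np.random.rand(100, 1).astype("float32")

# Model definition and compilation with a gradient-descent-based optimizer.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(10,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="sgd", loss="mse")

# Training: no low-level session loop, device placement, or memory
# management to write; Keras handles the iteration for you.
history = model.fit(x, y, epochs=1, verbose=0)
```

From here, the same `model` object can be evaluated, checkpointed, saved, and exported for TensorFlow Serving, all through the same high-level API.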
But before we start writing any API code and showing you the syntax for building machine learning models, we first really need to understand the pieces of data that we're working with. It's much like in regular computer science classes, where you start with variables and their definitions before moving on to advanced topics like classes, methods, and functions. That's exactly how we're going to start learning the TensorFlow components, next.