Okay, so this workbook is hosted on the TensorFlow site at this URL. You can read through the details of the lab on this page or click the Colab button to try out the code for yourself. This will launch Colab with all of the code you need to try out the data set and feature columns. The data set that you'll use in this notebook is not in TFDS. You'll have to download the data and handle it manually, but you can still use feature columns with it. When you inspect the data set, you'll see that it contains data relating to heart disease from the Cleveland Clinic Foundation. The feature types are shown here, so we can see that we have numerical and categorical data as well as the classification, or label.

So now let's run the code. We'll start by installing the dependencies and importing the necessary APIs. In this case, we'll use Pandas to create the data frame. We can inspect a few rows of the data to see how age and sex are represented, for example. Now, let's get a training, validation, and test set from the data and print the number of examples in each set. We'll then create an input pipeline for tf.data by converting the data frame to a data set. We'll do this with a helper method called df_to_dataset, which will load the data in batches. To inspect the input pipeline, we can now go through the data set and take a look at the batches of features and labels in it. We can see a list of all of the features as well as a batch of ages, 35, 40, 54, etc., and their targets. If you recall from the data, the age column is numeric, containing the patient's age, and the target is the classification.

Now we can explore several different types of feature column. We'll start by taking an example batch from the training data set and creating a utility method that, when passed a feature column, will print out its details. To look at a numeric column for age, we can run it through this utility; it will output a tensor containing the ages of the next five records. Bucketized columns then allow us to put our records into buckets based on their age. So, for example, if the first bucket is going to be 18 to 25 years old, and then 25 to 30, etc., we can define that using the boundaries. When I run this cell and print the output, we'll see a one-hot encoded version of the ages using the buckets. For example, the first age is in the 65-plus bucket, and if I scroll back, we can see that its value was 65. Similarly, the second record, for a 45-year-old, shows up in the seventh bucket, which is where we would expect it to be.

Thal is a blood condition, which is classified as fixed, normal, or reversible, and these are strings in the database. So we'll convert them to a one-hot encoded array and render that out, and now you can see that the records are one-hot encoded instead of being those strings. Now that the thal values are one-hot encoded, we can create an embedding column for them. We'll create that with eight dimensions, which is massive overkill for just three classes, but it is just a demo. When we print it out, we'll see the values along the eight axes defining the embedding vector for each record. Let's now look at hashed feature columns. In this case, we're specifying that our hash bucket has a thousand elements, and as there are only three thal values, that's a little silly. So when we run the code, we'll see that the output has the values encoded into a hash and then bucketized across the thousand buckets.
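Pulling those steps together, here is a minimal sketch of the feature columns discussed so far, assuming TensorFlow 2.x and the tf.feature_column API used in the notebook. The column names ('age', 'thal', 'target'), the df_to_dataset helper, and the `train` DataFrame follow the walkthrough above; the exact notebook code may differ slightly.

```python
import tensorflow as tf

# `train` is assumed here to be the pandas DataFrame produced by the
# train/validation/test split described above.

def df_to_dataset(dataframe, shuffle=True, batch_size=32):
    # Convert a DataFrame into a batched tf.data.Dataset of (features, label) pairs.
    dataframe = dataframe.copy()
    labels = dataframe.pop('target')
    ds = tf.data.Dataset.from_tensor_slices((dict(dataframe), labels))
    if shuffle:
        ds = ds.shuffle(buffer_size=len(dataframe))
    return ds.batch(batch_size)

# Take one small example batch and build a utility that applies a feature
# column to it and prints the result.
example_batch = next(iter(df_to_dataset(train, batch_size=5)))[0]

def demo(feature_column):
    feature_layer = tf.keras.layers.DenseFeatures(feature_column)
    print(feature_layer(example_batch).numpy())

# Numeric column: passes the raw ages through unchanged.
age = tf.feature_column.numeric_column('age')
demo(age)

# Bucketized column: one-hot encodes each age into a range defined by the boundaries.
age_buckets = tf.feature_column.bucketized_column(
    age, boundaries=[18, 25, 30, 35, 40, 45, 50, 55, 60, 65])
demo(age_buckets)

# Categorical vocabulary column for thal, one-hot encoded via an indicator column.
thal = tf.feature_column.categorical_column_with_vocabulary_list(
    'thal', ['fixed', 'normal', 'reversible'])
demo(tf.feature_column.indicator_column(thal))

# Embedding column: maps each thal value to an 8-dimensional vector.
demo(tf.feature_column.embedding_column(thal, dimension=8))

# Hashed feature column: hashes the thal strings into 1,000 buckets.
thal_hashed = tf.feature_column.categorical_column_with_hash_bucket(
    'thal', hash_bucket_size=1000)
demo(tf.feature_column.indicator_column(thal_hashed))
```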
When we print each array, we'll only see the first three and the last three values, which means, of course, that we're not likely to see our entry. Try changing that thousand to something smaller, and then you'll see the impact. Similarly, we can create a feature cross by combining age and thal using a hash. This hash, as before, is much too big at a thousand buckets, so when we print, we won't see much. Again, when playing with it for yourself, try much smaller values so you can see the impact. The rest of the notebook is a good example of taking features, encoding them using feature columns, and training a neural network, as sketched below. So try it out for yourself.
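Continuing the sketch above, the feature cross and a small Keras model wired up through a DenseFeatures layer might look roughly like this. The layer sizes and epoch count are illustrative rather than taken from the notebook, and `df_to_dataset`, `train`, and `val` are assumed from the earlier sketch.

```python
import tensorflow as tf

# Reuses `df_to_dataset` and the `train` / `val` DataFrames from the sketch
# above; the columns are repeated here so this block reads on its own.
age = tf.feature_column.numeric_column('age')
age_buckets = tf.feature_column.bucketized_column(
    age, boundaries=[18, 25, 30, 35, 40, 45, 50, 55, 60, 65])
thal = tf.feature_column.categorical_column_with_vocabulary_list(
    'thal', ['fixed', 'normal', 'reversible'])

# Feature cross of age buckets and thal, hashed into 1,000 buckets.
crossed = tf.feature_column.crossed_column([age_buckets, thal],
                                           hash_bucket_size=1000)

# A small illustrative set of columns; the notebook uses more of the features.
feature_columns = [
    age,
    age_buckets,
    tf.feature_column.indicator_column(thal),
    tf.feature_column.embedding_column(thal, dimension=8),
    tf.feature_column.indicator_column(crossed),
]

# Wire the columns into a model via a DenseFeatures layer and train it.
model = tf.keras.Sequential([
    tf.keras.layers.DenseFeatures(feature_columns),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(1, activation='sigmoid'),
])
model.compile(optimizer='adam',
              loss='binary_crossentropy',
              metrics=['accuracy'])

train_ds = df_to_dataset(train)
val_ds = df_to_dataset(val, shuffle=False)
model.fit(train_ds, validation_data=val_ds, epochs=5)
```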