Okay. So you've just seen how to get started with creating a neural network in Keras that uses the image generator to automatically load and label your files based on their subdirectories. Now, let's see how we can use that to build a horses-or-humans classifier with a convolutional neural network. This is the first notebook you can try.

To start, you'll download the zip file containing the horses and humans data. Once that's done, you can unzip it to the temp directory on this virtual machine. The zip file contains two folders: one called filtered horses, and one called filtered humans. When it was unzipped, these were created for you. So we'll just point a couple of variables at them, and then we can explore the files by printing out some of the filenames. Now, these could be used to generate labels, but we won't need that if we use the Keras generator. If you wanted to use this data without one, the filenames do have the labels in them, of course.

We'll print out the number of images that we have to work with, and there's a little over 1,000 of them, and now we can display a few random images from the dataset. Here, we can see eight horses and eight humans. An interesting aspect of this dataset is that all of the images are computer-generated. I've rendered them to be as photo-real as possible, but they'll actually be used to classify real pictures of horses and people. And here are a few more images, just to show some of the diversity.

Let's start building the model. First, we'll import TensorFlow, and now we'll build the layers. We have quite a few convolutions here because our source images are quite large, at 300 by 300. Later, we can explore the impact of reducing their size and needing fewer convolutions. We can print the summary of the layers, and here we can see that by the time we reach the dense network, the convolutions are down to seven-by-seven. Okay. Next up, we'll compile our network.
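The subdirectory-based labeling that the Keras generator performs can be sketched in plain Python. This is a minimal illustration only, using a hypothetical stand-in directory with `horses/` and `humans/` subfolders and made-up filenames; in the notebook, the real work is done for you by Keras's `ImageDataGenerator.flow_from_directory`.

```python
import os
import tempfile

def labels_from_subdirectories(root):
    """Infer a label for each file from the name of its parent folder,
    mirroring what the Keras image generator does automatically."""
    labeled = {}
    for class_name in sorted(os.listdir(root)):
        class_dir = os.path.join(root, class_name)
        if not os.path.isdir(class_dir):
            continue
        for filename in os.listdir(class_dir):
            labeled[filename] = class_name
    return labeled

# Build a tiny stand-in dataset (hypothetical filenames) and label it.
with tempfile.TemporaryDirectory() as root:
    for class_name, files in {"horses": ["horse01.png"],
                              "humans": ["human01.png"]}.items():
        os.makedirs(os.path.join(root, class_name))
        for f in files:
            open(os.path.join(root, class_name, f), "w").close()
    print(labels_from_subdirectories(root))
    # → {'horse01.png': 'horses', 'human01.png': 'humans'}
```

This is also why the filenames alone would work if you skipped the generator: the class name is baked into the directory layout.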
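The claim that the feature maps shrink to seven-by-seven can be checked with a little arithmetic. Assuming the layer stack uses 3-by-3 convolutions with no padding, each followed by 2-by-2 max pooling, and that there are five such conv/pool pairs (one plausible reading of the summary for a 300-by-300 input), a quick sketch:

```python
def feature_map_size(size, num_blocks):
    """Trace the spatial size through conv/pool blocks:
    a 3x3 'valid' convolution trims 2 pixels from each dimension,
    and 2x2 max pooling then halves the result (floor division)."""
    for _ in range(num_blocks):
        size = (size - 2) // 2
    return size

print(feature_map_size(300, 5))  # → 7
```

The same function also shows why smaller inputs need fewer convolutions: a smaller starting size reaches a compact feature map in fewer blocks.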
It's using binary cross-entropy as the loss (binary because we're using just two classes), and the optimizer is RMSprop, which allows us to tweak the learning rate. Don't worry if you don't fully understand these yet; there are links out to content about them where you can learn more.
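Binary cross-entropy itself is simple enough to compute by hand. Here is a minimal sketch of the per-example formula that Keras applies under the name `binary_crossentropy`; the `eps` clipping value is my own choice for the illustration, not something from the notebook.

```python
import math

def binary_cross_entropy(y_true, y_pred, eps=1e-7):
    """Loss for one example: -(y*log(p) + (1-y)*log(1-p)).
    eps clips predictions away from 0 and 1 to avoid log(0)."""
    p = min(max(y_pred, eps), 1.0 - eps)
    return -(y_true * math.log(p) + (1 - y_true) * math.log(1 - p))

# A confident correct prediction gives a small loss,
# while a confident wrong one is penalized heavily.
print(round(binary_cross_entropy(1, 0.9), 4))  # → 0.1054
print(round(binary_cross_entropy(1, 0.1), 4))  # → 2.3026
```

In the notebook, this corresponds to compiling with `loss='binary_crossentropy'` and an RMSprop optimizer whose learning rate you set explicitly, which is exactly the knob the transcript says you can tweak.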