Now that you've had a look at what autoencoders are, let's take a look at building a very simple one. We'll have a three-dimensional set of data plotted on a chart. We'll then use an autoencoder to learn a 2D latent representation of that data, which we'll plot in two dimensions. Then, from this data, we'll attempt to reconstruct something close to the original three-dimensional data using a decoder.

Let's consider the autoencoder that we saw earlier. It has 3D data as an input: X1, X2, and X3. When the data is encoded, it will lose a dimension and be encoded into a 2D representation. When it's decoded, it will be returned to a 3D representation. But of course, given that the dimensions were reduced in the previous layer, it will be an approximation that was learned from the training data. Let's explore this.

This image is a plot of our 3D data. If you look closely, it's roughly circular or horseshoe shaped. The values are randomly scattered on the z-axis. If we encode with two neurons, as we saw in the previous architecture, and then look at the encodings, we can see something like the image on the right. Despite the data being reduced to two dimensions, the rough horseshoe shape was maintained. When we decode from this, we should expect something that looks similar to the image on the left. But of course, information was lost when the dimensions were reduced. We'll get this, where you can see that the rough shape of the data was maintained, but the approximation lost a lot of the noise in the input data and regularized it somewhat. Still, it's a pretty good replica of the original, producing a 3D output from the 2D latent representation that was learned by the network.

Creating this autoencoder is really straightforward. We have two dense layers, one for the encoder and one for the decoder. We can create the encoder as a single dense layer with two neurons and an input shape of three.
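The layers just described might look something like the following sketch in Keras. The exact loss and optimizer aren't specified in the narration, so `"mse"` and `"adam"` here are assumptions:

```python
import numpy as np
from tensorflow import keras

# Encoder: a single dense layer with two neurons and a 3D input,
# mapping (X1, X2, X3) down to a 2D latent representation.
encoder = keras.Sequential([
    keras.Input(shape=(3,)),
    keras.layers.Dense(2),
])

# Decoder: a single dense layer with three neurons and a 2D input,
# mapping the codings back up to a 3D reconstruction.
decoder = keras.Sequential([
    keras.Input(shape=(2,)),
    keras.layers.Dense(3),
])

# The autoencoder is simply the encoder followed by the decoder.
autoencoder = keras.Sequential([encoder, decoder])
autoencoder.compile(loss="mse", optimizer="adam")  # assumed hyperparameters
```

Because the encoder and decoder are models in their own right, you can later call each of them separately to inspect the codings or decode arbitrary 2D points.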
As you can see in the diagram, the layer has two neurons, and the three values input to it are called X1, X2, and X3. Similarly, we'll create the decoder as a dense layer with three neurons; it's also our output layer. We specify it to have an input shape of two because it's designed to decode the 2D data. Later, we can call this with data to see if we can reconstruct what we had encoded. We then define our model as a sequential of these two layers and compile it.

Training the autoencoder is a little unusual. You're typically used to fitting X to Y values. While that's what you're doing here, you're actually fitting the training data to itself, as indicated by the fact that you use x_train twice in your model.fit. Now, why would that be? Recall the model. Ultimately, if you give it input values of X1, X2, and X3, the best reconstruction would be if Y1, Y2, and Y3 were equal to X1, X2, and X3. By setting the training x and training y to be the same values, you'll get that effect.

Now, if we want to get the 2D encoding of our 3D data, we simply call predict on the encoder. If you want to see what it's like for our training set, you can simply replace data with x_train or whatever you called your dataset. Similarly, if we want to get our 3D prediction, we can call predict on our decoder, passing it 2D data, just like our codings.

Now, that was a very simple autoencoder, but hopefully it demonstrates the overall concept, where the model can learn a latent representation of your data that can be used to reconstruct a replica of that data. This worked with very simple 3D data. But what happens if your data is more complex, like, for example, images? Well, in a similar manner to what you did when you first started learning about DNNs for image classification, the next step is to have multi-layered networks to help encode and decode more complex data. In the autoencoder world, these are referred to as stacked autoencoders, and you'll explore them soon.
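Putting the training and prediction steps together, an end-to-end sketch might look like the following. The horseshoe-shaped dataset generator, the epoch count, and the loss and optimizer are all illustrative assumptions, not taken from the Colab:

```python
import numpy as np
from tensorflow import keras

np.random.seed(42)

# Hypothetical 3D dataset: roughly horseshoe-shaped in the x-y plane,
# with values randomly scattered along the z-axis.
angles = np.random.rand(200) * np.pi
x_train = np.stack(
    [np.cos(angles) + 0.1 * np.random.randn(200),
     np.sin(angles) + 0.1 * np.random.randn(200),
     np.random.rand(200)],
    axis=1,
).astype("float32")

# Encoder (3D -> 2D) and decoder (2D -> 3D), each a single dense layer.
encoder = keras.Sequential([keras.Input(shape=(3,)), keras.layers.Dense(2)])
decoder = keras.Sequential([keras.Input(shape=(2,)), keras.layers.Dense(3)])
autoencoder = keras.Sequential([encoder, decoder])
autoencoder.compile(loss="mse", optimizer="adam")

# Fit the training data to itself: x_train is both the input and the target.
history = autoencoder.fit(x_train, x_train, epochs=200, verbose=0)

# 2D codings of the 3D data: call predict on the encoder alone.
codings = encoder.predict(x_train, verbose=0)

# 3D reconstructions: call predict on the decoder, passing it the 2D codings.
reconstructions = decoder.predict(codings, verbose=0)
```

Plotting `codings` should show the flattened horseshoe, and plotting `reconstructions` should show a smoothed approximation of the original 3D data.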
But first, check out the Colab for this simple example, and then play with tweaking the parameters, such as the function that generates the 3D data or the hyperparameters of the network, and see if you can discover any interesting and fun effects.