In this video, you'll learn about some alternatives to GANs. In the previous video, you saw some of the disadvantages of using GANs; in this one, you'll see how other generative models address those downsides but have different trade-offs of their own. Specifically, I'll discuss another popular model called the VAE, which you might already be a little bit familiar with from previous weeks, and then other less popular but still very cool alternatives.

A generative model can be any machine learning model that tries to model P of X given Y: the probability of the features X given a class Y. Or, if it's really just modeling one class, it's probably modeling P of X of that data. It will often take in some kind of noise for stochasticity, so that you don't generate the same thing each time; that noise just adds variation to its outputs. So it takes in noise, and possibly a class Y too, and then outputs features or objects X that represent that class. Generative models include much more than just GANs, so let's dive right in.

Variational autoencoders, or VAEs, are another large family of generative models. As a reminder from week one of the specialization, VAEs work with two different models: an encoder and a decoder. A VAE learns by feeding real images into the encoder and finding a good way of representing each image in a latent space, perhaps here. Then it takes that latent representation, or a representation close to it, and reconstructs the realistic image the encoder saw before using the decoder. What I just described is largely the autoencoder part of the VAE; the variational part is a little bit more complex, but it enables training the model in a way that maximizes the likelihood of generating the real data, or images like the real data that go into the encoder. At a high level, VAEs try to minimize the divergence between the generated and the real distributions. This is often regarded as a slightly easier optimization, resulting in more stable training, though it's also contended to produce blurrier, lower-fidelity results. After you train the VAE, you actually lop off the encoder, just like you don't need the discriminator in a GAN, and you use the decoder much like a generator: you sample points from your latent space, and you're able to generate an output image.
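To make that concrete, here's a minimal VAE sketch in PyTorch. Everything specific in it is an illustrative assumption rather than anything from the video: flattened 784-pixel images, a 32-dimensional latent space, and fully connected layers. It shows the encoder producing a Gaussian over the latent space, the reparameterization trick, the two-part loss (reconstruction plus a KL divergence pulling the latents toward the prior), and how generation afterward uses only the decoder, just like sampling from a GAN generator.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    def __init__(self, x_dim=784, h_dim=256, z_dim=32):  # illustrative sizes
        super().__init__()
        # Encoder: maps an image to the mean and log-variance of a Gaussian in latent space
        self.enc = nn.Linear(x_dim, h_dim)
        self.mu = nn.Linear(h_dim, z_dim)
        self.logvar = nn.Linear(h_dim, z_dim)
        # Decoder: maps a latent point back to pixel space
        self.dec1 = nn.Linear(z_dim, h_dim)
        self.dec2 = nn.Linear(h_dim, x_dim)

    def encode(self, x):
        h = F.relu(self.enc(x))
        return self.mu(h), self.logvar(h)

    def decode(self, z):
        return torch.sigmoid(self.dec2(F.relu(self.dec1(z))))

    def forward(self, x):
        mu, logvar = self.encode(x)
        # Reparameterization trick: z = mu + sigma * eps keeps gradients flowing
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        return self.decode(z), mu, logvar

def vae_loss(x, x_hat, mu, logvar):
    # Reconstruction term plus KL divergence to the standard normal prior
    recon = F.binary_cross_entropy(x_hat, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl

# After training, "lop off" the encoder: sample from the prior and decode.
model = VAE()
z = torch.randn(16, 32)
samples = model.decode(z)  # 16 generated images (untrained here, so just noise)
```

Notice how the last two lines mirror what you do with a trained GAN: sample noise vectors, push them through the decoder, and you have generated images.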
So if you remember the pros and cons of GANs, VAEs are more or less flipped. VAEs have typically been seen as producing lower-quality results than GANs, or at least they're not your first choice for producing realistic results. They are definitely behind GANs there and require a bit more engineering and changes to catch up, but they do have density estimation. They can also invert easily, because they have that encoder to find a latent space representation. It might not be a perfect inversion, meaning exactly one-to-one, but it will get you a decent noise vector. Training is also more stable and reliable, though arguably fairly slow. But the GAN camp will say, well, all of that is great, but it's no use if you can't generate good samples, which is exactly what GANs have checked off here. As a result, a lot of work has been put into making VAE results better, and here's an example: a very recent VAE called VQ-VAE-2 on the left and BigGAN on the right. You can see the quality of BigGAN is slightly higher, but VAEs are beginning to have better results as well, particularly in diversity, as you see in these generated fish. Also, this VAE-esque model, VQ-VAE-2, borrows many concepts from VAEs, but it actually isn't considered a pure VAE.

In fact, it relies on an autoregressive network component too. So what is an autoregressive model? It's a model that looks at previous pixels to determine the next pixel. Maybe it sees a few pixels here, and then it's able to determine the rest of the pixels for that image. This is another type of generative model: it goes pixel by pixel based on the previous pixels, so you can think of it as conditioning on the previous pixels to predict the next one. It can't see into future pixels; it can only look at past pixels. If you're familiar with RNNs for language and speech models, it's very similar to that concept, where you also can't see into the future. And as you can probably tell, this model is not fully unsupervised, because it depends on those previous pixels. So it is a supervised technique in that sense, meaning it will require some anchor pixels to start generating; it can't generate from noise alone. You'll find a small code sketch of this pixel-by-pixel setup after the summary below.

Another type of generative model is a flow model, and these are hard and slow to train. But they're based on a very cool new idea: a likelihood defined through an invertible mapping between the noise and the generated image. So, by construction, the model is invertible. At a high level, starting from an initial simple distribution, it finds a sequence of invertible transformations to create more complex distributions. So assume it starts with something very simple; through these invertible mappings, represented by these arrows, it reaches more and more complex distributions, and ultimately it's able to model faces. This is an example flow model called Glow. There's also a sketch of one of these invertible building blocks after the summary.

Finally, you can also combine any of these models or ideas to form hybrid architectures, to try to reap the benefits of two or more worlds. That's just like the VAE-autoregressive model you saw previously, VQ-VAE-2, and there are also plenty of GAN-VAE models that apply concepts from both. You'll see some advanced ones mentioned in course three as well.

So in summary, VAEs have more or less the opposite pros and cons list to GANs. Notably, their results are generally blurrier, though that's arguable, but they can estimate density, invert easily, and train stably. However, GANs have improved on their disadvantages in many ways: training has stabilized greatly, and approximate inversion, which is what you need for editing an image, has been reduced to an engineering problem of finding your Z vector through another model. VAEs have also come a long way toward better results, so all in all, when it comes to applications, I'd say GANs are still more useful when realistic generation is the main goal. As you saw in this video, other alternative generative models include autoregressive models, flow models, and also hybrid models of all of these. Now that you've learned about other generative models, you'll explore some issues present across all of machine learning that GANs and these other models are certainly not immune to.
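As promised, here's the autoregressive idea as a small PyTorch sketch. The specifics are illustrative assumptions, not from the video: tiny 8x8 binary images, two masked convolutional layers, and a single simplified mask type (a real PixelCNN uses a different mask after the first layer). The mask is what enforces the "no peeking at future pixels" rule, and generation is a sequential loop that can start from any fixed anchor pixels.

```python
import torch
import torch.nn as nn

class MaskedConv2d(nn.Conv2d):
    """Conv layer masked so each pixel only sees pixels above and to its left."""
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        _, _, kh, kw = self.weight.shape
        mask = torch.ones_like(self.weight)
        mask[:, :, kh // 2, kw // 2:] = 0  # block the current pixel and everything to its right
        mask[:, :, kh // 2 + 1:, :] = 0    # block all rows below
        self.register_buffer("mask", mask)

    def forward(self, x):
        self.weight.data *= self.mask  # zero out connections to "future" pixels
        return super().forward(x)

# A toy PixelCNN-style model over binary images: predicts each pixel from past pixels.
model = nn.Sequential(
    MaskedConv2d(1, 32, kernel_size=7, padding=3), nn.ReLU(),
    MaskedConv2d(32, 1, kernel_size=7, padding=3), nn.Sigmoid(),
)

# Generation is sequential: keep any anchor pixels fixed, then sample the rest one at a time.
img = torch.zeros(1, 1, 8, 8)
with torch.no_grad():
    for i in range(8):
        for j in range(8):
            p = model(img)[0, 0, i, j]           # probability of this pixel given the past
            img[0, 0, i, j] = torch.bernoulli(p)  # sample it and move on
```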
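And here's a sketch of one invertible building block of the kind flow models like Glow and RealNVP stack together, again under illustrative assumptions (a 4-dimensional toy input rather than an image). Half of the variables pass through unchanged and parameterize a scale-and-shift of the other half, which makes the transformation exactly invertible and gives the log-determinant term that likelihood training needs.

```python
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    """One invertible transformation: half the dimensions pass through unchanged
    and parameterize a scale-and-shift of the other half, so inversion is exact."""
    def __init__(self, dim=4, hidden=64):
        super().__init__()
        self.half = dim // 2
        self.net = nn.Sequential(
            nn.Linear(self.half, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * (dim - self.half)),
        )

    def forward(self, x):
        x1, x2 = x[:, :self.half], x[:, self.half:]
        log_s, t = self.net(x1).chunk(2, dim=1)
        y2 = x2 * torch.exp(log_s) + t  # transform the second half
        log_det = log_s.sum(dim=1)      # log|det Jacobian|, used in the likelihood
        return torch.cat([x1, y2], dim=1), log_det

    def inverse(self, y):
        y1, y2 = y[:, :self.half], y[:, self.half:]
        log_s, t = self.net(y1).chunk(2, dim=1)
        x2 = (y2 - t) * torch.exp(-log_s)  # exact inverse of the forward transform
        return torch.cat([y1, x2], dim=1)

# Stack many couplings (swapping halves between layers) to build a complex distribution;
# sampling runs the inverse direction starting from simple Gaussian noise.
layer = AffineCoupling()
z = torch.randn(8, 4)
x = layer.inverse(z)          # noise -> data direction
z_back, _ = layer.forward(x)  # data -> noise; recovers z exactly
```

The key design choice is that the network inside the coupling never needs to be inverted itself; only the simple scale-and-shift does, which is why the whole mapping stays invertible no matter how complex the network is.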