Another way to control the outputs produced by GANs is to do it after the model has been trained, or more generally, controllable generation. While conditional generation leverages labels during training, this video will focus on controlling which features you want in the output examples, even after the model has been trained. You'll learn about controlling specific features, and how that compares with the conditional generation you learned about in the previous video.

Controllable generation allows you to control some of the features you want in your output examples. For instance, with a GAN that performs face generation, you could control the age of the person in the image, whether they have sunglasses, the direction they're looking in the picture, or their perceived gender. You do this by actually tweaking the input noise vector z that's fed to the generator after you train the model. For example, with one input noise vector z, maybe you get this picture of a woman with red hair. Let's say you tweak one of the features in that input noise vector, and maybe now you get the same woman, but with blue hair this time. Maybe that's because this first element here represents hair color. That would be super cool. You'll learn exactly how to tweak z in the following lecture.

But first, to get a better sense of controllable generation, I'll make a quick comparison with conditional generation. I'll use these terms here because that's typically what researchers mean if you check out the papers, though it's not as clearly delineated; sometimes controllable generation can definitely include conditional generation, because you're still controlling the GAN in some way. With controllable generation, you're able to get examples with the features you want, like faces of people who look older, with green hair and glasses. With conditional generation, you get examples from the class you want, like a human or a bird. Of course, a class could also be "a person with sunglasses on." So far, they're a bit similar. But controllable generation typically means you want to control how much or how little of a feature you want; these are typically more continuous features, like age. Conditional generation, on the other hand, allows you to specify which class you want, a very different type of thing. For this, you need to have a labeled dataset, and it's typically implemented during training. You probably don't want to label every hair-length value, so controllable generation will do that for you: it's more about finding the directions of the features you want, and that can happen after training. Of course, with controllable generation, you will sometimes also see it happening during training as well, to help nudge the model in a direction where it's easier to control.

Finally, as you just learned, controllable generation works by tweaking that input noise vector z that's fed into the generator, while with conditional generation, you have to pass in additional information representing the class you want, appended to that noise vector.

In summary, controllable generation lets you control the features in the output from your GAN, and in contrast with conditional generation, there's no need for a labeled training dataset. To change the output in some way with controllable generation, the input noise vector z is tweaked. In the following videos, I'll dig deeper into how exactly that works.
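To make that contrast concrete, here's a minimal sketch of the two mechanisms in PyTorch. Everything in it is illustrative: the generators are untrained single-layer stand-ins rather than real trained GANs, and `hair_color_direction` is a hypothetical latent direction that, in practice, you would have to discover after training using the techniques covered in the following lectures.

```python
import torch
import torch.nn as nn

z_dim, n_classes, img_dim = 64, 10, 784

# Stand-ins for trained generators (a real GAN generator would be a
# deep network trained on image data, not a single linear layer).
gen = nn.Linear(z_dim, img_dim)                    # unconditional generator
cond_gen = nn.Linear(z_dim + n_classes, img_dim)   # conditional generator

# --- Controllable generation: tweak z after training ---
z = torch.randn(1, z_dim)                 # original noise vector
image_a = gen(z)                          # e.g. a woman with red hair

# Hypothetical latent direction corresponding to hair color,
# assumed to have been found post-training.
hair_color_direction = torch.randn(1, z_dim)
z_tweaked = z + 2.0 * hair_color_direction
image_b = gen(z_tweaked)                  # e.g. same woman, blue hair

# --- Conditional generation: append class info to the noise vector ---
class_label = torch.zeros(1, n_classes)
class_label[0, 3] = 1.0                   # one-hot vector for the desired class
z_and_label = torch.cat([z, class_label], dim=1)
image_c = cond_gen(z_and_label)           # a sample from the chosen class
```

Note the structural difference this sketch highlights: controllable generation leaves the generator and its input size alone and only moves z around the latent space, while conditional generation changes the generator's input dimension to make room for the class label, which is why it has to be built in during training.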