One-Lipschitz continuity, or 1-L continuity, of the critic neural network in your Wasserstein GAN ensures that the Wasserstein loss is valid. You already saw what this means, and in this video I'll show you how to enforce this condition when training your critic. First, I'll introduce you to two different methods to enforce 1-L continuity on the critic, namely weight clipping and gradient penalty. Then I'll discuss the advantages of gradient penalty over weight clipping.

First, recall that the critic being 1-L continuous means that the norm of its gradient is at most one at every single point of the function. The upside-down triangle is the sign for the gradient, f is the function, here the critic, and x is the image, so the condition just says that the norm of that gradient is less than or equal to one: ||∇f(x)||₂ ≤ 1. Using the L2 norm is very common here, which just means the Euclidean distance, often thought of as the hypotenuse of a triangle: it's the straight-line distance between two points, not the distance along each axis. Intuitively, in two dimensions this means the slope is less than or equal to one, so at every single point, the function remains within those green triangles.

Two common ways of ensuring this condition are weight clipping and gradient penalty. With weight clipping, the weights of the critic's neural network are forced to take values within a fixed interval. After you update the weights during gradient descent, you clip any weights outside of the desired interval: weights beyond that interval, either too high or too low, are set to the maximum or minimum value allowed. That's clipping the weights. This is one way of enforcing 1-L continuity, but it has a major downside. Forcing the weights of the critic into a limited range of values could limit the critic's ability to learn and ultimately hurt how the GAN performs, because if the critic's weights can't take on many different values, it might not be able to improve easily or find a good local optimum to settle in. So besides enforcing 1-L continuity, weight clipping might also limit the critic too much, or, on the other hand, it might limit the critic too little if you don't clip the weights enough. There's a lot of hyperparameter tuning involved.
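To make this concrete, here's a minimal sketch of weight clipping, assuming a PyTorch setup; the helper name and the clip value 0.01 are illustrative assumptions, not values from this video:

```python
import torch

def clip_critic_weights(critic: torch.nn.Module, clip_value: float = 0.01) -> None:
    """Force every critic weight into [-clip_value, clip_value] after a gradient step."""
    with torch.no_grad():
        for param in critic.parameters():
            # Weights above or below the interval are set to the max or min allowed.
            param.clamp_(-clip_value, clip_value)

# Usage inside a critic training step (sketch):
#   crit_opt.step()               # gradient descent update on the critic
#   clip_critic_weights(critic)   # then clip any weights outside the interval
```

Notice how the clip value itself is the hyperparameter in question: too small and the critic is over-constrained, too large and 1-L continuity is barely enforced.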
The gradient penalty, which is the other method, is a much softer way to enforce the critic to be one-Lipschitz continuous. With the gradient penalty, all you need to do is add a regularization term to your loss function. What this regularization term does to your W-loss is penalize the critic whenever its gradient norm is higher than one. I'll dive into what that means. The regularization term is "reg" here, which I'll unfold shortly, and lambda is just a hyperparameter for how much to weight this regularization term against the main loss function. Checking the critic's gradient at every possible point of the feature space is virtually impossible, or at least not practical. Instead, with gradient penalty, during implementation all you do is sample some points by interpolating between real and fake examples. For instance, you could take an image from the set of reals and an image from the set of fakes, grab one of each, and get an intermediate image by interpolating those two images using a random number epsilon.

Epsilon here could be a weight of 0.3, in which case one minus epsilon would be 0.7. That would get you a random interpolated image that's in between these two images. I'll call this random interpolated image x-hat. It's on x-hat that you want the critic's gradient norm to be at most one, and this is exactly what's happening here: the critic looks at x-hat, you get the gradient of the critic's prediction on x-hat, and then you take the norm of that gradient and penalize its squared distance from one, (||∇f(x̂)||₂ − 1)². Here it's simpler to say, "Hey, can I get the norm of the gradient to be one?" as opposed to at most one, because this term in fact penalizes any value away from one. The square is just saying, "I want the squared distance, as opposed to, say, the absolute value, penalizing values much more when they're further away from one." Again, x-hat is an intermediate image, weighted between a real and a fake using epsilon. With this method, you're not strictly enforcing 1-L continuity, but you're encouraging it, and this has proven to work well, and much better than weight clipping. I'll show a code sketch of this at the end of the video.

The complete loss function that you use for training with W-loss and gradient penalty now has these two components. First, you approximate the Earth Mover's distance with the main W-loss component; this makes GANs less prone to mode collapse and vanishing gradients. The second part of this loss function is a regularization term that meets the condition the critic needs to satisfy in order to make this main term valid. Of course, this is a soft constraint on making the critic one-Lipschitz continuous, but it has been shown to be very effective. It keeps the gradient norm of the critic close to one almost everywhere; "almost everywhere" is actually the technical term here.

Wrapping up, in this video I presented you with two ways of enforcing the critic to be one-Lipschitz continuous, or 1-L continuous: weight clipping as one and gradient penalty as the other. Weight clipping might be problematic because you're either strongly limiting the way the critic learns during training or being too soft, so there's a bit of hyperparameter tuning involved. Gradient penalty, on the other hand, is a softer way to enforce one-Lipschitz continuity. While it doesn't strictly enforce the critic's gradient norm to be at most one at every point, it works better in practice than weight clipping.
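As promised, here's a minimal sketch of the gradient penalty and the complete critic loss, again assuming a PyTorch setup with 4-D image batches; the function name and the lambda value of 10 are illustrative assumptions, not prescriptions from this video:

```python
import torch

def gradient_penalty(critic, real, fake, device="cpu"):
    """Penalize the squared distance between the critic's gradient norm and 1,
    evaluated at images interpolated between reals and fakes."""
    batch_size = real.size(0)
    # One random epsilon per image (e.g. 0.3), broadcast over channels and pixels.
    epsilon = torch.rand(batch_size, 1, 1, 1, device=device)
    # x_hat = epsilon * real + (1 - epsilon) * fake: the intermediate images.
    x_hat = epsilon * real.detach() + (1 - epsilon) * fake.detach()
    x_hat.requires_grad_(True)

    mixed_scores = critic(x_hat)  # the critic's predictions on x-hat

    # Gradient of the critic's predictions with respect to x-hat.
    gradient = torch.autograd.grad(
        outputs=mixed_scores,
        inputs=x_hat,
        grad_outputs=torch.ones_like(mixed_scores),
        create_graph=True,   # keep the graph so the penalty itself is differentiable
        retain_graph=True,
    )[0]

    # L2 norm of each image's gradient, then the squared distance from 1.
    gradient = gradient.view(batch_size, -1)
    gradient_norm = gradient.norm(2, dim=1)
    return torch.mean((gradient_norm - 1) ** 2)

# Complete critic loss (sketch): the main W-loss term approximating the
# Earth Mover's distance, plus lambda times the regularization term.
# lam = 10  # illustrative hyperparameter
# crit_loss = (torch.mean(critic(fake.detach())) - torch.mean(critic(real))
#              + lam * gradient_penalty(critic, real, fake, device))
```

The `create_graph=True` argument is what lets the penalty be part of the loss: the gradient norm itself must be differentiated when the critic's weights are updated.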