In this video, you'll learn about a way to compare different images, for example real versus fake images, in order to evaluate your GAN. Comparing images on fidelity and diversity can be challenging, because what exactly should you be comparing? First you'll explore what pixel distance is, a simple approach, but an insufficient one. Then you'll get to compare images using feature distance, which will give you greater reliability in your comparisons.

So probably the simplest approach to comparing images is looking at the differences between their pixels, and this is known as pixel distance. With the real image and the fake image here, you can subtract the pixel values, each a value from 0 to 255, in one image from the other and take their absolute difference. So you're subtracting each pixel from the corresponding one, getting each of their differences, and then you can sum those differences. And here the total difference is 0. So far this looks great, because these two images are identical and you get 0 out here. They look identical, and the absolute difference is 0, so they have no distance from each other. Perfect.

But pixel distance actually isn't that reliable. For example, imagine the image shifted one pixel to the left here in the fake image. This would potentially have a huge pixel distance from the original image, or from this real image here, even though the images are really similar, perhaps even imperceptibly different to your eye when looking at, say, a high-resolution image with millions of pixels. So if this image were super large and it only shifted one pixel, you probably wouldn't notice anything. However, based on pixel distance, the absolute difference would be huge: all of these values here would evaluate to 150, and if you sum this all together, you would get a huge pixel distance of 900. There's a small code sketch of this calculation below.

So one alternative is to look at the higher-level features of your images instead of the pixels. Does the dog have two eyes? Is the nose under the eyes? Is there fur? This higher-level semantic information would be less sensitive to small shifts. So instead of looking at pixels directly, you could condense or explain the image using its features, like having two eyes, droopy ears, and a nose, and then you can compare the images using these extracted features. Now I'm going to compare these images at what I call the feature level, so looking at their features. This is a neat trick for comparing higher-level semantic information between images, and it's pretty common outside of GANs as well.

By using these extracted features, your evaluation is less sensitive to small differences in the images. In this example, although the backgrounds of the images are different, they're both still identifiable as dogs, whereas with pixel distance, the fake image would be really far off from the real one. You'll see more details on how to get these features, as well as how to calculate the feature distance, in the following videos. What's interesting here is that you can see these two images are similar in some ways, both having two eyes. But this dog has two droopy ears, while this one only has one droopy ear and seems to have five legs, which seems kind of off on the fake side; and both of them do have a nose. So you can imagine a fake image that's further away from this real one, say one that only has one eye or two noses, or one that's closer to your real image, one that doesn't have any legs but does have two droopy ears.
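To make this concrete, here's a minimal NumPy sketch of pixel distance. The tiny 2x3 images and their values are just illustrative, chosen to reproduce the 0 and the 6 × 150 = 900 from the examples above:

```python
import numpy as np

def pixel_distance(real, fake):
    """Sum of absolute differences between pixel values (0 to 255)."""
    # Cast to a signed type first so the subtraction can't underflow uint8.
    return np.abs(real.astype(np.int32) - fake.astype(np.int32)).sum()

# Two identical 2x3 images: the pixel distance is 0.
real = np.array([[150,   0, 150],
                 [  0, 150,   0]], dtype=np.uint8)
fake = real.copy()
print(pixel_distance(real, fake))  # 0

# Now the same content shifted one pixel to the left, with a new column
# entering on the right. It looks almost identical, but every pixel now
# differs by 150, so the distance jumps to 6 * 150 = 900.
shifted = np.array([[  0, 150,   0],
                    [150,   0, 150]], dtype=np.uint8)
print(pixel_distance(real, shifted))  # 900
```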
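And although the following videos cover the actual feature extractor used for GAN evaluation, here's one common way to sketch the idea in PyTorch: take a classifier pretrained on ImageNet, chop off its final classification layer, and treat the remaining activations as the image's features. The choice of ResNet-18 and plain Euclidean distance here are illustrative assumptions, not the course's exact method:

```python
import torch
from torchvision import models, transforms

# A pretrained classifier with its final (classification) layer removed
# acts as a feature extractor: it maps an image to a high-level feature
# vector rather than to class scores. ResNet-18 is just a stand-in here.
resnet = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
extractor = torch.nn.Sequential(*list(resnet.children())[:-1])
extractor.eval()

# Standard ImageNet preprocessing for the pretrained weights.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def features(pil_image):
    """Map a PIL image to a flat, high-level feature vector."""
    with torch.no_grad():
        return extractor(preprocess(pil_image).unsqueeze(0)).flatten()

def feature_distance(real_image, fake_image):
    """Euclidean distance between the two images' feature vectors."""
    return torch.dist(features(real_image), features(fake_image)).item()
```

Because these features encode semantic content (eyes, ears, fur) rather than exact pixel positions, shifting an image by one pixel barely changes its feature vector, so the feature distance stays small where the pixel distance would blow up.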
And so that's how you're going to be getting the distance between these images: it's at this feature level. So in summary, when evaluating a GAN, comparing images with pixel distance is simple but too sensitive and unreliable. Feature distance is an alternative that works by extracting higher-level features instead of raw pixels to compare your images. As a result, it's more reliable, since it looks at higher-level semantic information.