Let me show you one particular application of local linear filters: edge detection, which extracts the edges of objects from a binary or grayscale image. It is a really important operation that will be used in many computer vision algorithms later on. In this case, the filters implement derivatives of the input function. But let me remind you that a linear convolutional filter responds strongly to the parts of the input image that are more or less close to its kernel. Consider, for example, the output of the filter with the left kernel. You see that the output of this filter is maximal when there is a vertical edge in the neighborhood. The weight at the central pixel is zero, so we analyze only the pixels in the left part and the right part of the neighborhood, and the highest possible output is reached when all the left pixels are equal to 255 (white) and all the right pixels are equal to 0 (black), or vice versa — that is, when there is a vertical edge. So the output of this filter is high wherever it detects vertical lines, vertical borders, or vertical edges. The same holds for the derivative in the Y dimension, along columns: the output is high when all the top pixels are equal to 255 (white) and all the bottom pixels are equal to, for example, 0, while the pixels in the middle row are completely arbitrary, because their weights are zero. Here are the outputs for the original image using the so-called Prewitt filter, a 3x3 filter described by the kernels in the top part of this slide. You see that the vertical-derivative part of the Prewitt filter detects all the horizontal lines, and the horizontal-derivative part detects all the vertical ones. The output is really nice.
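As a minimal sketch of what was just described, here is Prewitt filtering applied to a tiny image with a single vertical edge. The kernel values follow the standard 3x3 Prewitt definition; the toy image is my own illustration, not from the slide. I use `scipy.ndimage.correlate` because, as in the lecture, we slide the kernel without flipping it.

```python
import numpy as np
from scipy import ndimage

# 3x3 Prewitt kernels: horizontal derivative (responds to vertical edges)
# and its transpose, the vertical derivative (responds to horizontal edges).
prewitt_x = np.array([[-1, 0, 1],
                      [-1, 0, 1],
                      [-1, 0, 1]], dtype=float)
prewitt_y = prewitt_x.T

# A toy image with a vertical edge: left half black (0), right half white (255).
img = np.zeros((5, 6))
img[:, 3:] = 255.0

# Correlation (kernel applied without flipping), replicating border pixels.
gx = ndimage.correlate(img, prewitt_x, mode="nearest")
gy = ndimage.correlate(img, prewitt_y, mode="nearest")

# gx peaks at the edge columns (3 * 255 = 765); gy is zero everywhere,
# because the image has no horizontal edges.
```

The maximum response of 765 is reached exactly where the left column of the neighborhood is all zeros and the right column is all 255, as discussed above.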
There are other similar filters, and you see that all the edge filters share a nice property: the sum of the weights of each filter is equal to zero. Let me recall that the sum of weights in a smoothing filter is equal to one. So an edge filter is a high-pass filter, while low-pass filters are represented by Gaussian filters and the like. There are other equations that define the first derivatives along the X and Y axes. Here is another version, the Sobel filters, which emphasize the central part of the neighborhood. The values of the left and right pixels are much more important for the output of the first kernel here than the diagonal pixels of the neighborhood: you see that the weights corresponding to the left and right pixels are 2 and -2, while all the rest are 1 and -1. So the importance of the left and right neighbors is twice as high as the importance of the diagonal ones. It is also typical not to keep two separate outputs, one for the X derivative and one for the Y derivative, but to compute the so-called gradient magnitude and combine the outputs of these filters into one image. There are two typical ways to compute the gradient magnitude. The conventional one is the left equation here, where the magnitude is defined as the square root of the sum of squares of the X and Y derivatives; but that is a rather slow operation. In many cases it is approximated by the right equation, as the sum of the absolute values of the X and Y derivatives. Either way, you can take this gradient magnitude and normalize it so that it always lies between 0 and 255, using the normalization operation from point-wise image processing from the previous week. We obtain a really nice picture which highlights only the edges inside the image and which is more or less equal to zero on the constant parts of the input image.
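The Sobel-plus-magnitude pipeline above can be sketched like this. The function names and the normalization step are my own illustration; `approximate=True` switches from the exact square-root equation to the faster |gx| + |gy| approximation from the slide.

```python
import numpy as np
from scipy import ndimage

def sobel_magnitude(img, approximate=False):
    """Gradient magnitude from Sobel derivatives, normalized to [0, 255]."""
    gx = ndimage.sobel(img.astype(float), axis=1)  # d/dx (along columns)
    gy = ndimage.sobel(img.astype(float), axis=0)  # d/dy (along rows)
    if approximate:
        mag = np.abs(gx) + np.abs(gy)    # fast approximation: |gx| + |gy|
    else:
        mag = np.sqrt(gx**2 + gy**2)     # exact: sqrt(gx^2 + gy^2)
    # Normalize to [0, 255], as in the point-wise normalization from last week.
    if mag.max() > 0:
        mag = mag * (255.0 / mag.max())
    return mag
```

On a constant image the result is all zeros, and on an image with edges the strongest edge maps to 255, exactly the behavior described above.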
There are other filters, like the Scharr filters, in which the importance of the middle part is even higher, but you obtain similar results. In practice it is typical to use Sobel filters; Scharr filters are also used, but not in as many scenarios as Sobel. This was about first derivatives. But we can also compute second derivatives and combine them together. We have the original image and we can apply, for example, the Laplacian operator to it; in this particular case, for a 3x3 neighborhood, you see the values of the Laplacian kernel. Again, it is a high-pass filter: you see that the sum of its weights is equal to zero. But in this case we compute the difference between the central input pixel and its neighbors at the top, bottom, left, and right. It is an edge detector, and a good one, and here you do not need to compute, for example, the gradient magnitude or anything like that. It is also typical to compute not the Laplacian itself but the Laplacian of Gaussian: you take a Gaussian filter and compute the Laplacian of that Gaussian, which gives the so-called LoG filter. Here is the equation for this filter. You see that, speaking about the output, the quality of edge detection is much better compared to the plain Laplacian filter. You probably remember that we saw the Laplacian of Gaussian among our Fourier transform pairs — there was a really nice pair for it — so the output of this filter can also be computed using the convolution theorem if the filter size is large. It is also well known that this filter can be approximated as a difference of two Gaussian filters, which in many cases is much faster. Now let me show you the most widely used edge detector.
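Here is a small sketch of the two ideas from this part: the 3x3 Laplacian kernel (its weights sum to zero, like every edge filter), and the difference-of-Gaussians (DoG) approximation of the LoG. The factor k = 1.6 is a commonly used ratio between the two Gaussian scales; the function name is my own.

```python
import numpy as np
from scipy import ndimage

# 3x3 Laplacian kernel: central pixel versus its 4 neighbors; weights sum to 0.
laplacian = np.array([[0,  1, 0],
                      [1, -4, 1],
                      [0,  1, 0]], dtype=float)

def dog_edges(img, sigma=1.0, k=1.6):
    """Approximate the Laplacian of Gaussian (LoG) by a difference of two
    Gaussians (DoG): G(sigma) - G(k * sigma). k ~ 1.6 is a common choice."""
    img = img.astype(float)
    return (ndimage.gaussian_filter(img, sigma)
            - ndimage.gaussian_filter(img, k * sigma))
```

Computing two Gaussian blurs and subtracting them is cheap, especially since Gaussian filtering is separable, which is why the DoG approximation is often preferred over evaluating the LoG kernel directly.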
It is called the Canny algorithm, and it combines image operations discussed this week and last week. You have an input image. First it is smoothed using some Gaussian, and then we compute the gradient magnitude using, for example, the Sobel edge detector. Here you see the result of this gradient magnitude step, but it is not perfect: the borders are more or less visible, but they are rather wide, so it is not ideal. In the next step of the Canny algorithm, so-called non-maximum suppression is used. What does this operation do? It reduces the width of each edge detected by the Sobel filter. How is it implemented in this particular case? First of all, you compute the gradient orientation, and then for each point of a detected edge you analyze its neighbors along the orientation of the gradient. So you move from one edge point to another and keep only those points whose gradient magnitude is higher than the gradient magnitude of their neighbors. In this case, you see that the output is thinner: the width of each line is lower, and in general the edges are much more clearly visible. This implementation of non-maximum suppression will be used in many, many computer vision algorithms, particularly in object detectors based on neural networks, for example; we will show you some details in future courses. But again, this result is still not ideal, so let me show how the Canny algorithm processes it further. There are two kinds of thresholds used to create a binary image from the output of non-maximum suppression. For the first threshold, we are pretty sure that a point whose gradient magnitude is higher than this threshold is a part of an edge. We just say that, okay, it is a part of an edge, and this pixel will be part of the output image.
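The non-maximum suppression step can be sketched as follows. This is a simplified illustration, not a production implementation: the gradient orientation is quantized to four directions, and each pixel is kept only if its magnitude is at least as large as its two neighbors along the gradient direction.

```python
import numpy as np

def non_max_suppression(mag, gx, gy):
    """Keep a pixel only if its gradient magnitude is a local maximum
    along the gradient direction (quantized to 4 directions).
    A simplified sketch; border pixels are left suppressed."""
    h, w = mag.shape
    out = np.zeros_like(mag)
    angle = np.rad2deg(np.arctan2(gy, gx)) % 180  # orientation in [0, 180)
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            a = angle[i, j]
            if a < 22.5 or a >= 157.5:   # horizontal gradient: compare left/right
                n1, n2 = mag[i, j - 1], mag[i, j + 1]
            elif a < 67.5:               # ~45 degrees: compare anti-diagonal
                n1, n2 = mag[i + 1, j - 1], mag[i - 1, j + 1]
            elif a < 112.5:              # vertical gradient: compare up/down
                n1, n2 = mag[i - 1, j], mag[i + 1, j]
            else:                        # ~135 degrees: compare main diagonal
                n1, n2 = mag[i - 1, j - 1], mag[i + 1, j + 1]
            if mag[i, j] >= n1 and mag[i, j] >= n2:
                out[i, j] = mag[i, j]
    return out
```

On a magnitude image that looks like a wide ridge, only the crest of the ridge survives, which is exactly the thinning effect described above.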
But we have another threshold, below which we say that we are pretty sure that a pixel with such a low gradient magnitude is a part of some object, not an edge. And there is the band between these two thresholds: some parts of the output image for which we do not know whether they are part of an edge or not. How do we deal with them? We use so-called hysteresis thresholding, which you probably remember from our previous week, where we used hysteresis thresholding for image binarization. It is the same operation here. How do we use it? We analyze the pixels in the neighborhood of the pixels for which we are pretty sure that they are edges, and if these neighboring pixels fall between the two thresholds, we say that they are probably part of the edge too; this way we extend the edges using hysteresis thresholding. Here is the output. This Canny operator is widely used in many computer vision algorithms, and I will show you how to use it, for example, in the detection of primitive objects like lines, circles, and so on and so forth.
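The hysteresis step can be sketched with connected-component labeling: strong pixels (above the high threshold) are definitely edges, and weak pixels (between the two thresholds) are kept only if they are connected to a strong pixel. The function name and the use of `scipy.ndimage.label` are my own choices for illustration; real implementations such as OpenCV's `cv2.Canny` trace the edges differently, but the logic is the same.

```python
import numpy as np
from scipy import ndimage

def hysteresis(mag, low, high):
    """Hysteresis thresholding: pixels above `high` are edges; pixels
    between `low` and `high` are kept only if their connected component
    contains at least one strong pixel. A sketch using labeling
    (4-connectivity by default; pass `structure` for 8-connectivity)."""
    strong = mag > high
    candidate = mag > low                  # strong and weak pixels together
    labels, n = ndimage.label(candidate)
    keep = np.zeros(n + 1, dtype=bool)
    keep[np.unique(labels[strong])] = True # components touching a strong pixel
    keep[0] = False                        # background label is never an edge
    return keep[labels]
```

For example, a weak response that is attached to a strong edge survives, while an isolated weak response of the same magnitude is discarded; this is exactly how the edges get extended through the uncertain band.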