As we saw previously, eigenvectors are those vectors which lie along the same span both before and after a linear transform is applied to a space, and eigenvalues are simply the amounts by which each of those vectors has been stretched in the process. In this video, we're going to look at three special cases to make sure the intuition we've built so far is robust, and then we're going to try and extend this concept into three dimensions.

The first example we're going to consider is that of a uniform scaling, which is where we scale by the same amount in each direction. As you will hopefully have spotted, not only are all three of the vectors I've highlighted eigenvectors, but in fact, for a uniform scaling, any vector would be an eigenvector.

In this second example, we're going to look at rotation. In the previous video, we applied a small rotation and found that it had no eigenvectors. However, there is one case of non-zero pure rotation which does have at least some eigenvectors, and that is rotation by 180 degrees. As you can see, the three eigenvectors are still lying on the same spans as before, just pointing in the opposite direction. This means that, once again, all vectors for this transform are eigenvectors, and they all have eigenvalues of minus one: although the eigenvectors haven't changed length, they are all now pointing in the opposite direction.

In this third case, we're going to look at a combination of a horizontal shear and a vertical scaling, which is slightly less obvious than the previous examples. Just like the pure shear case we saw previously, the green horizontal vector is an eigenvector, and its eigenvalue is still one. However, despite the fact that neither of the two vectors shown is an eigenvector, this transformation does have two eigenvectors. Here, I've added the second eigenvector to the image, and it shows us that although the concept is fairly straightforward, eigenvectors aren't always easy to spot.
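If you'd like to see how such a hard-to-spot eigenvector can be found numerically, here is a minimal sketch using NumPy. The particular matrix below is an illustrative assumption on my part, chosen to behave like the example described (a horizontal shear combined with a vertical scaling), not necessarily the exact matrix shown in the video.

```python
import numpy as np

# A hypothetical horizontal shear combined with a vertical scaling by 2.
# (Illustrative assumption; not necessarily the matrix from the video.)
A = np.array([[1.0, 1.0],
              [0.0, 2.0]])

# np.linalg.eig returns the eigenvalues and the unit eigenvectors
# (one eigenvector per column of `vecs`).
vals, vecs = np.linalg.eig(A)

print(vals)        # eigenvalues: 1 and 2
print(vecs[:, 0])  # the horizontal vector [1, 0], with eigenvalue 1
print(vecs[:, 1])  # the "hard to spot" direction, along [1, 1]

# Check the defining property for each pair: A v = lambda v
for lam, v in zip(vals, vecs.T):
    assert np.allclose(A @ v, lam * v)
```

Note that the horizontal eigenvector with eigenvalue one falls straight out of the computation, while the second one lies along the diagonal direction [1, 1], which is exactly the kind of direction that is hard to spot by eye.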
Let's now apply the inverse transform and watch our parallelogram go back to its original square, but this time with our eigenvector visible. Hopefully, you're at least convinced that it is indeed an eigenvector, as it stays on its own span.

This problem is even tougher in three or more dimensions, and many of the uses of eigen theory in machine learning frame the system as being composed of hundreds of dimensions or more. So, clearly, we're going to need a more robust mathematical description of this concept to allow us to proceed.

Before we do, let's take a quick look at one example in 3D. Clearly, scaling and shear are going to operate in much the same way in 3D as they do in 2D. However, rotation does take on a neat new meaning. As you can see from the image, although both the pink and green vectors have changed direction, the orange vector has not moved. This means that the orange vector is an eigenvector, but it also tells us, as a physical interpretation, that if we find the eigenvector of a 3D rotation, we've also found the axis of rotation.

In this video, we've covered a range of special cases, which I hope have prompted some questions in your mind about how we're going to write a formal definition of an eigen-problem. And that is exactly what we're going to discuss next time. See you then.
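The 3D rotation observation above can also be checked numerically: a proper rotation matrix always has an eigenvalue of one, and the corresponding eigenvector is the axis of rotation. Here is a minimal sketch, where the composite rotation and its angles are illustrative assumptions chosen so that the axis is not obvious by inspection.

```python
import numpy as np

def rot_x(t):
    """Rotation by angle t about the x-axis."""
    c, s = np.cos(t), np.sin(t)
    return np.array([[1, 0, 0],
                     [0, c, -s],
                     [0, s, c]])

def rot_z(t):
    """Rotation by angle t about the z-axis."""
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0],
                     [s, c, 0],
                     [0, 0, 1]])

# A composite 3D rotation whose axis is not obvious by inspection.
# (The specific angles are illustrative assumptions.)
R = rot_z(0.7) @ rot_x(0.3)

vals, vecs = np.linalg.eig(R)

# A proper rotation has one real eigenvalue equal to 1 (the other two
# are a complex-conjugate pair). Its eigenvector is the rotation axis.
i = np.argmin(np.abs(vals - 1.0))
axis = np.real(vecs[:, i])
axis /= np.linalg.norm(axis)

# The axis stays exactly where it was: R @ axis == axis
assert np.allclose(R @ axis, axis)
```

The eigenvector with eigenvalue one is precisely the direction left unmoved by the rotation, which is the physical interpretation described above.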