Dear participants, welcome to our lecture devoted to matrices. These are the keywords we will cover during the lecture, starting from the vector space of matrices and the Frobenius norm, and going up to special classes of matrices such as symmetric or orthogonal matrices.

First we discuss the vector space of matrices. A matrix is a table with n rows and m columns whose entries are numbers, and the dimension of a matrix is the number of rows times the number of columns. You can see examples of matrices. The first example is a matrix of dimension 3 by 2; its entries are integers, so we call it an integer matrix. The second example is a matrix of dimension 2 by 4; its entries are real numbers, so this is a real matrix.

An important notion for matrices is sparsity. A sparse matrix is a matrix in which the majority of entries are equal to zero. Here is an example of a sparse matrix of dimension 3 by 6; the number of nonzero entries of this matrix is 6.

Matrices are used in image processing in a natural way. You see two examples of matrices associated with digits on a grid of dimension 7 by 3. The first matrix represents the intensity values of the pixels in this grid, and the same holds for the second matrix. This is another representation of digits: we are already familiar with vector representations of images, and this is a slightly different, matrix representation. For a 3D representation of images you need a 3D grid of pixel intensity values, and this can be associated with a tensor of dimension, in general, n times m times k.

Mathematical notation: R is the set of real numbers, and R to the power n times m is the space of real matrices of dimension n times m. For two matrices from this space, two operations are defined: multiplication by a scalar and addition of two matrices.
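The notions above can be sketched in a few lines of NumPy. The specific entries below are illustrative assumptions, not the exact matrices from the slides; only the dimensions (3 by 2, 2 by 4, and a sparse 3 by 6 with 6 nonzeros) follow the lecture.

```python
import numpy as np

# A 3x2 integer matrix and a 2x4 real matrix (entries are illustrative).
A = np.array([[1, 2],
              [3, 4],
              [5, 6]])                   # dimension 3 x 2, integer entries
B = np.array([[0.5, 1.2, -0.3, 2.0],
              [1.1, 0.0,  0.7, -1.4]])  # dimension 2 x 4, real entries

print(A.shape)  # (3, 2): number of rows, number of columns
print(B.shape)  # (2, 4)

# A sparse 3x6 matrix: the majority of entries are zero
# (6 nonzero entries out of 18, as in the lecture's example).
S = np.array([[1, 0, 0, 2, 0, 0],
              [0, 0, 3, 0, 0, 4],
              [0, 5, 0, 0, 6, 0]])
print(np.count_nonzero(S))  # 6

# The two vector-space operations: scalar multiplication and addition.
C = 2 * A                     # multiply every entry by the scalar 2
D = A + np.ones((3, 2), int)  # entrywise addition of two 3x2 matrices
```

Both operations are entrywise, so the result always has the same dimension as the operands.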
In general, one can define a linear combination of matrices; this definition is given on the slide, with an example to show how it works. You see a linear combination of two matrices of dimension 3 by 2, and we get a matrix of the same dimension. These two operations allow us to consider the space of matrices as a vector space.

First, in a vector space we need the zero element, and the zero element in the space of matrices is the matrix with all entries equal to zero. For such a matrix, A plus zero is equal to A. Another important notion in a vector space is the inverse element. The matrix minus A is defined entrywise, by taking each entry with a minus sign, and the property of this matrix minus A is that minus A plus A is equal to the zero matrix.

We can also find the dimension of this space: the dimension of the space of matrices of dimension n by m is equal to n multiplied by m. To prove this, you can consider the two following problems. Problem 1: check all axioms of a vector space for the space of matrices. Problem 2: find a basis of this space as a vector space, and prove the formula for the dimension of the space of matrices. A simple hint, which is useful in general, is the following: any matrix can be considered as a vector, obtained by the row-raster representation of the matrix.

In any vector space we can try to define a norm. A popular norm in the space of matrices is the Frobenius norm, defined by the given formula. It is named after the German mathematician Georg Frobenius, known for the famous theorem about stochastic matrices. I suggest you prove, as a problem, that the Frobenius norm satisfies all norm axioms. The same hint can be useful here: replace the matrix by its row-raster vector representation, and the Frobenius norm then becomes the Euclidean norm of that vector. Next is an example of calculating the Frobenius norm of two digits on the 7 by 3 grid.
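A short sketch of these vector-space properties and of the row-raster hint (the coefficients and entries are my own illustrative choices, not the slide's numbers):

```python
import numpy as np

# Linear combination of two 3x2 matrices: the result has the same dimension.
A = np.array([[1., 0.], [2., 1.], [0., 3.]])
B = np.array([[0., 1.], [1., 1.], [2., 0.]])
L = 2 * A + 3 * B                 # still a 3 x 2 matrix

# The zero element and the additive inverse of the space of matrices:
Z = np.zeros((3, 2))
assert np.array_equal(A + Z, A)   # A + 0 = A
assert np.array_equal(-A + A, Z)  # -A + A = 0

# The hint: row-raster (flatten in row-major order) turns a matrix into
# a vector of length n*m, and the Frobenius norm of the matrix equals
# the Euclidean norm of that vector.
v = A.flatten()                   # row-raster vector, length 3*2 = 6
fro = np.linalg.norm(A)           # Frobenius norm (NumPy's default for 2-D)
euc = np.linalg.norm(v)           # Euclidean norm of the vector
assert np.isclose(fro, euc)
```

The last three lines are exactly why the hint works: both norms are the square root of the sum of squares of the same n times m numbers.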
You can see that the Frobenius norm of the matrix associated with the digit 8 is the square root of 17, and the Frobenius norm of the matrix associated with the digit 4 is the square root of 12. You also see the vectors associated with these two matrices, and the Euclidean norms of these vectors are the same as the Frobenius norms of the matrices.
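This calculation can be checked directly. The 7 by 3 pixel patterns below are my own assumed renderings of the digits, not necessarily the slide's; any binary pattern with 17 (respectively 12) on-pixels gives the same norms.

```python
import numpy as np

# Assumed 7x3 binary pixel patterns for the digits 8 and 4.
digit8 = np.array([[1, 1, 1],
                   [1, 0, 1],
                   [1, 0, 1],
                   [1, 1, 1],
                   [1, 0, 1],
                   [1, 0, 1],
                   [1, 1, 1]])   # 17 on-pixels
digit4 = np.array([[1, 0, 1],
                   [1, 0, 1],
                   [1, 0, 1],
                   [1, 1, 1],
                   [0, 0, 1],
                   [0, 0, 1],
                   [0, 0, 1]])   # 12 on-pixels

print(np.linalg.norm(digit8))    # sqrt(17), about 4.123
print(np.linalg.norm(digit4))    # sqrt(12), about 3.464

# The row-raster vectors have the same Euclidean norms:
print(np.linalg.norm(digit8.flatten()))  # same value as above
```

For a 0/1 matrix the Frobenius norm is just the square root of the number of nonzero pixels, which is why the counts 17 and 12 appear under the square roots.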