In this section, we'll learn how to quickly compute the characteristic polynomial of a given matrix. Quickly means in polylogarithmic time, using circuits or a parallel computer with a polynomial number of processors. We'll not consider a single-threaded model, the standard RAM model; we'll consider a parallel RAM (PRAM) model or Boolean circuits. We'll assume that we're given a matrix whose entries are elements of some field: maybe these are residues modulo some prime, maybe these are rationals. In any case, we can encode them in binary form. This is a matrix A of order n. The characteristic polynomial of the matrix, which we'll denote by χ_A(x), is the determinant of x times the identity matrix minus A, that is, χ_A(x) = det(xI − A).

If we expand this determinant into a polynomial, we see that the largest power of x is n. We obtain it from the product along the main diagonal of the matrix: out of each of the differences x − a_ii we take x, and thus we get x^n with coefficient one. The coefficient of x^(n−1) arises as follows: we again take the product along the main diagonal, but now out of all but one of the differences we take x, and out of the remaining one we take −a_ii. Altogether, the coefficient of x^(n−1) is minus the sum of all the diagonal elements of A, which is called the trace of A. For now we do not really need that; we'll just write this coefficient as −s_1, noting that s_1 is the trace of A. Let's denote the further coefficients similarly: +s_2 x^(n−2), −s_3 x^(n−3), and so on, down to the free term (−1)^n s_n, so that

χ_A(x) = x^n − s_1 x^(n−1) + s_2 x^(n−2) − s_3 x^(n−3) + … + (−1)^n s_n.

The free term is the value of the polynomial at zero: if we plug x = 0 into det(xI − A), the matrix under the determinant is just −A.
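As a quick sanity check of these definitions (not the fast parallel algorithm, just a brute-force reference), the following sketch expands det(xI − A) by the Leibniz formula over polynomial entries; the sample matrices and helper names are illustrative, not from the lecture.

```python
from itertools import permutations

def poly_mul(p, q):
    # Multiply two polynomials given as coefficient lists, lowest degree first.
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def poly_add(p, q):
    # Add two coefficient lists of possibly different lengths.
    if len(p) < len(q):
        p, q = q, p
    return [a + (q[i] if i < len(q) else 0) for i, a in enumerate(p)]

def perm_sign(perm):
    # Sign of a permutation via its inversion count.
    inv = sum(perm[i] > perm[j]
              for i in range(len(perm)) for j in range(i + 1, len(perm)))
    return -1 if inv % 2 else 1

def char_poly(A):
    # Coefficients of det(xI - A), lowest degree first.  The Leibniz
    # formula takes n! steps, so this is only a check for tiny n.
    n = len(A)
    # Entry (i, j) of xI - A as a polynomial: x - a_ii on the diagonal,
    # the constant -a_ij off it.
    entry = [[[-A[i][j], 1] if i == j else [-A[i][j]]
              for j in range(n)] for i in range(n)]
    total = [0] * (n + 1)
    for perm in permutations(range(n)):
        term = [perm_sign(perm)]
        for i in range(n):
            term = poly_mul(term, entry[i][perm[i]])
        total = poly_add(total, term)
    return total

# det(xI - A) for A = [[1, 2], [3, 4]] is x^2 - 5x - 2.
coeffs = char_poly([[1, 2], [3, 4]])
```

Here `coeffs` comes out as [-2, -5, 1] (lowest degree first): the coefficient of x^(n−1) gives s_1 = 5, the trace of A, and the free term gives s_n = det A = −2, matching the expansion above.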
The determinant of −A is (−1)^n times the determinant of A, so the free term (−1)^n s_n equals (−1)^n det A, and this coefficient s_n is actually the determinant of A. In many applications of matrices, say to problems in graph theory, computing matrix determinants, inverse matrices, and other things like that is very important. We'll learn how to compute all the coefficients of this characteristic polynomial quickly; in particular, we'll learn how to compute the determinant of the matrix itself quickly as well. The algorithm works over an arbitrary field, be it the field of rationals or any finite field, and we'll now proceed with it.

It will be convenient for us to deal not with the characteristic polynomial itself, but with a different polynomial, x^n χ_A(1/x), that is, x^n times the determinant of (1/x)I − A. When we bring the factor x^n under the determinant, the matrix under the determinant gets multiplied by x, so this is the determinant of I − xA. We'll deal with this polynomial; expanding it according to the form of the characteristic polynomial above, we see that it's nothing but

det(I − xA) = 1 − s_1 x + s_2 x^2 − … + (−1)^n s_n x^n.

This is the polynomial we'll actually compute; its coefficients are just the same as the coefficients of the characteristic polynomial of A, only in reverse order. We'll now denote the submatrices of A standing in the bottom-right corner by A_1 (that's just the one-by-one matrix consisting of the entry a_nn), A_2, and so on, where A_n is the matrix A itself. We are interested in computing the determinant det(I − xA), which has just the same coefficients as the characteristic polynomial of A. We'll denote the matrix I − xA by B, and just like we have introduced the corner submatrices of A, we'll introduce the corner submatrices of B: these are the symbolic matrices B_1, B_2, up to B_n = B.
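The coefficient reversal can be seen concretely with the same kind of brute-force Leibniz expansion; the sketch below (illustrative and exponential-time, for checking only) computes det(I − xA) directly:

```python
from itertools import permutations

def poly_mul(p, q):
    # Multiply coefficient lists, lowest degree first.
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def poly_add(p, q):
    # Add two coefficient lists of possibly different lengths.
    if len(p) < len(q):
        p, q = q, p
    return [a + (q[i] if i < len(q) else 0) for i, a in enumerate(p)]

def perm_sign(perm):
    # Sign of a permutation via its inversion count.
    inv = sum(perm[i] > perm[j]
              for i in range(len(perm)) for j in range(i + 1, len(perm)))
    return -1 if inv % 2 else 1

def det_i_minus_xa(A):
    # Coefficients of det(I - xA), lowest degree first.  Read backwards,
    # they are exactly the coefficients of det(xI - A).
    n = len(A)
    # Entry (i, j) of I - xA: 1 - a_ii * x on the diagonal, -a_ij * x off it.
    entry = [[[1, -A[i][j]] if i == j else [0, -A[i][j]]
              for j in range(n)] for i in range(n)]
    total = [0] * (n + 1)
    for perm in permutations(range(n)):
        term = [perm_sign(perm)]
        for i in range(n):
            term = poly_mul(term, entry[i][perm[i]])
        total = poly_add(total, term)
    return total

# For A = [[1, 2], [3, 4]]: det(I - xA) = 1 - 5x - 2x^2, the reversal of
# det(xI - A) = x^2 - 5x - 2.
rev = det_i_minus_xa([[1, 2], [3, 4]])
```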
Now let's recall Cramer's rule: for an invertible square matrix C, the element of the inverse at position (i, j) equals (−1)^(i+j) times the ratio det(C̄_ji)/det(C), where C̄_ji is the matrix obtained from C by striking out the j-th row and the i-th column (for the position (1,1) that we'll need, this is simply striking out the first row and the first column). This is the determinant of an (n−1)-by-(n−1) matrix, and each element of the inverse of C can be computed in this way.

In our notation, for every m, the matrix obtained from B_m by striking out the first row and the first column is just B_{m−1}. Cramer's rule then gives us that the element of the matrix inverse to B_m at position (1,1) is the ratio of determinants: (B_m^{−1})_{1,1} = det(B_{m−1})/det(B_m).

Now, that might seem a bit weird, because we've probably studied Cramer's rule for numeric matrices, while all these matrices B_{m−1}, B_m are symbolic: B is defined as I − xA, so the entries of B are expressions in a variable x. But that's fine if we just treat the entries of B, and of all the other matrices we'll have further on, as elements of the field of rational functions over F. If A is a matrix of order n with elements in some field F, then the elements of A, the elements of B, and the determinants of these matrices can all be considered elements of the field of rational functions over F. What is that field? The field of rational functions, denoted F(x), is just the set of all formal ratios of the form p(x)/q(x), where p and q are polynomials in x and the polynomial q in the denominator is not identically zero. This is very close to how we construct the rational numbers: any rational number can be defined as a ratio of two integers.
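This instance of Cramer's rule is easy to sanity-check numerically: instead of manipulating rational functions, evaluate everything at a sample rational point x and compare exact fractions. In the sketch below, the matrix A and the point x = 1/5 are illustrative choices (picked so that every B_m is invertible), and B_m is taken as the bottom-right m-by-m corner of B = I − xA, so that striking the first row and column of B_m leaves B_{m−1}:

```python
from fractions import Fraction

def det(M):
    # Determinant by cofactor expansion along the first row (tiny matrices only).
    if len(M) == 1:
        return M[0][0]
    total = Fraction(0)
    for j in range(len(M)):
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        total += (-1) ** j * M[0][j] * det(minor)
    return total

def inv_entry_11(M):
    # (M^{-1})_{1,1}: the first component of the solution of M y = e_1,
    # found by Gauss-Jordan elimination with exact fractions.
    n = len(M)
    aug = [list(row) + [Fraction(int(i == 0))] for i, row in enumerate(M)]
    for col in range(n):
        piv = next(r for r in range(col, n) if aug[r][col] != 0)
        aug[col], aug[piv] = aug[piv], aug[col]
        for r in range(n):
            if r != col:
                f = aug[r][col] / aug[col][col]
                aug[r] = [a - f * b for a, b in zip(aug[r], aug[col])]
    return aug[0][n] / aug[0][0]

A = [[1, 2, 0], [3, 4, 1], [0, 1, 2]]  # illustrative sample matrix
x = Fraction(1, 5)                     # sample evaluation point

def B(m):
    # Bottom-right m-by-m corner of B = I - xA, evaluated at the sample x.
    n = len(A)
    return [[Fraction(int(i == j)) - x * A[n - m + i][n - m + j]
             for j in range(m)] for i in range(m)]

# Cramer's rule at position (1,1): (B_m^{-1})_{1,1} = det(B_{m-1}) / det(B_m).
for m in (2, 3):
    assert inv_entry_11(B(m)) == det(B(m - 1)) / det(B(m))
```

Working over Fraction at a fixed x mirrors working in F(x): both are fields, and Gauss-Jordan elimination only uses field operations.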
But when we define rational numbers, we treat some of the ratios as equivalent: if one ratio gives the same proportion as another, then these are actually the same rational number. We have the same thing in the definition of rational functions. Two rational functions p(x)/q(x) and p̂(x)/q̂(x) are treated as indistinguishable if the proportion is the same, that is, if the product of polynomials p·q̂ equals the product p̂·q. We can easily define the standard arithmetic operations in this field of rational functions, just as we define the arithmetic operations on rational numbers. Now, if det(B_{m−1}) is an element of F(x) and det(B_m) is an element of F(x), then their ratio is also an element of the same field, so Cramer's rule makes sense even for symbolic matrices.

Using this formula, we obtain the following. Multiply the ratios for m from n down to 1: the element at position (1,1) of the inverse of B_n, times the element at position (1,1) of the inverse of B_{n−1}, and so on, down to the element at position (1,1) of the inverse of B_1. This product of ratios is telescoping: it equals det(B_{n−1})/det(B_n) times det(B_{n−2})/det(B_{n−1}), and so on, where for m = 1 we read det(B_0) as 1, the determinant of the empty matrix. Many things cancel out here, and in the end we get just

(B_1^{−1})_{1,1} · (B_2^{−1})_{1,1} · … · (B_n^{−1})_{1,1} = 1/det(B_n),

and B_n is equal to B. From this formula we see that the determinant of B, the polynomial whose coefficients we are searching for, can be computed symbolically as follows: taking the reciprocal of both sides of this equation, on one side we get det(B), and on the other side the reciprocal of this product of (1,1) entries. We'll now use this formula to compute the determinant of B eventually.
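The telescoping identity can likewise be checked at a sample point with exact fractions; the matrix A and the point x = 1/7 below are again illustrative choices, and the determinant and inverse-entry helpers are the same kind of brute-force checks as before:

```python
from fractions import Fraction

def det(M):
    # Determinant by cofactor expansion along the first row (tiny matrices only).
    if len(M) == 1:
        return M[0][0]
    total = Fraction(0)
    for j in range(len(M)):
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        total += (-1) ** j * M[0][j] * det(minor)
    return total

def inv_entry_11(M):
    # (M^{-1})_{1,1} via Gauss-Jordan elimination with exact fractions.
    n = len(M)
    aug = [list(row) + [Fraction(int(i == 0))] for i, row in enumerate(M)]
    for col in range(n):
        piv = next(r for r in range(col, n) if aug[r][col] != 0)
        aug[col], aug[piv] = aug[piv], aug[col]
        for r in range(n):
            if r != col:
                f = aug[r][col] / aug[col][col]
                aug[r] = [a - f * b for a, b in zip(aug[r], aug[col])]
    return aug[0][n] / aug[0][0]

A = [[1, 2, 0], [3, 4, 1], [0, 1, 2]]  # illustrative sample matrix
x = Fraction(1, 7)                     # sample point where every B_m is invertible
n = len(A)

def B(m):
    # Bottom-right m-by-m corner of B = I - xA, evaluated at x.
    return [[Fraction(int(i == j)) - x * A[n - m + i][n - m + j]
             for j in range(m)] for i in range(m)]

# The product of the (1,1) entries of the inverses telescopes to 1/det(B).
prod = Fraction(1)
for m in range(1, n + 1):
    prod *= inv_entry_11(B(m))
assert prod * det(B(n)) == 1
```

The real algorithm performs this computation symbolically over F(x) rather than at one point, but the cancellation pattern is exactly the one verified here.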