Recently we have learned Strassen's algorithm for matrix multiplication in subcubic time. So how do we apply this ability to multiply matrices quickly to other problems of computer science? This week we will learn how it helps in some graph algorithms. Usually these algorithms are related to shortest paths, and we will learn two of them. Recall that Strassen's algorithm allows us to multiply matrices in time O(n^(log₂ 7)). After Strassen published his result, there was a chain of results, and it still continues. Currently the best known time for matrix multiplication is O(n^2.38), roughly. So during the last decade the improvement was only in the third digit after the decimal point, and currently this is the best known result that we have. Here we can find a parallel to multiplying integers. If we are talking about integer multiplication and bit complexity, then the first algorithm that we learned was Karatsuba-Ofman, and this algorithm had O(n^(log₂ 3)) complexity. Then we studied Toom's algorithm, later simplified by Cook, which has complexity O(n^(1+ε)), where ε is a constant, but it can be made arbitrarily small. So essentially we have almost linear time complexity for multiplication of integers. After Toom, the best known algorithm for several decades was the algorithm by Schönhage and Strassen. This algorithm achieved complexity O(n log n log log n). Then there was a chain of improvements, and two scientists, Harvey and van der Hoeven, in 2019 were able to prove that integer multiplication can be carried out in time O(n log n) for n-bit integers. So this is the ultimate bit complexity of integer multiplication. Now for matrices, as we see, the progress is much slower.
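To make the O(n^(log₂ 3)) bound concrete, here is a minimal Python sketch of the Karatsuba trick described above: split each number into a high and a low half, and replace the four products of the schoolbook method with three recursive products. (The function name and base-case threshold are my own choices for illustration, not from the lecture.)

```python
def karatsuba(x: int, y: int) -> int:
    """Multiply nonnegative integers with three recursive products per level."""
    # Base case: small numbers are multiplied directly.
    if x < 10 or y < 10:
        return x * y
    # Split both numbers at half the bit length of the longer one.
    m = max(x.bit_length(), y.bit_length()) // 2
    x_hi, x_lo = x >> m, x & ((1 << m) - 1)
    y_hi, y_lo = y >> m, y & ((1 << m) - 1)
    # Three recursive products instead of four:
    a = karatsuba(x_hi, y_hi)                              # high * high
    b = karatsuba(x_lo, y_lo)                              # low * low
    c = karatsuba(x_hi + x_lo, y_hi + y_lo) - a - b        # cross terms
    # Recombine: x*y = a*2^(2m) + c*2^m + b
    return (a << (2 * m)) + (c << m) + b
```

Since each level does 3 recursive calls on inputs of half the size, the running time solves to O(n^(log₂ 3)) ≈ O(n^1.585) bit operations.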
So we are still unable to get the exponent as close to 2 as possible, the way Toom and Cook did for integers long before the latest algorithm in 2019 was devised. And there currently is no ultimate knowledge of how close to 2 we can get here. Clearly we cannot get below n squared, because just to read the input matrices we have to read all the elements, and the number of elements is n² for n-by-n matrices. So currently there is a nice notation here, which is called the matrix multiplication exponent. The matrix multiplication exponent is a constant, which we still don't know, such that matrix multiplication has an algorithm running in time O(n^(ω + o(1))). This constant is denoted by ω. This has nothing to do with the primitive roots of unity that we used to devise the fast Fourier transform algorithm; it is simply the standard letter used for the matrix multiplication exponent. Most researchers believe that ω is actually 2, but currently, to the best of our knowledge, we can only state that ω is less than 2.38. And as we will learn, improving the matrix multiplication exponent will immediately improve some algorithms that are based on matrix multiplication in many areas of mathematics, including graph theory. So we will study applications of matrix multiplication for solving two problems of graph theory. Let's talk about these problems.