One of the most beautiful things about this whole up-tree algorithm is that the running time becomes fantastic. Remember, as we union two sets together, once we find the root node we only need to update a single pointer. And we can find the root node quickly because every time we run the find operation, we are compressing the path to be smaller, and smaller, and smaller. We're nearly getting to a constant running time. I really, really wish I could tell you that disjoint sets run in constant time, but that would be a little bit of a lie. Instead, the running time of a disjoint set is one of the best running times we ever see in computer science: something called the iterated log.

Let's see what it means to be an iterated log. The iterated log function is denoted lg*(n), read as "log star of n". The iterated log of n is zero if n is less than or equal to one, and one plus the iterated log of the log of n for every value greater than one. What this means is that the iterated log counts how many times you can take the log of a number before it drops to one.

So let's compute the iterated log, base two, of two to the 65,536th power, an absolutely gigantic number. The log of 2^65536 is 65,536, so we recurse on the iterated log of 65,536. The log of 65,536 is 16, so that becomes the iterated log of 16. The iterated log of 16 becomes the iterated log of 4, which becomes the iterated log of 2, which becomes the iterated log of 1, and the iterated log of 1 is zero, so the recursion stops. What this means is we have this fantastic function that only requires us to go one, two, three, four, five times through the log even when we start with an absolutely ginormous number. So the growth of this function is proportional to the number of times you can take the log of a given number. 2^65536 is an absolutely gigantic number.
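The chain of logs above can be sketched directly from the definition. This is a minimal sketch assuming base-2 logarithms; the function name `iterated_log` is my own, not from the lecture.

```python
import math

def iterated_log(n):
    """lg*(n): 0 if n <= 1, else 1 + lg*(lg n)."""
    if n <= 1:
        return 0
    return 1 + iterated_log(math.log2(n))

# The chain from the lecture: 2^65536 -> 65536 -> 16 -> 4 -> 2 -> 1,
# which is five applications of the log in total.
print(iterated_log(2 ** 65536))  # -> 5
```

Note that even for a number with nearly 20,000 decimal digits, the recursion bottoms out after just five steps, which is why this function is treated as essentially constant.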
The iterated log of that is just five. This is so, so, so close to constant time. We can't quite say it's O(1), because it does grow with the size of the input, but that growth is extremely small; so small that it's nearly constant.

To denote the actual running time, we acknowledge that some operations take longer than others. What we know is that the total number of operations after a series of m find and union operations is proportional to m times the iterated log of n. So m operations take O(m · lg*(n)) time. Because this factor is so small, because m times the iterated log of n is so close to m, we're going to consider the amortized average running time of our algorithm, for all practical purposes, to be O(1) amortized time.

So when we see any of these operations used inside a for loop, even though we know they are not truly constant time, we're going to use a constant running time in our analysis of the algorithm, because that constant running time is so, so close to reality. The iterated log is so small that what we have is an algorithm that runs in effectively constant time. We know a few of our first operations take longer, so that's why we say it's amortized constant time. We know that's a slight lie. But as we build algorithms that have more and more complex data structures around them, we can just assume this is a constant-time algorithm.

This is going to help us really understand the power of disjoint sets as we use them to build really amazing algorithms in the future. We'll start using them on graphs, which is one of the last topics we're going to talk about in this class. I look forward to introducing you to graphs and doing some amazing things with them. I'll see you then.