[MUSIC] So finally we are at a point where the maximum cycle mean has appeared in the formulas. That's very nice. We only need one more step to prove that the limit, for n to infinity, of n over the worst-case response time for n tokens is bigger than the minimum of one over the maximum cycle mean and the bound on the throughput of the input. And that means that at this point we need to start thinking about that T, about the bound on the throughput of the input. Let's make that our first focus of attention.

So I propose to just assume that we have an input, and that that input has a throughput bound that is comparable to the throughput bound we've been trying to prove for the output all along. Notice that this is an input bound: the limit, for n to infinity, of n over the worst-case time at which the n-th token comes in is bigger than T input. Let's assume that we have this. Why not just assume an ordinary throughput bound? Because, hey, a throughput basically just meant the number of tokens that have come out at any point of the execution, divided by the amount of time that has passed. Why use a bound like this instead? Well, basically because it is a bit of a stronger assumption than having an ordinary throughput bound, and with the weaker assumption I simply cannot do the proof. So we start from this one.

Okay, what does that mean? Well, as a definition, it means that for every epsilon bigger than zero, there exists an m such that for all l bigger than m, if we divide l by the worst-case arrival time of the l-th token at the input, then we get a value bigger than T input minus epsilon, right? This is just filling in the definition of this throughput bound on the input. Now if we rewrite that a little bit, we can turn the fractions around, and of course the bigger-than sign turns into a smaller-than sign, and we get that the worst-case arrival time of the l-th input token is smaller than l divided by T input minus epsilon.

And if we now combine this with our earlier formula, the one that says that p(n) is smaller than something that depends on the input, then we get a much more complicated expression saying that for every epsilon there is an m such that for all l bigger than m, p(l) is smaller than the maximum of, well, this expression: the input bound again, now for l minus k (remember, k is a value at most l plus I; before it was n, but of course we are now looking at p(l)), plus k times the maximum cycle mean, plus the constant L.

Okay, so we can simplify that a little bit. And now we can see that there are actually only two cases that are relevant. We are taking the maximum over all possible values of k up to l plus I here, right? But there is a linear trade-off between the two parts of the expression. If I take k to be a higher value, then the maximum-cycle-mean part becomes bigger and the input part becomes smaller; and if I take k to be a lower value, then the influence of the input becomes bigger and the influence of the MCM becomes smaller. So it is easy to see that we actually have only two cases. One in which we take k to be zero, and then it just says l divided by T input minus epsilon. And the other case is where we take k equal to l plus I, and then we get l plus I times the maximum cycle mean. And in both cases, of course, we still have the plus L to go.

So then we get this: for every epsilon bigger than zero, there exists an m such that for all l larger than m, p(l) is smaller than the maximum of l over T input minus epsilon. I actually forgot a plus L here on the slide; that may haunt me a little bit later on, but we'll see what happens.
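To keep the symbols straight, here is a rough sketch of the formulas described so far, written out. I use a(l) for the worst-case arrival time of the l-th input token, p(l) for the worst-case production time of the l-th output token, \mu for the maximum cycle mean, and I and L for the constants in the earlier bound; these names, and the exact shape of that earlier bound, are assumed labels rather than a literal copy of the slides. The input throughput bound and its unfolded definition read

\[
\lim_{n\to\infty} \frac{n}{a(n)} \;\ge\; T_{\mathrm{in}}
\qquad\text{i.e.}\qquad
\forall \varepsilon > 0\;\exists m\;\forall l > m:\;
\frac{l}{a(l)} > T_{\mathrm{in}} - \varepsilon,
\;\text{ so }\;
a(l) < \frac{l}{T_{\mathrm{in}} - \varepsilon}.
\]

Substituting this into the earlier bound on p(l), and using that the expression inside the maximum is affine in k so that the maximum is attained at one of the endpoints k = 0 or k = l + I, gives, for all sufficiently large l,

\[
p(l) \;\le\; \max_{0 \le k \le l+I}\bigl( a(l-k) + k\,\mu \bigr) + L
\;<\;
\max\!\Bigl( \frac{l}{T_{\mathrm{in}} - \varepsilon},\; (l+I)\,\mu \Bigr) + L .
\]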
So it is the maximum of that term, and the other term is l plus I times the maximum cycle mean, plus the large L again. Now we can divide left and right by l, and then we get rid of that l. So we get this, right? We've divided by l here, and we divide by l there, and then here the term becomes plus L divided by small l. I'm going to get back to that term that I forgot, sorry about that, but first this one.

If we look at the left side here, we're dividing by l, and if we take l sufficiently large, then we can find that this term is smaller than this term. I derived that with some magic called Wolfram Alpha; it always helps if you have to do these nifty calculations. And if you start from that, well, you can start from the left side here: this one is smaller than that one, right? And if it's smaller than this, then with some additional calculations it's smaller than that, and smaller than this, and it's also smaller than this block. So then we can proceed and find this one. Well, here we still have the plus L divided by small l, but on this side it has been cleaned up.

Now, I made a mistake in the slides and I cannot really reproduce this video at a later point, so what I'm now going to do is a little bit of a proof by intimidation. I'm going to say: if you take l big enough and you have this L divided by l, then this is approximately zero, right? So that means that in a limit calculation, and it is actually allowed here although the details are a bit iffy, you may say that this term actually disappears. And when we've made that term disappear, then it's quite clear that if we turn the fractions around, and we also turn this bigger-than sign around, and the maximum becomes the minimum, then we've found that l divided by p(l) is larger than the minimum of T input and one over the MCM, minus epsilon. And that is exactly what we were looking for. So now, finally, we've proven that the limit, for n going to infinity, of n over p(n) is larger than the famous bound. And now we have our theorem. Come on, let's have another cup. [MUSIC]
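For completeness, a sketch of how those last steps would look written out, in the same assumed notation as above; the step where the L/l and I\mu/l terms are argued away is exactly the part treated informally in the lecture. Dividing the previous bound left and right by l gives

\[
\frac{p(l)}{l} \;<\;
\max\!\Bigl( \frac{1}{T_{\mathrm{in}} - \varepsilon} + \frac{L}{l},\;
\Bigl(1 + \frac{I}{l}\Bigr)\mu + \frac{L}{l} \Bigr),
\]

and once those vanishing terms have been dealt with, turning the fractions around flips the inequality and turns the maximum into a minimum:

\[
\frac{l}{p(l)} \;>\;
\min\!\Bigl( T_{\mathrm{in}} - \varepsilon,\; \frac{1}{\mu} \Bigr)
\;\ge\;
\min\!\Bigl( T_{\mathrm{in}},\; \frac{1}{\mu} \Bigr) - \varepsilon .
\]

Since this holds for every \varepsilon > 0 and all sufficiently large l, we arrive at the theorem:

\[
\lim_{n\to\infty} \frac{n}{p(n)} \;\ge\; \min\!\Bigl( T_{\mathrm{in}},\; \frac{1}{\mu} \Bigr).
\]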