Welcome back. In this segment we're going to look at a much more complex rate monotonic fixed-priority scheduling example, and we'll walk through it. This one is completely filled out, so I'll just annotate what's going on and describe it. You can see the slack time indicated by the gray boxes, and each of the services is indicated by a blue, green, or orange box. Again, remember each one of these columns is a unit of time; the exact units are irrelevant, but we can just say it's milliseconds and that works out pretty well. The services are now abstract, so we aren't saying exactly what they are. They could be anything, such as a machine vision computation or a flight control system computation, so we're just dealing with theory at this point, where we're trying to answer the questions: does this schedule work? Is it feasible? So let's take a look at some of the parameters here and just walk through this, and I'll be working behind the schedule as I talk about it. What we see first of all is that we have three services, and you can see this gets somewhat complicated. The LCM here, I'll just circle it, is now 30, and at 30 this starts to become a somewhat complicated thing to diagram by hand, right? At some point you want to start using an automated tool like Cheddar, which implements worst-case analysis with algorithms like the scheduling point and completion tests, which we'll talk about in future segments. But this helps us to understand what's going on in those algorithms. And remember, we're playing the part of the scheduler, so we just have to pretend that we're the rate monotonic priority-preemptive scheduler, and that we're deciding which service comes out of the ready queue and gets a CPU core on a given AMP core.
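As a quick aside, the LCM (the hyperperiod) can be checked in a couple of lines. This is a minimal Python sketch, assuming the service periods given in this example (2, 10, and 15 ms):

```python
from math import lcm

# Periods of the three services from the example, in milliseconds.
periods = [2, 10, 15]

# The hyperperiod (LCM of all periods) is how far we must diagram
# the schedule before the whole pattern repeats.
hyperperiod = lcm(*periods)
print(hyperperiod)  # 30
```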
And if a service of higher priority arrives, or becomes available through a new request via something like an interrupt, then lower-priority services are going to be preempted and interfered with, right? So we're going to look over this LCM of 30 and see if this schedule is feasible. We see that the actual time required is 73.33% here, and it's below the LUB, the least upper bound. So the LUB would have said this was feasible, and it indicated that there's some margin, so the LUB would not have steered us wrong here either. So let's walk through this, and the easiest thing to do is start by looking at S1. With S1 we see, as expected, that as the highest priority service in the system, because it has the highest frequency of 0.5 and the shortest period of 2, it runs first from the critical instant, which is indicated by the red line. In this case we can just fill it in every period of two all the way across, starting in the first slot; like we showed before, that's always the easy thing to fill in, and we can fill it all the way out to the LCM. So we expect it to run at 1, 3, 5, 7, 9, 11, 13, 15, 17, 19, 21, 23, 25, 27, 29, and we're done with that; that's one down. Another interesting high-level observation to point out here is that these services are not harmonic. How do I know that? Well, if I look at the frequencies, they are not just simple multiples of the fundamental; I have the f0 multiples indicated up there. Therefore, we wouldn't expect to be able to get full utility out of a system like this; we would expect, by necessity, to have some slack time. So let's go ahead and look at S2, the next priority service in the system. We know that we can't use the first slot, because that's used by S1 at its higher priority, so we just fill S2 in at the next available point in time, which is right here.
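The utilization and LUB numbers above can be reproduced directly. Here's a minimal Python sketch of the LUB feasibility check, assuming the capacities and periods read off the diagram (S1: C=1, T=2; S2: C=1, T=10; S3: C=2, T=15; S1's capacity of 1 is implied by it finishing in one slot):

```python
# Rate monotonic least-upper-bound feasibility check for the example.
C = [1, 1, 2]   # capacities (ms of CPU needed per request)
T = [2, 10, 15] # periods (ms between requests)

U = sum(c / t for c, t in zip(C, T))  # total utilization demanded
m = len(C)
lub = m * (2 ** (1 / m) - 1)          # RM least upper bound for m services

print(f"U   = {U * 100:.2f}%")        # 73.33%
print(f"LUB = {lub * 100:.2f}%")      # 77.98%
print("feasible by LUB" if U <= lub else "LUB test inconclusive")
```

Since U is at or below the bound, the LUB test declares the service set feasible with some margin, matching the hand-drawn schedule.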
And if we look at what it needs, the capacity that S2 requires is just one, so it would be done, and it has a fairly lengthy period of 10, so it's done for some time. So we've taken care of S2; now let's move on to S3. S3 has a period of 15, and this is what gives us the LCM of 30, the LCM of 2, 10, and 15. We see that for S3, our first opportunity to get the CPU is at slot four, so we just fill that in here. But we're going to get preempted by S1 in the fifth time slot, and that's life, [LAUGH] right? So we are interfered with, and we don't get to keep running; we have to run S1, by the rate monotonic policy, at time slot five, so we just do that. Then it's done, which frees up slot six, and we just get back to completing S3 at that point in time. The good news is that's all S3 needed, so once it completes its capacity of two, it is done, and we see we go back to S1 again. At that point, neither S2 nor S3 needs the CPU, and S1 doesn't either, so we have our first slack period. Slack periods are when we either have margin just for safety, or we could actually run some other best-effort kinds of services during this time; we'll talk about what we can do with slack time. There are a lot of great things we can do with slack time, actually, so there's never a need to worry about having slack time; that just means you have more than you need. Then we of course continue to run S1, we have slack time again, then S1, and at this point you'll notice that we've now gone beyond time slot 10, so we have a new request for S2. Since S1 doesn't need the CPU at time slot 12, S2 actually gets to run just one unit of time after it's re-requested, and it would complete again. Then we'd run S1, we'd have slack time, then S1, and then we get down here to 15, where we now have a new request for S3, so here we go.
S3 is requested here again, and we can run it right away, but then we have interference to S3 by S1, and we've got to take care of that; we don't have a choice. So S3 gets preempted, S1 completes, and we can finish up S3 here; it only needs two units of capacity, and now it's done all the way out to 30, its second major period. Then S1 wants the processor again, so we give it to S1; then nobody wants it, so we've got slack time; then S1 wants it, and we give that to S1 by its priority. At the next point in time, we see that we have a third request, at 20, for S2. Well, it had to wait because S1 was higher priority and already had the CPU core, but now it's able to run for that third request it's making, finally, at 22. So we'll just note that there, then we run S1 again, and then we have nothing but slack for a while, out to the LCM. As a last observation, we can count up all of the slack time slots. So we could say here's one here, here's a second one over here, here's a third one here, here's a fourth one down here, here's a fifth one over here, sixth, seventh, and eighth. I feel like I'm behind on my schedule; that's a joke. [LAUGH] So we have eight unused time slots out of 30, which should be our slack time of 26.67%; I'll leave that for you to check. And we've used 22 out of 30, which should be our utilization of 73.33%. Okay, so we have of course the rate monotonic LUB as a math model for this, and we have a tool called Cheddar, which does worst-case analysis; it can use the scheduling point and completion tests, which we'll learn to implement and analyze ourselves. Remember that the exact solution using worst-case analysis is on the order of n cubed, where n is the number of services, so it's not too bad for three services, right? [LAUGH] We've just done this by hand, so clearly it's not a ridiculously complex algorithm for a small number of services.
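The whole hand-drawn walkthrough can also be replayed in code: at every 1 ms slot, hand the CPU to the highest-priority (shortest-period) service with work remaining. This is a minimal sketch, assuming the same parameters as the diagram (the slot indices here start at 0 rather than 1):

```python
# Play the rate monotonic preemptive scheduler over one hyperperiod.
services = [          # (name, capacity, period), sorted by RM priority:
    ("S1", 1, 2),     # shortest period = highest priority
    ("S2", 1, 10),
    ("S3", 2, 15),
]
hyperperiod = 30
remaining = {name: 0 for name, _, _ in services}
timeline = []

for t in range(hyperperiod):
    # New requests arrive at every multiple of a service's period.
    for name, c, period in services:
        if t % period == 0:
            remaining[name] = c
    # Dispatch the highest-priority ready service for this slot.
    for name, _, _ in services:
        if remaining[name] > 0:
            remaining[name] -= 1
            timeline.append(name)
            break
    else:
        timeline.append("slack")  # nobody wants the CPU

print(timeline.count("slack"))                # 8 unused slots
print(hyperperiod - timeline.count("slack"))  # 22 used slots
```

Counting the slack entries reproduces the eight unused slots (26.67% slack) and 22 used slots (73.33% utilization) we tallied by hand.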
But if it were 100 services, now we're talking about quite a bit of computation, and it would be almost impossible to draw, right? The LUB test, by contrast, is order n, because we simply sum up the C sub i's over the T sub i's and check to see if the total is less than or equal to the fixed bound of m times (2 to the 1 over m, minus 1), which for three services happens to be about 78%. So everything checks out. That was a more complex example; once again, I'm stuck behind my equations and my schedule, and we'll pick up with even more interesting schedules as we move forward in the segments coming up. Thank you very much.