Okay, what we'll do here is show you two things; we have two objectives for this video. One is to show you how to do a hand-drawn analysis schedule for rate monotonic theory, and the other is to show you by example how the rate monotonic policy works. In subsequent video segments we'll show that it's also optimal, but right now we just want to show how it works. So remember that when we have services, in this case three, S1, S2, and S3, we need to be given the T, C, and D for each one of them; we can assume D equals T. So we're given the periods, and we're given the computational requirements, the worst-case execution times C1, C2, C3. From those we can compute each frequency as one over its period. The fundamental frequency corresponds to the longest period, and the higher frequencies are multiples of the fundamental. The highest multiple of the fundamental gets the highest priority, okay? So that would be S1, followed by S2, followed by S3, which is the fundamental. Notice also that the higher frequencies are not whole-number multiples of the fundamental, therefore the set is not harmonic. We can compute the utility for each service, which is simply C over T, and we see that in total we use 73.33% of the CPU here, less than the rate monotonic least upper bound for three services (about 77.98%), and far less than 100% of the available CPU resource. Therefore we have 26.67% of the CPU as slack time. You can consider slack time to be margin relative to 100%, and you can see that we even have some margin relative to the least upper bound. So now let's show what happens when we apply the rate monotonic policy, which simply says the highest-frequency service gets the highest priority. Services are dispatched according to priority out of the ready queue, and they run until they complete or until they are preempted by a ready service of higher priority. So let's see how that works.
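The utilization math here can be checked with a few lines of code. This is a minimal sketch assuming hypothetical parameters consistent with the figures quoted in this segment (S1: C=1, T=2; S2: C=1, T=10; S3: C=2, T=15, with D = T), since the exact values aren't restated in the narration:

```python
# Hypothetical service parameters consistent with the 73.33% utility quoted above.
C = [1, 1, 2]    # worst-case execution times C1, C2, C3
T = [2, 10, 15]  # periods T1, T2, T3 (D = T assumed)

# Total CPU utilization: sum of C over T for each service.
U = sum(c / t for c, t in zip(C, T))

# Rate monotonic least upper bound for n services: n * (2^(1/n) - 1).
n = len(T)
lub = n * (2 ** (1 / n) - 1)

print(f"U       = {U * 100:.2f}%")        # 73.33%
print(f"LUB({n}) = {lub * 100:.2f}%")     # about 77.98%
print(f"slack   = {(1 - U) * 100:.2f}%")  # 26.67% margin relative to 100%
```

Because U is below the least upper bound, the set passes the sufficient RM utilization test without needing exact analysis.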
First of all, I should point out some features of the timing diagram. The red line is what we call the critical instant. From a worst-case perspective, we assume that all three services might be ready and wanting the CPU at the beginning of time, at the critical instant; that would be a worst case. If that's the case, we know S1 will win, because it's the highest priority, and we let it run to completion. Well, it only needs one unit of time, so it's done in this first window, okay? Then it yields the CPU when it's done, and it doesn't want it again until time window three, so S2 can run. It only needs one unit of time, so it completes. S1 runs again, and doesn't want the CPU after it completes there. S2 is done and doesn't get re-requested until time window eleven, so S3 can run. But S3 can't run to completion, because we need to run S1; according to rate monotonic policy, S1 should preempt S3, and it does. S3 then finally completes, split into two parts, during time window six. And then S1 runs again. In fact, you see when we're drawing these, we can immediately draw S1 in. That's the easy part: every S1 period, we just draw in what it uses right away, at the left-hand side of its request period. That leaves holes of unused CPU that we fill in with S2, right? Then whatever S2 doesn't use is available for S3, and whatever S3 doesn't use becomes slack time, or margin, right? Nobody needs the CPU at that point; the scheduler would spin, so-called idle. It would just poll the ready queue for work, and there wouldn't be any work, so it would continue to spin and look for work, not finding any. Interestingly enough, over the longest period, which is what Lehoczky, Sha, and Ding say is necessary to test the feasibility, the exact feasibility, of a rate monotonic schedule, we see that we have three unused time windows out of 15.
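Playing scheduler and dispatcher by hand like this can also be sketched as a small simulation. This is a minimal sketch, not the course's own tooling, and it assumes hypothetical parameters consistent with the figures quoted in the lecture (C = [1, 1, 2], T = [2, 10, 15], services already sorted into RM priority order):

```python
def rm_schedule(C, T, horizon):
    """Simulate preemptive fixed-priority scheduling, one CPU, one unit per window.

    Rate monotonic priority is assumed: services are listed shortest period
    first, so a lower index means higher priority.
    """
    remaining = [0] * len(C)  # unfinished computation for each service
    timeline = []
    for t in range(horizon):
        # Each service re-requests the CPU at the start of each of its periods
        # (critical instant: everyone is released together at t = 0).
        for i in range(len(C)):
            if t % T[i] == 0:
                remaining[i] = C[i]
        # Dispatch the highest-priority ready service for this window;
        # a higher-priority release preempts a lower-priority service.
        for i in range(len(C)):
            if remaining[i] > 0:
                remaining[i] -= 1
                timeline.append(f"S{i + 1}")
                break
        else:
            timeline.append("idle")  # nobody ready: slack time
    return timeline

# Hypothetical parameters consistent with the lecture's numbers.
timeline = rm_schedule([1, 1, 2], [2, 10, 15], 15)
print(timeline)
print("idle windows over the longest period:", timeline.count("idle"))
```

Tracing the first windows reproduces the hand-drawn diagram: S1, then S2, then S1 again, then S3 preempted by S1 and finishing in window six, with three idle windows out of the first 15.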
That would indicate that 20% is unused, so why is the utility not 80%? Well, the other thing that's true about exact analysis is that we actually need to look over the LCM of the periods to fully explain the schedule; not to determine whether it's feasible, but to fully describe it. So if we scroll to the right and continue to fill this in as we did before, out to 30, which is the LCM of the three periods 2, 10, and 15, then we see that we just fill in S1 as before, right at the beginning of every request period it has. We fill in either S2 or S3 whenever S1 isn't using the CPU. In fact, here, the only reason we don't have S2 scheduled is because it got done earlier, back here. So we let S3 use the CPU, it gets preempted, we let S3 finish, we schedule S1, then nobody wants the CPU here, so we have more slack; we schedule S1. S2 is re-requested here at 21, but it has to wait for S1 to complete, and then it executes. So we have one, two, three, four, five more units of slack, giving eight time windows of slack over 30, which explains the 26.67% slack overall. So we've seen a couple of things here: we've seen how to draw the schedule by hand. We'll see how to automate that in the future, so that if we had a lot of services, like 20 or 30, we won't have to do this by hand; it would be untenable for human analysis with a large number of services. And we've also seen how the rate monotonic policy works, right? What it is: highest priority is given to the highest-frequency service in the set of services that share a CPU in the AMP architecture, and how it works, right? We've essentially done that by pretending that we're the scheduler, the dispatcher. We see the preemptions, we see interference, namely interference by a higher-priority service with a lower-priority service, and we also see run to completion.
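The slack count over the hyperperiod can be double-checked with simple arithmetic: each service runs LCM/T times per hyperperiod, using C units each time. Again assuming the hypothetical parameters consistent with the lecture's figures (C = [1, 1, 2], T = [2, 10, 15]):

```python
from math import lcm

C = [1, 1, 2]    # hypothetical WCETs, consistent with the quoted figures
T = [2, 10, 15]  # hypothetical periods

H = lcm(*T)  # hyperperiod: LCM of the periods = 30

# Busy windows per hyperperiod: each service Si releases H / Ti times,
# consuming Ci units per release: 15*1 + 3*1 + 2*2 = 22.
busy = sum(c * (H // t) for c, t in zip(C, T))

slack = H - busy  # 8 slack windows out of 30
print(H, busy, slack, f"{slack / H:.2%}")  # 30 22 8 26.67%
```

Over the full hyperperiod the idle fraction, 8/30, agrees exactly with the 26.67% slack from the utility calculation, which is why the first 15 windows alone (with 20% idle) don't tell the whole story.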
So, that concludes this basic method of analysis and explanation of the rate monotonic policy by example.