Hello. Okay, so before we go further into methods of analysis for rate-monotonic theory and practice, what I'd like to do is first review why we can't just use Linux as it is. You might be asking yourself why we're using Linux for the course and doing kind of strange things with it: we're requiring you to emulate AMP (asymmetric multiprocessing) by using pthread CPU affinity for all real-time services, and I'm telling you that you really need to stick to the real-time subset of POSIX features, namely the 1003.1 real-time features. So, you might be asking yourself, why? What is it that really requires this? Well, one way to answer that is to show how fair schedulers, the kind that traditional multi-user, desktop, and server systems use by nature, without modification, just don't work for real-time. So, one way to do this is to look at fair scheduling, compare it to fixed-priority rate-monotonic scheduling, and show that it can't succeed where rate monotonic can. We're going to do that with a series of three examples here. There certainly are more, but I think these will help to convince you, and I would ask you, while you're doing analysis, to try to come up with your own as well, to convince yourself.

So, the first example we have here is fairly straightforward. We have three services, and they're actually harmonic services; we can see that by the fact that their frequencies are multiples of the fundamental frequency. And we have full utility. Actually, fair systems do well when they have a backlog of work; they can be used at full utility, and there's no real problem with that for a fair system, so you might argue that's an advantage of fair systems. We'll also compare fair systems to dynamic-priority systems as well. But let's start out simple with fixed-priority rate monotonic for this scenario and compare it to the fair schedule. And what we see here is that we've got a scheduling miss for the fair scheduler where rate monotonic succeeds. So, this is a case in point for rate monotonic.

So, why is that? Well, if we take a more detailed look at this, recall how we fill in the rate-monotonic schedule: we fill in the S1 requests for service and meet them completely, every request period, right off the top. Then we fill in the S2 requests for service with what remains, given what has been taken by S1, and then we go on and analyze S3, and so forth. You're probably getting good at this as you do more and more of these yourself; it just takes practice. Pretty simple, right? And that's a nice feature of rate monotonic: it's deterministic, it's simple, and for harmonic cases or slack-stealing designs it can run at 100% utility. We also have the ability to guarantee that we can meet deadlines for everything that passes the exact feasibility testing of rate-monotonic theory. And if there's any sort of overrun, say S3 overruns, the failures are going to be for S3 and everything at lower priority, and that's not going to impact anything above, so it has nice failure modes. Determinism, utility up to 100% for harmonic services or slack-stealing designs, as we've seen and will continue to see in further examples, and it works for this particular example, whereas round-robin fair scheduling does not.

Now let's take a pause here. Linux has something called the Completely Fair Scheduler, and before that it had something called the O(1) scheduler, so it actually has a scheduler that's more sophisticated than round-robin.
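To make the feasibility remark concrete, here is a minimal sketch in C of the Liu and Layland least-upper-bound utilization test. Note the hedges: the C and T values are illustrative assumptions on my part (harmonic and fully utilized, like the example, but not necessarily the figure's exact numbers), and this is the sufficient LUB test, not the exact feasibility test mentioned above.

```c
#include <math.h>
#include <stdio.h>

int main(void)
{
    /* Illustrative harmonic service set (assumed numbers, not the figure's):
     * C = computation time per period, T = period. U = 1/2 + 1/4 + 1/4 = 1.0 */
    const double C[] = {1.0, 1.0, 2.0};
    const double T[] = {2.0, 4.0, 8.0};
    const int n = 3;

    double U = 0.0;
    for (int i = 0; i < n; i++)
        U += C[i] / T[i];

    /* Liu & Layland least upper bound: n(2^(1/n) - 1), about 0.7798 for n = 3. */
    double lub = n * (pow(2.0, 1.0 / n) - 1.0);

    printf("U = %.3f, LUB = %.3f\n", U, lub);
    if (U <= lub)
        printf("Feasible by the sufficient (LUB) test.\n");
    else
        printf("LUB test inconclusive; a harmonic set like this one can\n"
               "still be schedulable up to U = 1.0 by the exact test.\n");
    return 0;
}
```

Compile with `gcc lub.c -lm`. For this harmonic set it prints U = 1.000 above the 0.780 bound, which is exactly the "harmonic services can run at full utility even though the bound is inconclusive" point made above.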
But round-robin is the simplest kind of policy you can think of that is a fair schedule, and all fair schedulers have some aspect of round-robin in them. They add more advanced features like priority decay to make sure someone doesn't hog the CPU forever, and they have priority levels where, essentially, you get more time slices than someone else because of your priority. So they have modifications to round-robin, but they're essentially round-robin, which means that we cycle through the services: we let S1 use the CPU for a while, then we go to S2, and then we go to S3, and then we come back around and keep going, and we give each one of these what we call a slice. They could be equal slices, a slice of CPU. It's kind of like slices of pie: everyone gets some pie if we just go around the room and give everyone a slice until we run out of pie, right? Then, as a modification, we could give some people that claim to be more important two slices [LAUGH] of pie instead of just one, and things like that. But generally, this is the form a fair schedule takes.

So, when we fill out the schedule here, anytime all three services want to run, we just give each a time window in order, a time slice, essentially a time quantum; we'll assume the time quantum is equal to one of our windows of time in this analysis. And then we cycle back. We go down through them, and then we cycle back to S1 again. It needs the CPU, so we give it to it. And it turns out that S2 met all of its needs, so it didn't need anything; we skip it, because it's not requesting the CPU, and we go to S3. And then we cycle back, and we come to S1 again, S2, S3. We've got to go to S3 here, right? Because we're essentially doing this round-robin. And here's where we cause a miss for S1: we actually dispatch S3, and we cause S1 to miss its deadline, okay?

And so, another argument I'd make, which is more basic, by analogy, is that fairness usually is not the best policy when there are mission-critical goals that must be achieved. For example, in a hospital, it might be fair if you gave every patient an equal amount of time, but certainly we would understand that patients with more critical conditions need attention more often, and perhaps need attention until something is resolved. That's a good analogy for rate monotonic, in that we are priority-preemptive, run-to-completion, and higher-frequency service requests get higher priority. We'll continue to talk about whether frequency is the best way to determine priority, and there certainly are alternatives for real-time systems, but we can see that it does do better than round-robin, one of the most well-known fair scheduling mechanisms. We could certainly do this comparison for CFS, or O(1), or other types of Unix schedulers, but we would have a similar result, because of the fundamental nature of the problem.

So, let's look at another, perhaps more challenging, or just slightly more complex, scenario, and let's go ahead and bring in some dynamic priorities as well. I'm not going to explain the dynamic priorities in full here, because we have segments that explain how to do dynamic-priority analysis, but deadline-driven scheduling was part of the Liu and Layland paper. And we see that all three of the real-time policies succeed and round-robin fails.
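If you want to replay this walkthrough yourself, here is a minimal discrete-time simulation sketch in C of both dispatchers. The service set is an assumption on my part (harmonic, 100% utilized, one-quantum windows, deadline equal to period, as in the discussion above), not necessarily the exact numbers from the slide. The RM run meets every deadline; the round-robin run dispatches S3 in the slot S1 needed and logs an S1 miss.

```c
#include <stdio.h>

#define NSERV 3
#define HYPER 8   /* hyperperiod = lcm of the periods below */

static const int C[NSERV] = {1, 1, 2};   /* computation per release (assumed) */
static const int T[NSERV] = {2, 4, 8};   /* period = deadline (assumed) */

static void simulate(int use_rm)
{
    int remaining[NSERV] = {0};          /* work left on the current job */
    int deadline[NSERV]  = {0};          /* absolute deadline of current job */
    int rr_next = 0;                     /* round-robin cycle pointer */

    printf("%s: ", use_rm ? "RM" : "RR");
    for (int t = 0; t < HYPER; t++) {
        for (int i = 0; i < NSERV; i++) {
            /* Miss check: work still pending when the deadline arrives. */
            if (remaining[i] > 0 && t == deadline[i])
                printf("[S%d MISS at t=%d] ", i + 1, t);
            /* Release a new job at the start of each period. */
            if (t % T[i] == 0) {
                remaining[i] = C[i];
                deadline[i] = t + T[i];
            }
        }

        /* Pick a service for this one-unit time quantum. */
        int pick = -1;
        if (use_rm) {
            /* Fixed priority: shortest period first (arrays sorted by T). */
            for (int i = 0; i < NSERV; i++)
                if (remaining[i] > 0) { pick = i; break; }
        } else {
            /* Round-robin: next ready service in the cycle, skipping idle ones. */
            for (int k = 0; k < NSERV; k++) {
                int i = (rr_next + k) % NSERV;
                if (remaining[i] > 0) { pick = i; rr_next = i + 1; break; }
            }
        }
        if (pick >= 0) { remaining[pick]--; printf("S%d ", pick + 1); }
        else           { printf("-- "); }
    }
    /* Catch any job still pending at the end of the hyperperiod. */
    for (int i = 0; i < NSERV; i++)
        if (remaining[i] > 0)
            printf("[S%d MISS at t=%d] ", i + 1, HYPER);
    printf("\n");
}

int main(void)
{
    simulate(1);   /* rate monotonic: meets all deadlines for this set */
    simulate(0);   /* round-robin: dispatches S3 and S1 misses */
    return 0;
}
```

For this set it prints `RM: S1 S2 S1 S3 S1 S2 S1 S3` and `RR: S1 S2 S3 S1 S2 S3 [S1 MISS at t=6] S1 --`, the same shape of failure described in the walkthrough.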
[LAUGH] So we're just making the point that the real-time policies are definitely better for real time than non-real-time fair policies, right? We're showing this by example, but I would appeal back to the Liu and Layland paper, and subsequent papers that we read in this class, for evidence that there are proofs that these are superior as well, in terms of policy, methods of analysis, and the underlying theory. But I think it's nice to see this by example. So we have the RM scheduler again, and we fill it in across the timeline. We always start with the highest priority first, so that's easy, and then we fill in with what's left over for S2, S3, and S4. And we've got something here that's, again, 100% utilized. What we see here that's kind of interesting, a little bit of foreshadowing of what we're going to dig into more next, which is dynamic priority, is that the dynamic-priority policies really result in the same schedule, [LAUGH] and yet they're harder to analyze. I'll leave it at that; I'm going to go ahead and hide them here for now, but I just wanted to show you that they are also real-time alternatives, and they certainly work even where the fair scheduling fails. So the real-time policies, in theory and in practice, work better than non-real-time ones; [LAUGH] that is the point we're trying to make here.

So round-robin, remember: we've got to just go S1 to S2 to S3 to S4, and then we've got to wrap around and come back. We've got to do that on some sort of interrupt-driven, preemptive basis, and make sure everyone gets a slice of pie. So I'll draw a little piece of pie here for [LAUGH] each service, right? And so they get this so-called time slice, and then we give each another one. If we like S2 more, we might give it two slices every time instead of one, or make it higher priority, so there are little wrinkles like that, but generally speaking, we're going to go around and make sure all of them get service, that none of them starve. [LAUGH] Starve is a good term, especially when we're serving slices of pie. So we just kind of waterfall down through them, regardless of their deadlines, and we get an immediate miss here in this scenario, because the more services we have, the more we're going to just chain down through them, and the more likely we are to cause a miss on one. So the more services we add, we're going to get our misses even sooner, [LAUGH] and that, I think, really confirms that fair methods aren't good for real time, right? The more services we have, the sooner we're going to get deadline misses, so misses become both more certain and earlier, okay?

So, let's look at just one more, in case you're not convinced yet, with round-robin. Round-robin, or fair schedules in general, do have some real advantages: services don't starve, and fairness circumvents a problem called unbounded priority inversion. There really can't be unbounded priority inversion with fair schedulers, for reasons that we'll discuss when we get to unbounded priority inversion. But here again we see, especially when we have large numbers of services like this, that we get a miss right away, right? In fact, I think this is the same example: we've got 10, 4, 2, and 1, and, nope, that's actually different, so 8, 4, 2, and 1, slightly different, but you get the point, I think: fair scheduling leads to misses, and the more services we have, the sooner we're going to miss, and with higher certainty.
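As a small preview of the dynamic-priority segments, here is what the dispatch decision itself looks like for EDF, the deadline-driven policy from the Liu and Layland paper. This is a hypothetical helper written against the `remaining[]` and `deadline[]` arrays from the simulation sketch above, not code from the lecture: instead of RM's fixed shortest-period ordering, EDF picks the ready job with the earliest absolute deadline.

```c
/* Sketch (assumed helper, same arrays as the simulator above):
 * earliest-deadline-first dispatch. Returns the index of the ready job
 * with the earliest absolute deadline, or -1 if nothing is ready. */
static int pick_edf(const int remaining[], const int deadline[], int n)
{
    int pick = -1;
    for (int i = 0; i < n; i++)
        if (remaining[i] > 0 &&
            (pick < 0 || deadline[i] < deadline[pick]))
            pick = i;
    return pick;
}
```

Dropping this in place of the RM branch of the simulator reproduces the observation above: for these harmonic, fully loaded sets the dynamic-priority policy lands on the same schedule, even though its analysis is harder.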
So round-robin can succeed, certainly, for systems with less loading and fewer services; it is possible for round-robin to succeed. So best effort can work, basically, for systems that don't have any deadlines, [LAUGH] for sure. Could it work for soft real time? Well, it would be risky, because it would be hard to quantify how often you would miss deadlines, and you might miss them right away as services are added. So fair schedulers can work for best-effort systems, and even for what are called best-effort interactive systems, things like games. But even for interactive systems, it's very likely that we would benefit at least from a soft real-time approach; perhaps we don't need rate monotonic. Well, what would that soft real-time approach be? Let's go back here, and if we unhide our dynamic priorities, that could be our answer: basically, EDF or LLF (least laxity first), and there's a small sketch of the LLF dispatch decision below. We'll leave the details for a future discussion. And on that note, thank you very much.
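For reference, here is the LLF counterpart to the EDF helper above, again a hypothetical sketch rather than lecture code: laxity is the time remaining until the deadline minus the work remaining, and LLF dispatches the ready job with the least laxity.

```c
/* Sketch (assumed helper, same arrays as the EDF example above):
 * least-laxity-first dispatch. Laxity = (deadline - now) - remaining work;
 * the ready job with the smallest laxity is the most urgent. */
static int pick_llf(const int remaining[], const int deadline[],
                    int n, int now)
{
    int pick = -1, best_laxity = 0;
    for (int i = 0; i < n; i++) {
        if (remaining[i] <= 0)
            continue;                       /* not ready: no pending work */
        int laxity = (deadline[i] - now) - remaining[i];
        if (pick < 0 || laxity < best_laxity) {
            pick = i;
            best_laxity = laxity;
        }
    }
    return pick;                            /* -1 if nothing is ready */
}
```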