Hello. What we want to talk about now is why you might want to use EDF, what advantages it has, and when to use it. EDF was originally described as deadline-driven scheduling by Liu and Layland in their original paper, which we read in Course 1 of this series (if you took it) and are rereading in Course 2 for more in-depth coverage of the theory and methods of analysis.

Let's look at this in detail. I've got an example here with three services. You'll notice that the periods for these services are not harmonic, yet we have 100 percent utilization. This is very often a scenario where you would expect rate-monotonic fixed priority to fail. We're well above the least upper bound (LUB), and we know that while rate monotonic can handle 100 percent utilization, it can only do so when the services are harmonic, or when we've engineered a very specific slack stealer to use up whatever is left over, as we saw in previous examples for rate monotonic. Here we've got a 100 percent, non-harmonic case, and this is going to defeat fixed priority. It's not going to work; the good news is the LUB says it won't work, so it's not really a surprise, and if we perform exact analysis, we see that it doesn't work either.

Let's go through that real quick; you remember how to do this? We simply schedule all the high-priority, high-frequency requests for S1 right off the top, and then we start filling in S2 with whatever is left over. S2 needs two windows of time, so it takes those here in its first major period, and S3, luckily, has some time to execute; it needs three units of time, but it only gets two done there. Because of the fixed priorities, S2, in its next major period, is going to go ahead and execute twice, using both of those unused windows of time. That pushes the last unit that S3 needs beyond its deadline.
I put question marks here because the question is: was it wise to have scheduled S2 there twice in a row? I think as humans we would all say no, because we had a more urgent deadline with S3. The key advantage of EDF is that it uses time to deadline to encode what we call urgency, which we all understand intuitively: if a deadline is coming up soon, we give it the highest priority, right? EDF does this, as we know, by computing the time to deadline every time the ready queue changes and adjusting priorities accordingly. In our analysis, we have to fill out these time-to-deadline urgency metrics in every window of time. If we do that, we see the schedule looks roughly the same until we get to this key point here, where we really need to make a different decision than the fixed priorities would cause us to make. Instead of executing S2, we see that our urgency is much higher for S3 than it is for S2, because for S2 we have more time out here to the right of this gold line, which shows the major period for S3. Because of this urgency metric, EDF is going to go ahead and execute a third instance of S3 and complete S3 here, before its deadline, which is a good thing. Then we get back to work on S2, and that's fine, because we've got more time to complete S2. We're also going to talk about Least Laxity First, which also encodes urgency, but it's more complicated.

So what are the advantages of EDF? First, it encodes this idea of urgency, and that has been shown to be an optimal strategy. Liu and Layland provided a proof, and there have been subsequent verifications of that proof, that dynamic priorities can schedule any scenario where you're not using more than 100 percent of your CPU resource.
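To make this concrete, here's a minimal sketch of the kind of exact analysis just described. The task set is hypothetical, my own numbers rather than the ones on the slides: S1 = (2, 4), S2 = (2, 5), S3 = (1, 10), written as (WCET, period) with deadline equal to period. Utilization is exactly 100 percent and the periods are non-harmonic, so fixed-priority rate monotonic drops a deadline while EDF meets them all.

```python
from math import lcm

# Hypothetical task set (WCET, period), deadline = period.
# U = 2/4 + 2/5 + 1/10 = 1.0, and the periods are non-harmonic.
TASKS = [(2, 4), (2, 5), (1, 10)]  # S1, S2, S3

def simulate(tasks, policy):
    """Unit-slot simulation over one hyperperiod.
    policy='rm'  -> fixed priority, shortest period first
    policy='edf' -> dynamic priority, earliest absolute deadline first
    Returns (task_index, missed_deadline) for every job that misses."""
    hyperperiod = lcm(*(period for _, period in tasks))
    jobs = []    # each job: [task_index, remaining_work, absolute_deadline]
    misses = []
    for t in range(hyperperiod):
        # release a new job for every task whose period elapses at t
        for i, (wcet, period) in enumerate(tasks):
            if t % period == 0:
                jobs.append([i, wcet, t + period])
        # any unfinished job whose deadline has arrived is a miss
        for job in jobs[:]:
            if job[2] <= t:
                misses.append((job[0], job[2]))
                jobs.remove(job)
        if not jobs:
            continue
        if policy == 'rm':
            job = min(jobs, key=lambda j: (tasks[j[0]][1], j[0]))
        else:  # 'edf'
            job = min(jobs, key=lambda j: (j[2], j[0]))
        job[1] -= 1              # run the chosen job for one time slot
        if job[1] == 0:
            jobs.remove(job)
    # anything still unfinished at the end of the hyperperiod also missed
    misses.extend((job[0], job[2]) for job in jobs)
    return misses

print(simulate(TASKS, 'rm'))   # [(2, 10)] -- S3's first job misses at t=10
print(simulate(TASKS, 'edf'))  # []        -- every deadline met
```

Under RM, S1 and S2 consume every slot in [0, 10), so S3's first job starves exactly as in the timing diagram; under EDF, S3's rising urgency wins it the slot at t = 8 and everything fits.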
We can think of it as having flexibility, or adaptability, to change the schedule based on urgency. In our suitcase-packing analogy, it's as if we can now change the sizes of the boxes we're packing, so they always fit, and we do so with this metric of urgency. The second advantage of EDF is that, while it's not easy to compute these numbers online, it is simpler than the other options: simpler than LLF, or another alternative we'll talk about called ELLF. LLF is Least Laxity First, sometimes also called least slack. So EDF is simple, it's optimal, and it works in cases where fixed-priority rate monotonic does not. Those are two key reasons to use EDF.

It's often used for soft real-time systems where you want to get full utility out of your equipment. For example, if you're Netflix, Hulu, or someone else who provides streaming services, you want 100 percent use of your data centers: sign up as many customers as you can and stream to them. It's reasonable to run at zero margin because you can collect more revenue the closer you get to it. If there is a glitch, nobody's going to die; there will be a loss of quality of service, and worst case you might lose some customers, but most likely they'll tolerate an occasional glitch and you'll be able to optimize your business. It's pretty clear that EDF, dynamic priorities, deadline-driven scheduling, is useful for soft real-time; there's no real debate about that. Most of the debate is about whether it should be used for hard real-time, and that's something you have to make a judgment call on. This course will give you all of the theory and methods of analysis you need to make a good decision there. The concern is that you're essentially at zero margin, and the question is what happens in failure modes, which we'll talk about. Those are the main reasons not to use EDF.
But here we're talking about why you would use EDF, and it turns out there's another reason. We've got a second example here with a sporadic service. By sporadic, I mean that its period alternates between 12 and 20, averaging 16. This is something you could definitely encounter in the real world; in other words, how short or long the deadline, or the period, is depends on the exact request and its context. We'll assume deadline equals period here. On average, the loading is set so it's not greater than 100 percent, but we have these changing periods, so we call the service sporadic. Aperiodic would mean it has no period at all; sporadic means it has a period, but the period varies. This could be more extreme than just two values, a long and a short: there could be short, medium, and long, or any number of values that average out. To keep things simple, I went with one short and one long, and here we have the long first. We could look at the short first as well; I have that case for you too. Actually, let's go ahead and look at that one, because I think it's a little quicker: we will see the failure sooner in rate monotonic. In that scenario, we have the same problem with rate monotonic, because it's not adaptive: we're going to get a miss here. We just schedule these right off the top, and as always, the question is, was that a good decision? We wind up missing S3, and S3 actually needs four units of time, so it's a substantial miss. What we see is that EDF can handle this, no problem; it gets all four time windows done for S3, and it does so using urgency once again. We just compute the urgency; we've gone over this as a tutorial, but you fill in the time to deadline from the left edge to the right for each of these windows.
These values basically count down until the period for that service is satisfied. What we see is urgency increasing for S3 as we go across, and urgency increasing for S2 as well, until their needs are met. S1 is fairly urgent because it has a closer deadline than S3 here, so it does execute. But over here, we execute S3 with the highest urgency. Up here in RM, with fixed priority, S3 only executed because there happened to be time available. What happened in RM is that, by fixed priority, as soon as S2 became available we had to execute it. But with EDF, the urgency continued to go up for S3, so we executed S3 here instead, which turns out to be a good thing because its deadline is coming up. I often describe this as what humans, or students, do with deadlines: as a deadline approaches, we work on it harder. We all know that's a good heuristic. It turns out LLF is slightly more sophisticated, but it's not a bad heuristic to say, "the deadline is coming up, so I'm going to give priority to whatever is due soonest." We do that in our daily lives. So we execute S3 instead of S2, and we defer execution of S2. In fact, we do that again here, even when S1 is available: we defer S1 and S2, two services that were fixed at higher priority than S3 under rate monotonic. The good news is we get S3 done before its deadline, one unit before its deadline, in fact. Then we can go ahead and catch up on our S1s, which now have the highest urgency because their deadlines come up before S2's, and we do just fine. This was for the shorter period of S3 occurring first. Now, this example assumes we know which deadline we're working with, the 12 or the 20. If we didn't know that, it would be a somewhat different scenario. The assumption with EDF is that we can compute, rather than merely estimate, the time to deadline.
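The countdown just described can be sketched the same way as before, but with the jobs listed explicitly, since a sporadic service's releases aren't a simple periodic formula. All parameters here are hypothetical stand-ins for the slide values: S1 = (2, 5) and S2 = (3, 10) periodic, and S3 with WCET 4 and inter-arrival times alternating 12 and 20, short first, with each job's deadline at the next arrival.

```python
def run(jobs, dynamic):
    """Unit-slot scheduler over an explicit job list.
    Each job is [release, remaining_work, absolute_deadline, rm_key],
    where rm_key is the task period (RM: shorter period = higher priority).
    dynamic=False -> fixed-priority RM; dynamic=True -> EDF.
    Returns the absolute deadlines that were missed."""
    horizon = max(deadline for _, _, deadline, _ in jobs)
    live = [list(j) for j in jobs]
    missed = []
    for t in range(horizon):
        # record jobs that have run out of time
        missed += [j[2] for j in live if j[2] == t and j[1] > 0]
        ready = [j for j in live if j[0] <= t and j[1] > 0 and j[2] > t]
        if ready:
            key = (lambda j: j[2]) if dynamic else (lambda j: j[3])
            min(ready, key=key)[1] -= 1   # run highest-priority job one slot
    missed += [j[2] for j in live if j[1] > 0 and j[2] >= horizon]
    return missed

# Hypothetical workload: S1 (C=2, T=5) and S2 (C=3, T=10) periodic;
# S3 sporadic with C=4, released at t=0 (short interval, deadline 12)
# and t=12 (long interval, deadline 32). RM ranks S3 by its average period.
jobs = ([[r, 2, r + 5, 5] for r in range(0, 30, 5)] +      # S1
        [[r, 3, r + 10, 10] for r in range(0, 30, 10)] +   # S2
        [[0, 4, 12, 16], [12, 4, 32, 16]])                 # S3

print(run(jobs, dynamic=False))  # [12] -- RM misses S3's short-deadline job
print(run(jobs, dynamic=True))   # []   -- EDF adapts and meets everything
```

Under fixed priorities, S1 and S2 preempt S3 even when S3's deadline at t = 12 is the most pressing one; EDF instead runs S3 at t = 10 because its absolute deadline is earliest, which is exactly the urgency-driven decision in the timing diagram.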
That can be known any time there's a change to the ready queue; any time there's a request, we would know the time to deadline. This could be a challenge, but assuming we can do it, this is a clear advantage. I should point out that SCHED_DEADLINE has been added to Linux recently, so you can actually try this out now. It's so new that I haven't had time to spend with it yet myself, but undoubtedly I'll be adding updated video to this Coursera course as I gain more experience with it, and I would encourage you to try it out. It is a new policy that you could use instead of SCHED [inaudible]. You could try out this schedule with SCHED_DEADLINE and see if it can, in fact, adapt. That's a clear advantage.

When the short period for S3 occurs first, we get a miss in S3 right away. In the long-first example, we don't get the miss right away, but we will get it whenever the short period occurs. Again, this is really advantageous for soft real-time systems, for anyone who doesn't need the margin required for mission-critical systems, because we can adapt to sporadic service requests, we can adapt to non-harmonic 100 percent workloads, and it's optimal. It's also simpler than LLF, which we're going to look at next. LLF has the same basic characteristics and is, in fact, a more sophisticated, intelligent heuristic than simple time-to-deadline urgency. But one of the key advantages of EDF is precisely that its metric is just time to deadline. If you're going to have to compute something dynamically, which is harder than a fixed, one-time assessment of what the priority should be, then the simpler the heuristic, the easier that computation is going to be in an online scheduler, right? One way to say it is that SCHED_DEADLINE in Linux implements EDF because it's practical to implement; LLF would be much more challenging. In summary, EDF is simpler than LLF, and it's optimal.
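To see how the two urgency metrics differ, here's a toy comparison with two made-up jobs (the names and numbers are mine, not from the lecture). EDF ranks by time to deadline alone; LLF subtracts the remaining work, so a job that is due later but has almost no slack can outrank one that is due sooner.

```python
now = 0
# Two hypothetical ready jobs: absolute deadline and remaining compute time.
jobs = {
    'A': {'deadline': 10, 'remaining': 1},   # due soon, almost finished
    'B': {'deadline': 12, 'remaining': 11},  # due later, but barely any slack
}

def time_to_deadline(job):
    """EDF urgency: how soon the deadline arrives."""
    return job['deadline'] - now

def laxity(job):
    """LLF urgency: slack left before the job can no longer finish in time."""
    return job['deadline'] - now - job['remaining']

edf_pick = min(jobs, key=lambda name: time_to_deadline(jobs[name]))
llf_pick = min(jobs, key=lambda name: laxity(jobs[name]))
print(edf_pick, llf_pick)  # A B -- the two policies disagree here
```

Note also why EDF is cheaper online: time to deadline changes only with the clock and with ready-queue changes, whereas laxity also depends on remaining execution time, which shrinks as a job runs and so must be recomputed continuously.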
Next, we'll discuss why you might not want to use EDF. Take a look at these examples and come to your own conclusions about when to use EDF. Our recommendation is that EDF is great for soft real-time; be careful with it for hard real-time, for reasons we're going to discuss next in our segment on why not to use EDF. Thank you very much.