Real-time system design, hopefully final: no globally synchronized clock. Welcome back. There's a complicated diagram here, clearly. If we went straight to this, your thought would be that it looks way too complicated. But I think having the previous segments, and seeing all the flaws in their designs, will bring you to something like this.

So let's say we don't get a globally synchronized external clock. We just have to observe a physical process. Very likely we're flying over the Pacific Ocean; we can't rely 100 percent on GPS, or even GNSS in general, because we don't know if we'll always have it. We need some way to continue safe operation without relying on this global external clock. We have to fly on our own. Of course, people did this for a long time before GPS.

So we have our own onboard clock, as we discussed before: a stable oscillator. If we're worried about it failing, we can make it redundant, so we can say times two. We'll get into how to build redundant, fault-tolerant systems in future segments. For now, suffice it to say that anything that could be a single point of failure we can duplicate, and then have logic to determine which instance of the interval timer to trust. That's a big problem, but we'll defer it for now. We can duplicate things; we could duplicate the entire system if we wanted to. So hold that thought. We'll make sure there are no SPOFs in our system. We won't have to rely at all on some external global system like a satellite system. We can operate on our own; we can be autonomous.

So we have a sequencer, because we want everything here to have real-time, predictable response. We don't want anything best effort. Well, we will have one best-effort service; it can be best effort because it doesn't have to be real-time. That's the frame write-back for the observing log, because the log is looked at after the fact; it's not needed in real time in our scenario.

So we have a real-time sequencer. We normally want to sequence everything at the highest rate in the system. If you read the Carlow paper on the Shuttle Primary Avionics Software System, that's common: a high-frequency executive drives everything else at a sub-rate of that highest frequency. It's very deterministic, it's well proven, and it fits well with rate-monotonic theory and analysis. We can do that with Linux, with an RTOS, or with a cyclic executive.

So we have the sequencer; let's say it runs at 120 hertz. I picked a high number just to show that the exact value is somewhat arbitrary. Why wouldn't we pick an even higher number? Well, at some point we get eaten alive by the overhead of the interrupts coming into this thing and the sequencer running just to say, hey, it might be time to do something. So we're not going to get crazy with this. But it's okay to have it run at a high rate, because it doesn't do much. If you go ahead and trace it, you'll see it doesn't steal a lot of CPU time. Hundreds of hertz is reasonable; thousands of hertz, when you go below a millisecond per period, might be a little too much.

The sequencer posts a binary semaphore on a modulo-6 count, so every sixth time it runs, it posts the semaphore that allows frame acquisition to run at 20 hertz. So frame acquisition is a real-time service running at 20 hertz. It's driven by the sequencer, which is driven by a stable hardware oscillator, and it then acquires frames from the camera. By the way, with different cameras, we could have the camera raise the interrupt itself.
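To make the sequencer pattern concrete, here is a minimal sketch in C, assuming POSIX interval timers and semaphores on Linux. The rates and modulo counts match the lecture (120 hertz driving 20, 2, and 1 hertz services); the variable names, signal choice, and overall structure are my illustrative assumptions, not the course's reference code.

```c
/* Sketch of a 120 Hz sequencer releasing sub-rate services via semaphores.
 * Assumes POSIX (timer_create, sem_post). Compile: gcc seq.c -lrt -lpthread */
#include <semaphore.h>
#include <signal.h>
#include <string.h>
#include <time.h>
#include <unistd.h>

static sem_t semFrameAcq;   /* released at 120/6   = 20 Hz */
static sem_t semFrameDiff;  /* released at 120/60  =  2 Hz */
static sem_t semFrameSel;   /* released at 120/120 =  1 Hz */
static unsigned long seqCnt;

/* Fires at 120 Hz; all it does is count and post semaphores at sub-rates. */
static void sequencer(int sig)
{
    (void)sig;
    seqCnt++;
    if (seqCnt % 6   == 0) sem_post(&semFrameAcq);
    if (seqCnt % 60  == 0) sem_post(&semFrameDiff);
    if (seqCnt % 120 == 0) sem_post(&semFrameSel);
}

int main(void)
{
    struct sigaction sa;
    struct sigevent sev;
    struct itimerspec its;
    timer_t tid;

    sem_init(&semFrameAcq, 0, 0);
    sem_init(&semFrameDiff, 0, 0);
    sem_init(&semFrameSel, 0, 0);

    memset(&sa, 0, sizeof(sa));
    sa.sa_handler = sequencer;
    sigaction(SIGRTMIN, &sa, NULL);

    memset(&sev, 0, sizeof(sev));
    sev.sigev_notify = SIGEV_SIGNAL;
    sev.sigev_signo  = SIGRTMIN;
    timer_create(CLOCK_MONOTONIC, &sev, &tid);

    its.it_interval.tv_sec  = 0;
    its.it_interval.tv_nsec = 8333333;  /* ~1/120 second */
    its.it_value = its.it_interval;
    timer_settime(tid, 0, &its, NULL);

    for (;;) pause();  /* real services would sem_wait() in their own threads */
}
```

Notice that the handler itself does almost nothing, which is why running it at hundreds of hertz costs so little CPU time.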
The camera could essentially replace the oscillator. That's a judgment call, a system design judgment call; I've worked on and built systems that went either way. So it's just a question of whether the camera is going to have a sense of time, or whether you're going to have an external timer and just do readouts from the camera. Most cameras have some form of time knowledge because they have to control their own integration time, readout, and so on. The camera itself is a state machine that is continually resetting, integrating photons with something like a charge-coupled device or CMOS detector, and then transferring that information across an interface. In our case that interface is USB, because we just have a webcam. Most webcams don't directly generate interrupts for you; you have to go and read data out of them. They may be generating interrupts and buffering at a lower level of detail, but essentially you're polling the camera, driven by our sequencer. So the designs I'm presenting also work with low-cost hardware you can readily lay your hands on. Certainly I've worked on things like space telescopes, where we had much larger budgets and much more sophisticated cameras and so on. So I just thought I'd explain that. You can build this with the equipment you have: a $20 camera or less, a $50 microprocessor, etc.

We run frame acquisition at 20 hertz; we're oversampling for the one-hertz process we're observing. Remember, this is a one-hertz thing: it's not a stopwatch, it's just a ticker. The problem, as we've said before, is that we can catch it in the middle of ticking, which for most clocks lasts about 100 milliseconds out of a second. We can definitely catch it mid-tick, or we might look just a little too early or a little too late, and so we'll get a duplicate hand position, or we'll see a skipped hand position. That's the classic problem. If I'm trying to catch the third bus of the day, am I late or early? If it really matters which bus I catch, I have to have really well-synchronized time, or I have to oversample. That's just fundamental to the problem. But if we run at 20 hertz, looking at this clock 20 times a second, we're certainly going to see a stable position, and we may catch it wherever it is in the transition. At the start we get the first frame, which could be a frame in transition, possibly multiple frames in transition, then a stable position, then transition again. We may or may not see it stabilize again within the window. But somewhere in there, if we look at it 20 times a second for more than a second, we will see the hand in a unique, stable position.

We take 20 samples per second and put them into a ring buffer. We run at exactly 20 hertz with this modulo-6 release from the sequencer. We put the frames in a ring buffer, and now we have a machine vision service, a really simple one. All it does is look at a buffer of 20 frames and find the most stable frame. We've got an example of how to do that, and it's actually not very hard: you take difference images and apply a threshold, and you can tell whether the hand was moving or whether it was stable. You mark the stable locations; the most stable frame sits right between the two transitions. Somewhere in the buffer is a set of frames where the hand was in transition, another set where it was in transition again, and a most stable frame in between, where it definitely held a position. Then that happens over and over.
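Here is a minimal sketch of that difference-and-threshold stability test, assuming grayscale frames already sitting in a RAM-based ring buffer. The frame structure, buffer size, and threshold value are assumptions for illustration; the threshold in particular would be tuned empirically for your camera and scene.

```c
/* Sketch of the difference-and-threshold stability test over a RAM ring
 * buffer of grayscale frames. Sizes, names, and the threshold are
 * illustrative assumptions, not the course reference code. */
#include <stdint.h>
#include <stdlib.h>

#define RING_FRAMES   20        /* one second of 20 Hz acquisition */
#define FRAME_PIXELS  (640 * 480)
#define DIFF_THRESHOLD 5000UL   /* tune empirically for camera and scene */

struct frame {
    uint8_t  pixels[FRAME_PIXELS];
    uint32_t seq;               /* acquisition sequence number */
    int      stable;            /* metadata bit marked by this service */
};

static struct frame ring[RING_FRAMES];

/* Sum of absolute pixel differences between two frames. */
static unsigned long frame_diff(const struct frame *a, const struct frame *b)
{
    unsigned long sum = 0;
    for (size_t i = 0; i < FRAME_PIXELS; i++)
        sum += (unsigned long)abs((int)a->pixels[i] - (int)b->pixels[i]);
    return sum;
}

/* Mark each frame stable if it barely differs from its predecessor.
 * The most stable frame lies between the two runs of transition frames. */
void mark_stable_frames(void)
{
    for (int i = 1; i < RING_FRAMES; i++) {
        unsigned long d = frame_diff(&ring[i - 1], &ring[i]);
        ring[i].stable = (d < DIFF_THRESHOLD);
    }
}
```

The key point is that this touches only RAM and frame metadata; no I/O happens anywhere in this path.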
In the next second, you've got these stable positions again, with the transition frames in between. You just mark them and say, "This should be a good frame, based on differencing and thresholding." Now, you could probably combine these two services into one, but I wanted to show that it doesn't hurt to have a number of real-time services. They usually aren't that expensive. There is some overhead to create them, that's true, but remember, modern microprocessors have multiple cores running at a gigahertz; I wouldn't be overly concerned about that. We saw earlier that it's definitely a mistake to have too few services and to not have real-time, traceable services. So we'll overdo it here a little bit.

We have this frame-differencing service. It marks frames in the ring buffer, but it doesn't worry about actually figuring out which ones to use. Say we want to use this to control something; in our case, we're just logging. We might separate frame selection from the frame differencing and thresholding. We might run the frame differencing and thresholding at, for example, two hertz, which is modulo 60 from our sequencer. Twice a second it looks to see if it can mark some frames as the best frames, and mark others as ones to avoid, because that's when the second hand was moving. Remember, frames are being acquired at 20 hertz, so it looks at the 20-frame buffer every 500 milliseconds. This is actually your heavy hitter in terms of WCET; it probably has the highest worst-case execution time of the services you'd have. But having built this myself, I can tell you it's still going to take way less than 500 milliseconds. Remember, this is in RAM; it's not a file system or anything like that. It's a RAM-based ring buffer, and that's important: there's no I/O delay here to speak of at all. It marks frames just by updating bits in the metadata kept with each frame in the ring buffer.

The frame select service really only needs to run at one hertz, because in the end we only want one-hertz observing or one-hertz control. It just looks through the marked frames and makes sure it's selecting ones that form a good, continuous sequence in time, and it does that modulo 120. I also put these services at different rates just to make it interesting, and to period-transform them, as we've discussed in previous segments, so we've got 120 hertz, 20 hertz, two hertz, one hertz. They're all syslogging, so we can trace all of them; every single service syslogs so that we can validate this system.

The frame select service puts the frames we want for control, or observation, or in our case just logging, into a new buffer, to decouple from the I/O. There's going to be I/O over on the write-back side, and we never want to be coupled to I/O. The write-back can then be best effort, as long as there's enough slack time, or we can put it on its own core. Lots of solutions for that, no problem, especially today with multicore systems.

Now we have a very robust system. We don't need anything external; we can fly completely autonomously. We can duplicate things that might fail, like the oscillator. We can duplicate the whole microprocessor and have two instances of this software running and comparing answers. We'll talk about how to do that later. There is some complication in that, but in other words, there's no fundamental problem. We can solve any single-point-of-failure problem; we can, for example, add a second camera here if we're afraid the camera might fail. Good idea.
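As a sketch of what one of these traceable services looks like, here is a thread skeleton in C that blocks on its release semaphore and syslogs every release, assuming the semaphores from the sequencer sketch earlier. The work function and log format are hypothetical; the point is that the sem_wait loop plus syslog timestamps is what makes the service's actual timing validatable from the log.

```c
/* Sketch of a real-time service skeleton: block on the sequencer's
 * semaphore, do the work, and syslog every release so timing can be
 * validated after the fact. Assumes the semaphores shown earlier. */
#include <pthread.h>
#include <semaphore.h>
#include <syslog.h>
#include <time.h>

extern sem_t semFrameSel;             /* posted modulo 120, i.e. at 1 Hz */
extern void select_best_frame(void);  /* hypothetical work function */

void *frame_select_service(void *arg)
{
    struct timespec t;
    unsigned long release = 0;

    (void)arg;
    while (1) {
        sem_wait(&semFrameSel);       /* blocks until the sequencer posts */
        clock_gettime(CLOCK_MONOTONIC, &t);
        syslog(LOG_INFO, "frame_select release %lu at %ld.%09ld",
               ++release, (long)t.tv_sec, t.tv_nsec);
        select_best_frame();          /* pick a stable, in-sequence frame */
    }
    return NULL;
}
```

The other services (acquisition at 20 hertz, differencing at 2 hertz) would follow the same pattern on their own semaphores.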
Then frame acquisition can acquire frames from both cameras. It might then have two ring buffers, and then we might need a second frame-differencing service, and so on. So redundancy does complicate this, but it's pretty easy to add. The system stands alone. We don't need any global clock that we're relying upon. We don't have to worry about someone jamming GPS, or GNSS not being in range. My watch receives UTC, but it doesn't always receive it; it depends where I am. We're pretty fault-tolerant and standalone.

I argue that this is the best design with the equipment you have. There might be an even better design, but this is a real-time design; it's fault-tolerant and it works. We can duplicate anything that we're afraid might fail. It acquires and logs at a higher rate than is required, to meet our actual requirement of glitchless observing at one hertz, and it selects what it actually controls, observes, or logs at the rate required. So it meets the Nyquist criterion for observing. It does not require a global clock, it's not dependent upon anything external, and it's somewhat intelligent and can self-recover. It can check itself. We can look at syslog traces and see if it's working. If you implement this, I can assess whether your design is working just by looking at your log files. In the final course in this series, you're going to submit a log file, and that's how we'll know for sure whether your design is working or not. We'll also ask peers to review your design.

You can design a solution for this problem any way you want. I'm just presenting several designs: some that didn't work, some that do work. If you want to use a global clock to solve the problem, knock yourself out, but you're going to have to go buy a GPS receiver or set up Network Time Protocol, and that'll work. When we assess your designs, we will actually know, and your peers will know when they review your design. There are other solutions to this; I've sat down and talked with students about all the possible solutions. There are slight variations on this frame differencing and thresholding, but this basically works.

There is example code, by the way, using OpenCV, or just a straight driver interface, for acquiring frames, differencing them, and things like that, using imshow. The attraction of OpenCV is that you have imshow and some really nice features like that for the small amount of machine vision required in the courses that use a camera. I would stick with the C code API for OpenCV if possible. Remember, C++ just complicates things: there are potential issues with memory leaks, implicit constructors, destructors, etc. So most real-time systems try to stick to standard C types of code.

Once again, I'm stuck behind one of my diagrams, but this is one of the better, if not the best, solutions, and we'll leave it at that. We'll leave it to you to come up with your own solution, as long as you provide a glitchless observing log over a period of 30 minutes, far less than the 12-hour flight from San Francisco to Hong Kong. But if you want to see if your system works glitchlessly for 12 hours, knock yourself out. We will actually look at your frames too: the assessment code will look at your frames as well as your syslogs to figure out whether you actually produced a completely glitchless observing log. This could be used, for example, in some of the systems we looked at in the very early segments, like optical navigation and things for self-driving cars.
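For reference, here is a rough sketch of acquiring and differencing webcam frames with the legacy OpenCV C API mentioned above. The header paths, the threshold value, and the display pacing are assumptions; the exact details vary with the OpenCV version installed, and newer OpenCV releases have dropped this C API entirely.

```c
/* Sketch of frame acquisition and differencing with the legacy OpenCV C
 * API. Header names and threshold are assumptions for illustration. */
#include <opencv2/highgui/highgui_c.h>
#include <opencv2/imgproc/imgproc_c.h>
#include <stdio.h>

int main(void)
{
    CvCapture *cap = cvCaptureFromCAM(0);   /* first webcam, /dev/video0 */
    IplImage *frame, *gray, *diff, *prev = NULL;

    if (!cap) return 1;

    while ((frame = cvQueryFrame(cap)) != NULL) {
        gray = cvCreateImage(cvGetSize(frame), IPL_DEPTH_8U, 1);
        cvCvtColor(frame, gray, CV_BGR2GRAY);

        if (prev) {
            diff = cvCreateImage(cvGetSize(frame), IPL_DEPTH_8U, 1);
            cvAbsDiff(prev, gray, diff);
            cvThreshold(diff, diff, 30, 255, CV_THRESH_BINARY);
            /* Few changed pixels means the second hand held stable. */
            printf("changed pixels: %d\n", cvCountNonZero(diff));
            cvShowImage("diff", diff);
            cvReleaseImage(&diff);
            cvReleaseImage(&prev);
        }
        prev = gray;
        if (cvWaitKey(50) == 27) break;     /* ~20 Hz pacing; Esc quits */
    }
    if (prev) cvReleaseImage(&prev);
    cvReleaseCapture(&cap);
    return 0;
}
```

Note how the explicit cvReleaseImage calls illustrate the point about C versus C++: in the C API the allocation and release are visible and deterministic, with no implicit constructors or destructors.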
This could be the core software for something more complicated and more interesting. That's why I picked this project: it's simple enough to get done, and it doesn't require a lot of hardware, but it actually forces you to come up with a good real-time design. Thank you very much.