We heard some terminology in these videos, and I didn't want to just gloss over it, because when you get out into the working world you'll hear these terms. CapEx, capital expenditures, refers to purchasing physical things: pieces of hardware, manufacturing equipment, PCs, memory sticks, the whole gamut, things you can touch. OpEx, operational expenditures (I just gave it away), those are the costs associated with doing business. So that's paying your rent, heat, lighting, utilities, your salaries; all of that is what's considered operational expenses. SLA, Service Level Agreement: many businesses, when they engage with a network provider, will negotiate a Service Level Agreement that guarantees so many megabits per second, a certain latency, a certain jitter. So if you hadn't heard those, that's what those terms mean.

Now, talking about messaging schemes: how many of you are in the embedded systems engineering program? Maybe all of you. Okay. So, a polling scheme is where the software isn't using interrupts; it's going and reading registers to see if something interesting happened. I'm going to draw a picture for you. Hard drives: from my experience when I was at Seagate, we had a couple of ARM cores in there (I don't remember if they were R4s or R5s). There was a host interface block and a memory controller block, with a connection off-chip to DRAM, and data would flow this way to the host connection. These were SAS drives, Serial Attached SCSI drives. The read/write channel was back here, going up to the heads and the media if it was a hard drive; if it was an SSD, the flash devices and flash controllers sat in the back instead. Because hard drive performance, as I mentioned before, was relatively slow in comparison to SSDs, it was acceptable for the software running on these ARM processors to go read registers. Some of these little boxes represent control and status registers, and there was plenty of time in the budget for the software to go read a series of status registers in the host block and see if a new command had come in. This might be over something like an APB3 bus. Go read some control and status registers in the channel, and that was polling: go read, is there something to do? No. Nope. Oh, I've got something to do; now I have to take some action.

SSDs run at such high performance that there was no way this model was ever going to work. So what we did is we stood back and said, "All right, we're going to have a host interface here, I've got this memory controller sitting here in the middle, we've got the flash block, and over here we've got our CPUs." For the high-performance path, we completely threw out those polling schemes because we knew we could not meet our performance objectives. So we came up with this notion of push status. I don't know if we were the first ones ever to do it, but we talked about it, and I was involved with the architecture, working with a couple of other people; we were architecting the very first SSD drive. When a command came into the host interface block, a message was generated. There were tightly coupled memories (TCMs) sitting off of these CPUs, and we created hardware to build queues in those TCMs and push messages into them. So when something interesting happened in the host block, say a command was received, hardware would push status into the queue, the CPU would be notified, and everything it needed was immediately available in the TCM, which is a very fast memory. So there were no pipeline stalls in the CPU.
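To make that contrast concrete, here's a rough sketch of the two models in C. Everything in it is invented for illustration: the register address, the bit name, the queue depth, and the message layout are not from any real drive; they're just stand-ins for the idea of reading a slow status register versus consuming messages that hardware has already pushed into fast TCM.

```c
/* Sketch only: polling model versus push-status model. All addresses,
 * field names, and layouts here are hypothetical. */
#include <stdint.h>
#include <stdbool.h>

/* --- Polling model: firmware reads a status register over a slow bus. --- */
#define HOST_STATUS_REG   (*(volatile uint32_t *)0x40001000u) /* made-up APB3 address */
#define CMD_RECEIVED_BIT  (1u << 0)

bool host_command_pending_polling(void)
{
    /* Every check costs a slow APB3 read, whether or not anything happened. */
    return (HOST_STATUS_REG & CMD_RECEIVED_BIT) != 0;
}

/* --- Push-status model: hardware writes messages into a circular queue
 *     that lives in the CPU's tightly coupled memory (TCM). --- */
#define QUEUE_DEPTH 64u

typedef struct {
    uint32_t source;     /* which block produced this status (host, flash, ...) */
    uint32_t event;      /* e.g. "command received", "data transfer done" */
    uint32_t payload[2]; /* small amount of context pushed along with the event */
} status_msg_t;

typedef struct {
    volatile uint32_t head;           /* advanced by hardware as it pushes */
    uint32_t          tail;           /* advanced by firmware as it consumes */
    status_msg_t      ring[QUEUE_DEPTH];
} push_queue_t;

/* Firmware side: no slow register reads; the message is already in fast TCM. */
bool pop_status(push_queue_t *q, status_msg_t *out)
{
    if (q->tail == q->head)
        return false;                       /* nothing has been pushed yet */
    *out = q->ring[q->tail % QUEUE_DEPTH];  /* single fast TCM read */
    q->tail++;
    return true;
}
```

The point of the second half is that the firmware never goes out over a slow bus to ask whether anything happened; the answer is already sitting in its own tightly coupled memory.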
Same thing back here in the flash block: when something interesting happened, messages would get created and sent into these queues, the firmware would then operate on them, and then it would send commands back to move data, or send commands over here to move data. So commands could be sent to initiate data movement, and when interesting things happened in hardware, messages would come back into these circular queues, and we called that push status. Making that one simple change enabled us to get much, much closer to our performance targets.

So, polling: it's slow. It's high latency, especially over an APB3 bus, which generally runs at half or a quarter of the speed the CPU is running at. You can only have one transfer outstanding on an APB3 bus, if you're familiar with the ARM bus architecture, so you only want to use it for things you don't have to access very frequently. That's where we left off: polling is slow. So you find that polling is too slow and say, "Well, let's use interrupts. When something interesting happens, it will interrupt the CPU, and we'll go explore and read a bunch of status registers." But again, in real systems, if you've got 30, 40, 50 registers that you need to interrogate to figure out what's going on, that takes time. Also, there's all the CPU state that you have to save when you service an interrupt: you have to build a stack frame and write it to memory, go do your work, then restore all that state, pop those values back off, and return from the exception. So interrupts can be cool, but when the timing gets really tight, that starts to fall apart as well. It might work; it all depends on the requirements of your system. Okay, neither one of these worked for us. So we came up with this push status scheme, as I talked about and drew on the board over there: when critical control and status information needed to be conveyed, it was pushed point-to-point the moment it occurred, and we built dedicated hardware to make that happen.

The other messaging scheme is called publish and subscribe. We see this in the IoT space, and it works for many recipients, unlike push status. The push status notion is point-to-point: it's a piece of software talking to a piece of hardware, or one piece of hardware talking to another piece of hardware, directly over a dedicated point-to-point connection. Publish and subscribe is like that, but the recipients tell the sender, "I want to subscribe to messages from you, so when something interesting happens, send it to me," and I've got a cartoon picture that shows that. It's a super simple cartoon, but just imagine a whole bunch of nodes in an IoT system, maybe 50,000 of them, or 10,000, some high number, and they want to communicate with each other. So this node here sends a subscribe message to this node that says, "If you detect a certain type of event here, send me an update status message." So the nodes that want to hear, the receivers of status and control information, send a subscribe message to another node somewhere in the system. Then, when something interesting happens, the publish messages come back on these blue paths. So in a big mesh network, only the nodes that need to be informed receive these messages; this node, for example, is only requesting status information from these two. That minimizes network bandwidth: only the nodes that need to be told that something interesting is happening receive these messages.
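Here's that cartoon written down as a little C sketch. The node structure, the callback shape, and the event and value fields are all made up for illustration; in a real IoT system the subscribe and publish messages would travel over the network, but the pattern is the same: receivers register interest up front, and the publisher only notifies the nodes that asked.

```c
/* Sketch only: the publish/subscribe pattern in plain C. Types and
 * limits are hypothetical; a real system would do this over a network. */
#include <stddef.h>

#define MAX_SUBSCRIBERS 8

typedef void (*event_cb_t)(int publisher_id, int event, int value);

typedef struct {
    int        id;                            /* which node this is */
    event_cb_t subscribers[MAX_SUBSCRIBERS];  /* who asked to be told */
    size_t     num_subscribers;
} node_t;

/* A receiver says "if you detect this kind of event, send me an update". */
void subscribe(node_t *publisher, event_cb_t callback)
{
    if (publisher->num_subscribers < MAX_SUBSCRIBERS)
        publisher->subscribers[publisher->num_subscribers++] = callback;
}

/* When something interesting happens, only the nodes that subscribed
 * hear about it; nothing is broadcast to the rest of the mesh. */
void publish(node_t *publisher, int event, int value)
{
    for (size_t i = 0; i < publisher->num_subscribers; i++)
        publisher->subscribers[i](publisher->id, event, value);
}
```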
Here's a protocol called MQTT; you can read about it at MQTT.org. Who's heard of that? Okay, you. This one was new to me; I somehow missed it when I taught this class the last time, so I went out and have been reading about it. It's a lightweight publish-subscribe machine-to-machine protocol, and it looks like it can run over a variety of transports rather than being Bluetooth-only or ANT+-only; we'll learn about those protocols later. We're not going to dive into the details of MQTT. I just wanted you to be aware of it as an example of a publish-and-subscribe machine-to-machine protocol.
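If you want to poke at MQTT on your own, here's a minimal sketch of the subscribe-and-publish flow using the Eclipse Paho C client, which is one common client library (not something we used in the drive work). The broker address, client ID, topic, and payload are placeholders, and the calls are written from the library's published examples, so check the current Paho documentation before relying on any of it.

```c
/* Minimal MQTT subscribe/publish sketch using the Eclipse Paho C client.
 * Broker address, client ID, topic, and payload are placeholders;
 * verify the API against the current Paho documentation. */
#include <stdio.h>
#include <string.h>
#include "MQTTClient.h"

#define ADDRESS  "tcp://broker.example.com:1883"  /* placeholder broker */
#define CLIENTID "classroom-demo"
#define TOPIC    "sensors/temperature"
#define QOS      1

/* Called by the library when a publish arrives on a topic we subscribed to. */
static int on_message(void *context, char *topic, int topic_len,
                      MQTTClient_message *message)
{
    (void)context; (void)topic_len;
    printf("update on %s: %.*s\n", topic,
           message->payloadlen, (char *)message->payload);
    MQTTClient_freeMessage(&message);
    MQTTClient_free(topic);
    return 1;
}

static void on_connlost(void *context, char *cause)
{
    (void)context;
    printf("connection lost: %s\n", cause ? cause : "unknown");
}

int main(void)
{
    MQTTClient client;
    MQTTClient_connectOptions opts = MQTTClient_connectOptions_initializer;
    MQTTClient_deliveryToken token;

    MQTTClient_create(&client, ADDRESS, CLIENTID,
                      MQTTCLIENT_PERSISTENCE_NONE, NULL);
    MQTTClient_setCallbacks(client, NULL, on_connlost, on_message, NULL);

    if (MQTTClient_connect(client, &opts) != MQTTCLIENT_SUCCESS)
        return 1;

    /* Subscribe: "if you detect this kind of event, send me an update". */
    MQTTClient_subscribe(client, TOPIC, QOS);

    /* Publish: only subscribers to this topic are notified. */
    MQTTClient_message msg = MQTTClient_message_initializer;
    msg.payload    = "23.5";
    msg.payloadlen = (int)strlen("23.5");
    msg.qos        = QOS;
    MQTTClient_publishMessage(client, TOPIC, &msg, &token);
    MQTTClient_waitForCompletion(client, token, 10000L);

    MQTTClient_disconnect(client, 10000);
    MQTTClient_destroy(&client);
    return 0;
}
```

The broker plays the middleman role here: nodes subscribe to the topics they care about, and a publish on a topic is only delivered to that topic's subscribers, which is the same bandwidth-saving idea as the cartoon on the board.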