Let's take a look now at some of
the functionality in this scope, operating at the edge of the network. We're going to talk a little bit more about this, but again, client devices for the most part, as we should understand, are not going to be under the control of the communications service provider, but they provide that ingress and egress point, or, if you would, the termination. They're going to have access, whether that access is wireless through the macro network; wireless through some type of unlicensed spectrum such as Sigfox or LoRa; Wi-Fi access for some of the IoT devices we spoke of before; or even ingress through an MSO, some type of set-top box or device that provides cable access. We're not going to spend a whole lot of time talking about that, but certainly we recognize that it is an ingress point. And from a user perspective, whether that's a handset user, a device user, or the device itself at an enterprise level, there is an agnostic aspect to it. What we really want to look at is what happens as we get into that internet access network and then to the edge of the network itself: what we mean by functioning at the edge of the network, and the functionality that takes place there, before we begin looking into the functionality in the regional network or the data center.

So we're going to start focusing on those elements that belong at the edge of the network. First of all, where is the edge of the network? Sometimes we get into a bit of a debate about exactly how far to the left of that previous diagram we're talking about when we say the edge of the network, but certainly we're going to exclude the end devices themselves from that conversation. Maybe we pick it up with the on-premises equipment, the CPE, if it's wired; if it's wireless, then maybe it's carrier-grade Wi-Fi. And certainly, as we start looking at the functionality inside the RAN itself, we have access elements we can talk about as being at the edge of the network, and microcells certainly fall into that area as well.

But what we really mean when we talk about the interest in the edge is being able to take interesting compute resources, not just transport resources, move them closer to that end-user point, and consider that the edge. Sometimes we break the edge up into different areas, the near edge, the middle edge, or the far edge; you'll sometimes hear that type of description. The idea is bringing interesting compute resources from deeper in the core of the network out to that edge so we can do things like reduce latency or reduce the load on other elements inside the network, to improve performance and improve the user experience. Sometimes it's also to provide resources to that enterprise or that consumer that may be more interesting to them, and hence more valuable to them, in that environment.

So what are some of the key drivers, then, for the functionality that belongs at the edge, whether it's the far edge, near edge, or middle edge? The first is low latency. Sometimes we can't achieve that without taking functionality and moving it closer to the endpoint, and that's simple physics: if we've got to carry the bits a long way, there's a transport time that takes place. Reducing that transport distance is going to reduce that latency.
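As a rough back-of-the-envelope illustration of that physics (a sketch using assumed, round numbers, not figures from this course): signals in optical fiber travel at roughly two-thirds the speed of light, about 200 kilometers per millisecond, so the distance to the compute resource by itself sets a floor on round-trip latency.

```python
# Back-of-the-envelope propagation delay (illustrative, assumed numbers only).
# Signals in optical fiber travel at roughly 200,000 km/s, i.e. ~200 km per ms.
SPEED_IN_FIBER_KM_PER_MS = 200.0

def round_trip_ms(distance_km: float) -> float:
    """Round-trip propagation delay in milliseconds over a fiber path of distance_km."""
    return 2 * distance_km / SPEED_IN_FIBER_KM_PER_MS

# Hypothetical placements: a distant regional data center, a metro site, a far-edge site.
for label, km in [("regional DC", 1000), ("metro edge", 100), ("far edge", 10)]:
    print(f"{label:>12} ({km:>4} km): {round_trip_ms(km):5.2f} ms round trip, propagation only")
```

Even before any queuing or processing time, a workload hosted 1,000 kilometers away would by itself use up a five- to ten-millisecond budget of the kind discussed below for AR and VR, which is why moving compute closer, and not just adding bandwidth, matters at the edge.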
We may also reduce the movement of the massive amount of data generated by IoT-type devices, and maybe we can reduce costs. If we've got a large storage requirement for whatever reason, it may be more efficient to handle it locally. I gave you a manufacturing example before, and we talked about how it may be reasonable to put more storage or more functionality close to that user without extending it to other users or other data centers around the area. So if you've got a factory or an automation site and you need storage as a service, being able to put that closer to the edge may be an advantage from an infrastructure and management standpoint.

Data sovereignty is a significant issue in some geographies. Again, when we look at large communications service providers who may have a multinational footprint but still face national data sovereignty requirements, edge computing can come into play when we've got to keep information specific to a location because of those requirements. Privacy is very similar: it's a different driver at an administrative level, but it comes into play the same way. Data privacy may drive us to store information closer to the edge, closer to the user, or closer to the endpoint because of a security concern, or a regulatory requirement in some cases.

Context awareness is another bit of functionality. We talked about platooning of trucks before; in that case, it may be possible to begin spinning up workloads ahead of anticipated use. Where we know those devices are moving and can predict their pattern, we can provide resources in sort of a leapfrog fashion: spin the resources up in the area those devices are moving towards before they consume them, and then spin them down, further back in the network, once they're no longer needed. There are also a variety of scenarios where connectivity is going to be challenged or limited, and in those cases, being able to provide services at the edge may be the answer.

At the end of the day, it's about a better experience, it's about managing the network more efficiently, and it's about improving our ability to meet the needs of those customers by providing faster transactions. That's a big motivation behind what happens at the edge; it's not just hype. We are certainly seeing applications in e-gaming, in sports activities and sports stadiums, and in AR and VR, where we need very low latency. This is a physiological issue: the human ear is really good at integrating things out, buffering across sampling time, but our visual system doesn't integrate out the same way across those larger time intervals. We've got roughly a five- to ten-millisecond window for some AR and VR uses, whether using a headset or interacting with some device, to keep us from getting disoriented because of latency. So by moving functionality closer to the edge, we can meet those requirements. There are a variety of other examples, from hospitality to manufacturing, assisted driving, smart cities, and smart infrastructure: all of these have the potential to place demands on the network that can be optimized by taking bits of functionality and driving them closer to the edge.

[INTEL JINGLE]