The takeaway here is that service concurrency within a network slice is what's significant. Take smart lighting as an example: these nodes are a perfect example of a no-mobility device with a relatively high latency tolerance. The impact is that under a sudden change in operational burden, say a rush of people making phone calls, sending text messages, or downloading video that puts a wave of demand onto the network, the smart lighting is not going to care much whether those lights turn on at 8:30 PM or 8:31 PM. It doesn't make a big difference. The system can reallocate those resources, so if the CPU gets congested, or the network gets congested and backed up, that traffic can wait. The macro effect is that there are cases where we can delay things. The examples there are a dawn-to-dusk indication, or a traffic-utilization indication during a quieter period, that allow us to do that. And we can still maintain the KPI, the Key Performance Indicator, needed to control those devices, even within a window that other services would consider unacceptable. A one-minute delay in turning on a street light is not a critical element. But a one-minute latency in a streaming video session where you're interacting with someone is not acceptable. Allocating resources in the network this way lets us manage those elements and gives us capacity for the bandwidth dynamics inside those slices; otherwise, we would have to oversize the network to handle all of it.

The other example we talk about here is the smartphone. Statistics tell us that, as human beings, our usage decreases at some times of the day and increases at others. During commute hours, use of those devices obviously increases. And once the commute hours are over, in many cases it's not that the use of the smartphone changes significantly; what changes is the ingress point. We may have transitioned from the macro network into a micro network, off the licensed spectrum of 4G or 5G and onto a Wi-Fi access point. What that means is that the network slice that was allocating those MMEs, the Mobility Management Entity, or the SGW and PGW functionality, can now be allocated to different things. At that point, we start to think about content-delivery functionality, where those devices are consuming video because we're at home using them as entertainment devices. And we're not mobile, so we can reallocate those network slices and utilize those platforms effectively in that environment.

So what are some of the challenges, if we believe this concept of network slicing is a good thing? And we do believe that network slicing is a very good thing. First of all, it enables flexibility and scalability, so that as use cases become dynamic, we can adapt the network without changing the platforms, scaling differently, or adding platforms into the network itself.
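To make that trade-off concrete, here is a minimal sketch, not any vendor's or standard's API, of how an orchestrator might defer delay-tolerant slice traffic under congestion while protecting latency-sensitive slices. The names SliceProfile and admit, and the latency budgets, are hypothetical illustrations of the smart-lighting versus video-streaming example.

```python
from dataclasses import dataclass

@dataclass
class SliceProfile:
    name: str
    latency_budget_s: float   # how long work in this slice can safely wait
    mobile: bool              # whether the slice serves mobile endpoints

# Hypothetical slice definitions for the two examples discussed above.
SLICES = [
    SliceProfile("smart-lighting", latency_budget_s=60.0, mobile=False),  # a one-minute delay is fine
    SliceProfile("video-streaming", latency_budget_s=0.05, mobile=True),  # ~50 ms or the session degrades
]

def admit(slice_: SliceProfile, network_congested: bool) -> str:
    """Decide whether to serve a slice's work now or defer it.

    Under congestion, only slices with a tight latency budget get
    resources immediately; delay-tolerant slices are queued, which is
    what lets us avoid oversizing the network for peak load.
    """
    if not network_congested:
        return "serve-now"
    return "serve-now" if slice_.latency_budget_s < 1.0 else "defer"

for s in SLICES:
    print(s.name, "->", admit(s, network_congested=True))
# smart-lighting -> defer
# video-streaming -> serve-now
```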
And we know that 5G is going to cause service utilization to evolve over time, just as it did with the introduction of 2G, 3G, and 4G: we thought we had a really good idea of what was going to happen in the network, and it evolved and changed over time. Nevertheless, we were able to show that, with some forward vision into the way these networks get deployed, we can adapt them. And we certainly believe that network slicing is going to give us more flexibility as the 5G use cases continue to evolve.

What are some of the challenges? One of the biggest is really understanding how to create those slices and then manage them: identifying at a macro level what a service is, identifying or dereferencing, as we sometimes like to call it, what that service is so we can associate it with a particular network slice, and then managing and monitoring the functionality of that slice as it is allocated in the network, whether per vertical, per application, or per quality of service. Going forward in time, you could also envision a case where a device itself identifies itself to the network, not just based on what the SIM card has preprogrammed into it, but with a configuration that says, look, I've been repurposed, and I would like this particular quality of service or this level of service from a network slice. That's certainly also one of the areas that may generate a monetization opportunity for the communication service providers. The sketch after this paragraph illustrates that service-to-slice mapping.

Additionally, there's a question about exactly where in the network we create these network slices. It's not likely today that we're going to create the network slice at the actual RF level, where we would reserve RF spectrum. There is some conversation about that; it's a little forward-looking at the moment, even with everything that's happening in 5G, but it's not completely out of scope. But certainly, at the local data center, once we get down to the base of the tower and start moving that information back into fixed resources, we can start looking at how to slice those things to allocate compute resources effectively. There's also a question about how many physical elements are required to support a slice, and how much headroom, because you do have to build the network with a little bit of headroom to accommodate rapid growth that was unanticipated. You certainly don't want to go back every two or three months to add resources; you want a cadence that's a little longer than that. And then there's the question of the revenue you can generate and the cost you incur based on how you have configured a network slice.

Optimization of the allocation is also a concept that is going to change over time as we begin to understand how the slices and the devices are utilized. As we said, as those 5G use cases evolve, it's going to be very important that we be able to tune and adjust the resources of those network slices in close to real time. And that may be, you know, every five or six hours. As we see that evolution take place, we want to move away from the model where we've got purpose-built platforms with a fixed resource capacity that is limited and static for the life of that platform in the network. Network slices give us the ability to make that kind of dynamic trade-off.
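Here is a minimal sketch of the "dereference" step mentioned above: mapping a declared service, or a device's own request, to a slice and a quality-of-service target. The catalog entries, field names, and the resolve_slice function are hypothetical and are not any 3GPP-defined structure.

```python
from typing import Optional

# Hypothetical catalog mapping a service type to a slice and its targets.
SLICE_CATALOG = {
    "smart-lighting":  {"slice_id": "iot-delay-tolerant", "latency_ms": 60_000, "throughput_mbps": 0.1},
    "video-streaming": {"slice_id": "embb-video",         "latency_ms": 50,     "throughput_mbps": 25.0},
    "voice":           {"slice_id": "voice",              "latency_ms": 20,     "throughput_mbps": 0.5},
}

def resolve_slice(declared_service: str, requested_override: Optional[str] = None) -> dict:
    """Resolve a device's service to a slice definition.

    'requested_override' models the forward-looking case where a
    repurposed device asks the network for a different level of service
    than the one preprogrammed into its SIM profile.
    """
    service = requested_override or declared_service
    try:
        return SLICE_CATALOG[service]
    except KeyError:
        # Unknown services fall back to a best-effort slice.
        return {"slice_id": "best-effort", "latency_ms": 500, "throughput_mbps": 1.0}

print(resolve_slice("smart-lighting"))
print(resolve_slice("smart-lighting", requested_override="video-streaming"))
```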
And as we said before, with things like the UE, if we build the network and we see that the traffic coming from the UE is going down, that doesn't mean the traffic on the network has gone down; it has just transitioned to a different type of traffic, a different workload, or a different resource. So we want to be able to adjust to that. And then there is the possibility, as I alluded to before, of optimizing the allocation of those interface resources and tying them to the network slice and its scheduling. That's a little more forward-reaching; we may see that in the 2020-2021 time frame as we gain some traction here, but it's not out of the conversation yet. And then finally, we're going to have some SLAs. We're going to have requirements for meeting latency targets or throughput targets. And there's going to be a mix, where we may be trading off some of those functional applications in order to bring resources to bear inside those slices.
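As a closing illustration, here is a minimal sketch of an SLA check that compares measured latency and throughput against a slice's targets and reports what is being missed, the kind of signal an orchestrator could use to pull headroom from a delay-tolerant slice. The SliceSLA type, thresholds, and measurements are illustrative assumptions, not tied to any specific monitoring system.

```python
from dataclasses import dataclass

@dataclass
class SliceSLA:
    name: str
    max_latency_ms: float
    min_throughput_mbps: float

def sla_violations(sla: SliceSLA, measured_latency_ms: float, measured_throughput_mbps: float) -> list:
    """Return the SLA targets this slice is currently missing."""
    issues = []
    if measured_latency_ms > sla.max_latency_ms:
        issues.append(f"latency {measured_latency_ms} ms > target {sla.max_latency_ms} ms")
    if measured_throughput_mbps < sla.min_throughput_mbps:
        issues.append(f"throughput {measured_throughput_mbps} Mbps < target {sla.min_throughput_mbps} Mbps")
    return issues

video = SliceSLA("embb-video", max_latency_ms=50, min_throughput_mbps=25)
print(sla_violations(video, measured_latency_ms=80, measured_throughput_mbps=30))
# ['latency 80 ms > target 50 ms']
```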