So, let's spend a few moments talking about visual cloud and how it comes into play inside our transformed network. To begin with, let's set a foundation and define what we mean when we say visual cloud. To do that, we're going to start with the term visual computing. Visual computing is a discipline inside computer science for handling images, video, and the modeling of those images in 3D. Visual cloud, not to be confused with visual computing itself, is visual computing applications. It takes that computer science discipline of visual computing and embeds it into applications that rely on the services we'd find in the cloud itself: cloud architectures, cloud-scale compute and storage, and ubiquitous broadband connectivity, to move the information we're operating on from the source, to the edge where it's being used or created, to the compute resources at the application level, and then back.

So, when we look at that ecosystem, the players are going to be the cloud service providers, the communication service providers, media and entertainment companies as sources of that information and as the processing for its manipulation, and then consumption or even origination in enterprise, government, and academia type environments. All of this provides the immersive media environment that is creating a tremendous wealth of data traffic inside the communication network that we've spent some time talking about.

So, let's dig a little more into this visual cloud, which again is visual computing applications relying on the services of the cloud, or of the communication service provider through the network. When we talk about those applications, we're really interested in the workloads themselves that utilize the resources behind those applications. Depending on exactly what that application is, there may be different locations inside the network where, for optimization or latency reasons, we want to provide those application resources. If we need very low latency, then chances are those resources need to be no further away than a base station. Think in terms of less than tens of kilometers, and in some cases a couple of kilometers, in order to meet a target of less than five milliseconds. That's a round-trip delay, so the transport itself is not a significant portion of that delay; it's on the order of a handful of microseconds. But the compute time itself needs to be compressed as well to meet that budget. A little further into the network, maybe we can absorb a 10 to 15 millisecond delay as we get closer to a regional or statewide data center, tens to hundreds of kilometers away, where latency approaches 50 milliseconds and we'd like to see less than 40 milliseconds. Then finally, deep into the network itself, or even in a continental type of operation, we've got hundreds or potentially thousands of kilometers to reach the very high-density processing capabilities inside our environment.

So, this entire media landscape, realizing these applications that go into the visual cloud, really does rely heavily on the Intel architecture, the CPU capability that we find in [inaudible] processors, and network card processing.
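Just to make those latency tiers a bit more concrete, here is a minimal sketch in Python of how a workload might be matched to a network tier based on its round-trip budget. The tier names, round-trip figures, and distances are hypothetical, loosely based on the numbers discussed above, and the function is purely illustrative rather than part of any particular product or API.

```python
# Illustrative only: pick the network tier where a visual-cloud workload's
# compute resources could live, given its round-trip latency budget.
# Tier names, RTT figures, and distances are hypothetical, loosely based on
# the numbers discussed above.

from dataclasses import dataclass

@dataclass
class Tier:
    name: str
    typical_rtt_ms: float   # rough round-trip delay when hosted at this tier
    typical_distance: str   # rough distance from the end device

# Ordered from the closest tier (lowest latency) to the farthest.
TIERS = [
    Tier("base-station edge", 5.0, "a couple of km, less than tens of km"),
    Tier("regional/statewide data center", 40.0, "tens to hundreds of km"),
    Tier("central/continental data center", 150.0, "hundreds to thousands of km"),
]

def place_workload(rtt_budget_ms: float) -> Tier:
    """Return the deepest (densest, cheapest) tier that still meets the budget."""
    candidates = [t for t in TIERS if t.typical_rtt_ms <= rtt_budget_ms]
    # If even the edge cannot meet the budget, the edge is still the best option.
    return candidates[-1] if candidates else TIERS[0]

if __name__ == "__main__":
    for budget in (5, 40, 150):
        tier = place_workload(budget)
        print(f"{budget:>4} ms budget -> {tier.name} ({tier.typical_distance})")
```

The design choice here simply mirrors the trade-off described above: push the workload as deep into the network as the latency budget allows, because that is where the densest and most economical processing lives.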
As well as, in some cases, application accelerators, whether those are on FPGAs, content-delivery type functions where media processing capabilities enhance the stream, or encryption processing for securing that information.

One of the things we occasionally lose sight of, unless we dig down into it, is that the source of the information, the storage of the information, and the consumption of the information may not natively be in the same format. That's where the application concepts of encoding and transcoding come in. Encoding says we're going to take that raw information, represent it digitally by some standard, whether a recognized standard or a proprietary one, and then have it consumed at some point. But if a device needs to consume it in a different format, a different encoding, that's where transcoding comes in. Transcoding is the function of decoding and then re-encoding. So, if you think about a media stream that has been manipulated into some format, I need to take that media stream and convert it back into what might be a raw format. I'm using my hands here to show that maybe the raw format is of greater capacity or greater information content. Then, I re-encode it into the consumption format. This is certainly one of the functions that takes place in a variety of media streaming, CDN, and even storage operations.

Sometimes that encoding and decoding needs to be lossless; that is, none of the original information is eliminated in the process. Other times, due to the nature of the information, we're allowed to be somewhat lossy. We'll allow some of the information from the original content to be lost, for the sake of expediency in performing that type of transcoding. Those types of services work really well for delayed-delivery operations. Sometimes we look at it in real time. But if you think about the CDN, there's a sort of delayed delivery where it's recorded; we're transferring the information through space as well as through time in that case. Then we can operate on it in storage, and extract it into a variety of outputs as we source it back out to the consuming point.
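To illustrate that decode-then-re-encode idea, here is a minimal sketch in Python that shells out to ffmpeg, assuming it is installed and on the PATH. The file names, container choices, and codec settings are placeholders, not a recommendation; the point is simply that the tool decodes the source toward raw frames internally and then re-encodes it for the consuming device, either losslessly or with some loss traded for size and speed.

```python
# Illustrative only: a minimal transcoding sketch that shells out to ffmpeg
# (assumed to be installed and on PATH). ffmpeg decodes the source stream back
# toward raw frames internally, then re-encodes it for the consuming device.
# File names and the chosen codecs/settings are just placeholders.

import subprocess

def transcode(src: str, dst: str, lossless: bool = False) -> None:
    """Decode `src` and re-encode the video as H.264 into `dst`.

    lossless=True keeps every bit of the original picture (much larger output);
    lossless=False trades some fidelity for size and speed, which is often fine
    for delayed-delivery / CDN-style distribution.
    """
    crf = "0" if lossless else "23"   # with libx264, CRF 0 is lossless
    cmd = [
        "ffmpeg", "-y",
        "-i", src,                    # decode whatever format the source is in
        "-c:v", "libx264", "-crf", crf, "-preset", "medium",
        "-c:a", "copy",               # pass the audio stream through untouched
        dst,
    ]
    subprocess.run(cmd, check=True)

if __name__ == "__main__":
    # Hypothetical file names: re-encode a mezzanine file for two destinations.
    transcode("source_master.mov", "delivery_lossy.mp4", lossless=False)
    transcode("source_master.mov", "archive_lossless.mkv", lossless=True)
```

In a CDN or storage workflow, the same source would typically be extracted into several such outputs, one per consuming device or delivery format, which is exactly the one-input, multiple-output pattern described above.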