Welcome to this section on hardware ingredients, configurations, and what the appropriate stack looks like. We will start with the ingredients, and at the beginning, a little introduction to the difference between normal IT and cloud environments and what we are looking at in network function virtualization.

In cloud computing, a small amount of data comes in as requests, a significant amount of compute happens, and then a comparatively small amount of data goes out. SQL requests into a relational database are an example of such a workload. On the other side, in network function virtualization we are virtualizing network appliances, so we have a high rate of data plane traffic that in a number of cases simply passes through the element. The amount of data is high going in and high going out, often bi-directionally, while the corresponding compute per unit of data is much lower for NFV. This results in very different hardware configurations, which will be explained throughout this presentation.

Looking at where we are today with NFV: a lot of control plane workloads have been virtualized and in production for many years, while the number of data plane workloads in production is much lower. This is what we have been working on for some time, and we are now having good success putting it into production. The reason is that the control plane was technically easier to virtualize than the data plane, and we will explain why in a second. It also fit the IT server configurations that the usual IT departments were already purchasing from their vendors: technically, this workload looked like other IT workloads, so it mapped nicely onto hardware configurations that already existed.

The data plane, on the other side, is characterized by a high data plane traffic rate. Examples are gateways and routers as network elements, and we measure performance there in packets per second; for instance, 10 Gb/s of 64-byte frames is roughly 14.88 million packets per second. The hardware configuration required there is called, throughout this presentation, a data plane server. It is a normal IT volume server, but it needs an appropriate number of network ports in a configuration we will call balanced I/O, so that both CPU sockets are being fed with traffic. And because the software we use typically includes the Data Plane Development Kit (DPDK), where each poll mode driver spins on a core, we need an appropriate number of cores in the CPUs. So we use high core count CPUs with 20 or 22 cores: on the previous generation that was the Xeon E5, and on the current Xeon Scalable generation, parts such as the 6152 or 6138 are those 20-plus-core CPUs. The network adapters used there have multiple ports per adapter, with an even number of adapters feeding data plane traffic into the server, based on the 710 series Ethernet controller.

Here we summarize the major requirements on NFV infrastructure. For the server, we already described the components used to build it. Beyond the usual CPUs and adapters, we can add accelerators where needed and where they are beneficial: fixed function, such as the QuickAssist Technology already introduced in other presentations, or programmable, such as an FPGA. And we need to optimize the server for a high amount of traffic coming in from outside the server as well as between the virtual machines.
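Before we go on, a quick illustration of the balanced I/O point above. This is a minimal sketch, assuming a Linux host where PCI network devices expose their NUMA node through sysfs; the script and its pass/fail wording are ours for illustration, not a validated tool.

```python
#!/usr/bin/env python3
"""Report which NUMA node (CPU socket) each NIC is attached to.

A balanced I/O data plane server should have its data plane ports
spread across both sockets, so that neither socket's PCIe lanes or
memory controllers become the bottleneck. Illustrative sketch for
Linux, where PCI NICs expose their NUMA node via sysfs.
"""
import glob
from collections import defaultdict

def nic_numa_map():
    """Map NUMA node -> list of interface names, read from sysfs."""
    nodes = defaultdict(list)
    for path in glob.glob("/sys/class/net/*/device/numa_node"):
        iface = path.split("/")[4]
        with open(path) as f:
            node = int(f.read().strip())  # -1 means no NUMA affinity reported
        nodes[node].append(iface)
    return nodes

if __name__ == "__main__":
    nodes = nic_numa_map()
    for node in sorted(nodes):
        print(f"NUMA node {node}: {', '.join(sorted(nodes[node]))}")
    populated = [n for n in nodes if n >= 0]
    if len(populated) >= 2:
        print("I/O is spread across sockets (balanced).")
    else:
        print("All ports hang off one socket -- I/O is unbalanced.")
```

The same locality argument applies to DPDK core pinning: the poll mode driver threads serving a port should be pinned to cores on the socket that port is attached to, which is another reason the data plane server needs those high core count CPUs on both sockets.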
When virtual machines are chained into a service, there is also east-west traffic between them. And on top of the hardware, we cannot do just simple, basic workload placement; we need a proper orchestration layer. It has to be platform aware, so that it recognizes the platform configurations and places the workload appropriately, and it has to be service aware, meaning that when a service consists of multiple elements, it knows how to instantiate and configure them all until the whole service is available (there is a short sketch of platform aware placement at the end of this section).

Some of the differences to IT environments are, for example, that we need much more deterministic performance: much stricter key performance indicators on high throughput, on low latency, and on controlled jitter. Compared to most IT environments, the comms vertical also has much higher availability requirements, and with those come resiliency requirements, both for complete failures and for platform impairments. In addition, as a regulated vertical we face regulatory requirements; geolocation, for example, needs to be well determined. All of this makes it, both technically and in how it gets implemented in processes and organizations, a very different type of environment for data plane workloads.

We could spend a great deal of time optimizing all of these layers. The conclusion that emerged after many years of practice was: optimize where needed and abstract where possible. Putting proper abstractions between those layers makes onboarding easier, and lifecycle management over years of production becomes easier. Still, in a number of cases we will need to go in and do very detailed optimization of those stacks.
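To make the platform aware placement idea concrete, here is a toy sketch. All of the data structures and names are hypothetical stand-ins for what a real orchestrator discovers and matches on; it only illustrates the filtering logic, not any particular product's API.

```python
"""Toy platform-aware placement filter.

Illustrative only: the Host/Workload fields are hypothetical stand-ins
for the platform capabilities a real orchestrator would discover and
match on when placing a data plane workload.
"""
from dataclasses import dataclass

@dataclass
class Host:
    name: str
    free_dedicated_cores: int
    free_hugepages_gb: int
    nics_per_socket: dict  # socket id -> free data plane ports

@dataclass
class Workload:
    name: str
    dedicated_cores: int     # cores pinned to poll mode drivers
    hugepages_gb: int        # packet buffer memory
    needs_balanced_io: bool  # ports on both sockets

def fits(host: Host, wl: Workload) -> bool:
    """True if the host satisfies the workload's platform requirements."""
    if host.free_dedicated_cores < wl.dedicated_cores:
        return False
    if host.free_hugepages_gb < wl.hugepages_gb:
        return False
    if wl.needs_balanced_io:
        # Balanced I/O: at least one free port on each of the two sockets.
        if not all(host.nics_per_socket.get(s, 0) >= 1 for s in (0, 1)):
            return False
    return True

def place(hosts: list, wl: Workload):
    """Return the first host that fits, or None (toy scheduling policy)."""
    return next((h for h in hosts if fits(h, wl)), None)

if __name__ == "__main__":
    hosts = [
        Host("it-server-1", free_dedicated_cores=8, free_hugepages_gb=4,
             nics_per_socket={0: 2}),        # I/O hangs off one socket only
        Host("dp-server-1", free_dedicated_cores=32, free_hugepages_gb=64,
             nics_per_socket={0: 2, 1: 2}),  # balanced I/O
    ]
    vgw = Workload("virtual-gateway", dedicated_cores=16,
                   hugepages_gb=16, needs_balanced_io=True)
    chosen = place(hosts, vgw)
    print(chosen.name if chosen else "no suitable host")
```

A real orchestrator would additionally carry the service awareness described above: instantiating and wiring all the elements of a multi-element service, not just placing a single virtual machine.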