This concept of Cloud, where we have public, private, hybrid, and then this idea of an edge. We've architected networks to support a Cloud-centric compute model for some time now, where all that data flows from our devices over a network to apps in the Cloud. The old Cloud and mobility era that most networks are architected to support follows this more traditional line. But the Edge is the opposite of this. It's highly distributed. This is where users are working from home, working on the road; they're mobile. You have users accessing the infrastructure from anywhere. It is highly distributed to all the places that we live, work, or access this information. To convert this valuable data into new operational efficiencies and revenue streams, Edge-generated data must stay at the edge to be analyzed and acted upon, in order to improve agility, for economics, and in some cases for compliance. It would be amazing if we could have the features of the Cloud brought to the Edge. This is what we refer to as an Intelligent Edge. What Aruba is striving to do is to unite compute, storage, networking, and advanced applications as close to the edge as possible for new and improved business opportunities and outcomes. Thus the Intelligent Edge. But the Edge still connects to the Cloud, and it's tightly integrated in that relationship. There's some use of this software-defined term that you may have heard over the last decade or so. But ultimately, what we're going to dive into is how some of these strategies, like SASE (S-A-S-E, which we have some slides on coming up), look to improve upon the Edge and really coordinate between the services in the Cloud and the infrastructure that you own: Aruba switches, Aruba access points, Aruba controllers, which are at the Edge themselves. A comparison here with IoT devices like sensors and actuators, these kinds of Internet of Things devices, whether they're wired or wireless, over here on the left-hand side.
These devices require, in some cases, very quick, near real-time decision-making. Some sensors may detect something like a change in temperature, pressure, or fluid level. Perhaps this is some maximum level indicating that an actuator should shut off a heat source, open a pressure relief valve, or close a water valve. If all your intelligence is in the Cloud, that signal must travel up through the access, aggregation, and other layers to the Cloud in order to apply data analysis and artificial intelligence (which is really just more scripting) to decide how we should react. A decision is made and then a signal is sent down to the actuator. With applications like this, we need answers immediately; we need them now. You would like to have something that's as intelligent as possible at the Edge. If you can put that intelligence down at the access layer, at this edge layer here, then your IoT device could be hugely more reactive and ultimately more aware. Take the example of a large hospital deployment where you have nursing staff using a tablet-based app to maintain patient awareness. When lives are at stake, optimal network performance at the Edge can be vital. To make this Intelligent Edge effective in driving business outcomes, you still need the Cloud; you still need the orchestration, the intelligence, the scripting or applications that are in the Cloud itself, so it's distributed, it's compliant, it's secure, and it's available. But you can't simply upload an old legacy application to the Cloud and expect modern outcomes. Applications must be designed with modern service-oriented architectures so that the app and the Cloud can exist in a 1 plus 1 equals 3 relationship. You need a Cloud designed with Edge services, and indeed network components that are designed to leverage those Cloud services. This is one of the key ideas behind Aruba's Edge Services Platform, or ESP, that we're going to talk about.
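To make the sensor-and-actuator example concrete, here is a minimal sketch of that kind of threshold logic running at the edge rather than in the Cloud. The names and threshold (`MAX_PRESSURE_KPA`, `open_relief_valve`) are hypothetical illustrations, not part of any Aruba API:

```python
# Hypothetical edge controller: react to a pressure sensor locally
# instead of waiting for a round trip up to the Cloud and back.

MAX_PRESSURE_KPA = 800  # assumed safety threshold for this example

def open_relief_valve():
    """Stand-in for a real actuator command."""
    return "relief valve opened"

def handle_reading(pressure_kpa):
    # The urgent decision is made at the access layer, in near real time.
    if pressure_kpa >= MAX_PRESSURE_KPA:
        return open_relief_valve()
    # Routine readings can still be forwarded to the Cloud
    # for analytics and longer-term trending.
    return "forwarded to cloud for analysis"

print(handle_reading(825))  # urgent: acted on at the edge
print(handle_reading(410))  # routine: sent upstream
```

The point of the sketch is the placement of the `if`: because the comparison happens on the edge device itself, the actuator fires without the latency of the access, aggregation, and Cloud layers.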
Cloud versus Edge in a hybrid approach: an artificial intelligence (AI)-powered, Cloud-native platform that unifies all network domains, wired, wireless LAN, and WAN, is held in Aruba Central. It brings locations such as branch, campus, HQ, different data centers, and even remote workers onto a single unified platform. Aruba Central is the Cloud-native single pane of glass for Aruba ESP operations, helping you deploy networks faster and resolve problems quickly while freeing up resources for more meaningful work. This gives you that intelligence at the Edge: the ability to apply policy through a single pane of glass in the Cloud, and then that policy is acted upon through your Aruba devices at the Edge, your Aruba switches, your Aruba access points, and your Aruba gateways. This Edge Services Platform, or ESP, is the industry's first AI-powered platform designed to unify, automate, and protect the Edge. We're going to dive deeper into ESP in a more practical sense in the next series of videos, the Part 2 series that we'll get into. But here's a summary of the features. You have Aruba AIOps, and that's central to helping automatically and continuously optimize network performance. This uses artificial intelligence, machine learning, and network- and user-centric analytics to preemptively deal with issues before they happen. In other words, it can proactively react to problems and fix them as you've already predetermined, or at least notify you proactively when things are just not behaving the way they typically would in your environment. It quickly allows root cause analysis so you can resolve your tickets and issues very fast, continuously optimizes configs, eliminates change guesswork, and lets you see and secure what's actually on the network. Then unified infrastructure, such as your Aruba access points, your Aruba switches, your Aruba gateways, and 5G and IoT devices.
Keeping a consistent experience for users, whether they're on the wired or wireless side. This can be done whether the user is at a campus, at a branch, accessing the data center, or accessing remotely on the road. Then lastly, we have Zero Trust. This allows our customers to address the increasing security threats from the sheer number of IoT devices that are out there, while also simplifying the process through a software-defined architecture for wired, wireless, and SD-WAN. This adaptive trust framework allows tunneling from the wired side through dynamic segmentation, this idea of universal ports, where no matter where you plug in, your profiles will follow you, and very granular device visibility. Even for devices that aren't necessarily logging in, we still have methods to identify not only who's on that device but also what exactly the device is that's accessing the network. This Edge Services Platform, this ESP, provides artificial intelligence and responsiveness here at the edge in order to optimize data flow and improve operational efficiency, with centralized, Cloud-based management using Aruba Central. This helps to provide a unified infrastructure, augmented with artificial intelligence operations, which is the AIOps, and Zero Trust network protection. No matter what's plugging in, we're still validating exactly what it is and that it's not being spoofed. This is Aruba's solution, or approach, toward that Intelligent Edge that we're striving for. Now, taking a step back here and looking at Cloud regions and availability, you have some major regions across the world. You have the US West and East, Europe West, Central America, and Australia. These regions are independent geographic areas. A region should have at least one availability zone, but most of the time a region will have more than one. An availability zone is an isolated location within a region with its own networking, power, and cooling.
The idea is, you want the zones to be geographically separated enough that if there's a major outage on the eastern seaboard, the western seaboard shouldn't really be impacted. Or if there's an earthquake or a natural disaster, there's typically a way to keep at least one data center up and running. At the same time, you want to have enough zones (east, west, central) so that your latency is optimized. High availability and data locality are the key drivers there. With high availability, you're accommodating corporate expansion into other regions and countries, and your services can then be spread across multiple zones or regions. Data locality places apps as close to users as possible to improve the user experience. If I'm in Australia and I'm trying to access some labs that are being hosted over here in the United States, I'm going to have a bit of latency in what I'm clicking on and using. It's not terrible these days, it's pretty decent, but it's still going to be noticeable. If I could have servers in Australia hosting the labs, and those labs could be made available in any of these regions on demand, simply do a back-end transfer and they're up and running, that'd be great. A much better experience for the courses that we run over here. Now for Cloud terminology. We have Cloud applications: web-based software programs or hosted applications, such as those offered by a software-as-a-service vendor. You have Cloud brokers: an intermediary that has access to several Cloud services, like a managed service provider, I suppose. You can basically tell them what your needs are, and they'll navigate the waters of which service would be best for you and assist you in setting those up. Cloud management platforms maximize efficiency and reduce operational costs, bringing everything together into a unified dashboard.
This is your front end for whatever Cloud services you're interacting with, either through a browser, like the AWS EC2 console, or through a back-end tie-in through an API, where it's scripted. You simply request more services and, poof, those servers are already pre-programmed and ready to spin up. Then ultimately, Cloud migration: moving your applications that were hosted on-premises to the Cloud, either to be public from then on out or to be in a hybrid type of deployment. Cloud native refers to applications that are developed specifically for the Cloud; they're not really meant to be hosted on-premises. Cloud service providers: virtualized data centers offering Cloud computing services to customers, typically through self-service platforms. Again, Amazon EC2 would be a good example of that, if you're familiar with it. Services can range from raw infrastructure, where you just want the hardware, to platforms, to software-as-a-service applications. A container is an interesting term: containers are a type of virtualization that enables the virtualization of software applications by providing lightweight runtime environments, almost like sandboxes, but a step further than that. They include everything the app needs in order to run, making them highly portable. Many of these containers can be spun up almost instantaneously, as long as the operating system they're running on is the same. It's an extremely easy way now to spin up the same basic web server that you need 12 times with very little overhead. Then hybrid Cloud combines your public and private Cloud into a seamless, blended infrastructure. We've talked about this already. Hypervisor, if you're not familiar with this: this is a piece of software.
It's an operating system that you would install on a physical chassis, a physical piece of hardware like a server, and that server then would be able to provide a virtualized environment where you can take the server resources (the CPU, the RAM, the storage, the network) and divide all of that up into dozens, if not hundreds, of virtual machines. The management system allows the VMs to share the hardware resources of the server you installed it on. If I have a brand-new piece of metal, I install a hypervisor like ESXi from VMware or Hyper-V from Microsoft, and after that, I can install virtual machines. Measured services: the Cloud provider monitors and meters resource usage and bills accordingly. Middleware: a software management layer that sits between whatever application you're trying to run (a CRM, an email service, or whatever) and the actual network itself. It enables networked devices to communicate and is often used to support complex distributed systems; we won't get too much into that. Microservices architecture: small, modular programs that are linked together to build complex applications. They're self-contained, agile, and can be individually deployed and updated. Because they're self-contained, they're extremely quick to spin up. Then multicloud: businesses often use more than one Cloud provider, where one might be your IaaS, another might be your PaaS, and one might be both; it just depends. These days, it's not just one provider that you sink all your resources into. Multi-tenancy is the ability of a company to take those resources and divvy them out according to a department, another customer, or whatever the needs are. Resources are dynamically assigned, typically according to demand. But there is such a thing as VM sprawl, where, if you're not tracking it, you could end up spinning up virtual machines that you have no idea are actually running, and that can be very expensive and, from a security standpoint, not good.
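As a rough illustration of multi-tenancy's dynamic assignment, here is a toy model, not any vendor's actual scheduler, where a fixed pool of resources is divided among tenants in proportion to their current demand:

```python
# Toy multi-tenancy model: divide a fixed resource pool among
# tenants in proportion to what each is currently requesting.

TOTAL_CPU_CORES = 64  # assumed size of the shared pool

def allocate(demands):
    """demands: {tenant: requested share}; returns whole core counts."""
    total = sum(demands.values())
    return {tenant: TOTAL_CPU_CORES * share // total
            for tenant, share in demands.items()}

# Engineering is asking for twice what the others are, so it
# receives half of the pool; the rest is split evenly.
print(allocate({"engineering": 2, "sales": 1, "guest": 1}))
```

When demand changes, you simply call `allocate` again with the new numbers, which is the "dynamically assigned, typically according to demand" behavior described above.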
Serverless computing enables developers to run only independent functions when an event is triggered, instead of full applications. Then server virtualization is the big one. This is the actual virtual machine: once you've taken a server and virtualized it, that virtual machine can be abstracted and, via vMotion or live migration, moved from one physical server to another, making it very resilient. The server as a virtual machine is technically detached from the physical machine it's on. You can certainly set rules to say only one of these VMs can run on a given server at a time, or that if this machine is virtualized on this server, these other virtual machines should not run on the same server, because they might contend for resources or there may be some policy that doesn't allow that. But ultimately, the idea of virtualization is that, other than those exceptions I listed, it's just very flexible. We don't even really do change management or anything; we let the resources level themselves out so you have better utilization of the hardware that you're on, rather than running one little application on a big server and letting those resources go underutilized. For more Cloud terminology, we have software-defined infrastructure and workload. I mentioned software-defined earlier, as in software-defined networking. Generally, as a real summary of that terminology, the one tenet of software-defined is that it's controlled by a script or an application, so there's typically very little human involvement. Apps specify and configure needed hardware as part of their code, and this is a building block of all Cloud technology. This is the idea of a software-defined infrastructure: building your connections virtually, building your load balancers, your firewalling, your networking and switching. All of that can now be done in a virtualized way. Not just servers, but networking equipment.
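To make the "apps specify and configure needed hardware as part of their code" idea concrete, here is a purely hypothetical sketch of software-defined infrastructure: the application declares what it needs as data, and a script, not a human, turns that into resources. None of these names come from a real provider's API:

```python
# Hypothetical software-defined infrastructure: the app declares
# what it needs; a provisioning script builds it with no human steps.

desired_infra = {
    "web_servers": {"count": 3, "cpu": 2, "ram_gb": 4},
    "load_balancer": {"count": 1},
    "firewall_rules": [{"allow": "tcp/443"}],
}

def provision(spec):
    """Pretend to build each resource; return what was 'created'."""
    created = []
    for name, cfg in spec.items():
        # Dict entries carry an explicit count; list entries are
        # one resource per item (e.g. one firewall rule each).
        count = cfg["count"] if isinstance(cfg, dict) else len(cfg)
        created.append((name, count))
    return created

print(provision(desired_infra))
```

The design point is that `desired_infra` is versionable text: change the `count` from 3 to 6, rerun the script, and the infrastructure follows the code, which is the "very little human involvement" tenet mentioned above.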
Then lastly, workload. I talked a bit about how virtual machines are going to be distributed across systems. A workload is a discrete computing task within the context of running an application. Application workloads can then be distributed across different systems. If you see that the demand on your email service is rising to, say, 80 percent, additional servers can be spun up until that demand gets pushed down to a more comfortable level, like 60 percent. Then, as people log off through the day, those extra servers can be spun down to save on cost. That's going to be it for our discussion on Cloud terminology. I know that was a long one; thank you for your patience. In the next section, we're going to look at Cloud products and compare the key technology philosophies and solutions that are available out there. Thank you very much for your time. I'll see you back in just a minute.