Hello everyone, and welcome back to the third part of the Modern Campus Network Design series. You learned about modern design trends and challenges in the first part, and then you explored campus architectures in the second. Now the focus is on Cloud and Edge technologies, solutions, and philosophies. We'll do an overview, talk about key concepts, compare Cloud and Edge technologies, and highlight the advantages of a hybrid approach.

Cloud computing is the result of decades of evolving data center concepts, such as time sharing, utility computing, and pay-per-use. Cloud computing is computing as a service. It is delivered on demand, often on a pay-per-use basis, through a Cloud services platform. The Cloud isn't a physical device; it is a method of managing IT resources that replaces local computing and private data centers with a virtual infrastructure. In the Cloud model, users access virtual compute, virtual network, and virtual storage resources that have been made available online by a remote provider, featuring rapid and automated provisioning. That's quite useful for companies that need to scale their infrastructure up or down quickly in response to fluctuating demand.

A Cloud infrastructure can be managed much more efficiently than traditional physical infrastructure, which typically requires that individual server, storage, compute, and networking components be procured and assembled to support an application. With Cloud infrastructure, DevOps teams can deploy infrastructure programmatically as part of an application's code. Cloud infrastructure is scalable: you can easily spin up new services and capacity as needed and drop them during slower periods, and these services can typically be managed from a single dashboard.
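To make "deploy infrastructure programmatically" a bit more concrete, here is a minimal sketch of the idea behind infrastructure as code: the desired infrastructure is declared as data that lives alongside the application, and a provisioning step reconciles the real environment against it automatically. This is not any real provider's API; the resource names, specs, and the `reconcile` helper are invented for illustration.

```python
# Desired infrastructure declared as data, checked into the app's repo
# (names and specs are hypothetical, not a real cloud provider's schema).
desired = {
    "web-server": {"type": "vm", "cpus": 2},
    "app-db": {"type": "managed-db", "size_gb": 50},
}

def reconcile(current, desired):
    """Return the actions a provisioner would take to reach the desired state."""
    actions = []
    for name, spec in desired.items():
        if name not in current:
            actions.append(("create", name, spec))
        elif current[name] != spec:
            actions.append(("update", name, spec))
    for name in current:
        if name not in desired:
            actions.append(("delete", name, current[name]))
    return actions

# Starting from an empty environment, everything gets created.
print(reconcile({}, desired))
```

The point of the sketch is the workflow, not the code itself: because the infrastructure is described declaratively, spinning capacity up for busy periods and dropping it afterward is just a change to the declaration.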
The National Institute of Standards and Technology, or NIST, has defined Cloud computing as a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources that can be rapidly provisioned and released with minimal management effort or service provider interaction. According to NIST, cloud computing has five characteristics.

Broad network access: the computing resources are available over the network, which can be an internal network or the Internet. The resources can be accessed in a standard way so that different platforms, such as PCs, mobile devices, or thin clients, can use them.

On-demand self-service: a consumer can provision the computing resources themselves as needed, fully automatically, without human interaction with each service provider.

Resource pooling: the computing resources are pooled so that they can be served to multiple users at the same time for multitenancy.

Rapid elasticity: the computing resources can be elastically provisioned and released on demand. To the user, the computing resources might appear to be unlimited.

Measured service: the computing resources are metered. The information from the metering system can be used to match the resources to the demand, or for billing purposes.

Cloud infrastructure is not so different from typical data center infrastructure, except that it is virtualized and offered as a service to be consumed via the network. Server compute, infrastructure, storage, applications, and security are all key components of cloud infrastructure. The cloud consists of frontend and backend platforms and the delivery network that connects them. The frontend platform can be an end-user device, such as a PC, tablet, or mobile phone, or it can be a computer network.
The backend platform is the virtualized infrastructure or software applications being accessed via the delivery network, which is typically the Internet, a WAN connection, or maybe a VPN tunnel. The setup, configuration, monitoring, and optimization of virtual machines and Cloud services all fall under the heading of Cloud infrastructure management. These activities typically happen through a web-based interface or APIs.

Cloud computing has become popular because it has benefits compared to traditional computing. The first benefit I think of is financial. Cloud computing provides a consumption-based pricing model. Instead of investing in predefined hardware and software resources, you only pay for the resources you actually use. This minimizes upfront capital expenditures, or CAPEX, so you don't pay for resources you don't use.

Another benefit is elasticity. Resources can be increased or decreased on demand. The resources can be scaled up, adding more CPU, storage, or network capacity to an existing compute node, or scaled out, adding more compute nodes for an application. Often, scaling happens automatically based on a set of rules for an application. Suppose a news website posts a popular article: the system can automatically increase capacity, then decrease it as the article gets fewer hits.

Another big benefit is rapid deployment. Resources can be deployed through an easy-to-use interface, often automatically. You get automatic updates for patches and operating systems, keeping things current and secure. Plus, cloud providers often offer prepackaged application services that ease the deployment of new applications, like machine learning, big data analytics, and the Internet of Things, or IoT. Organizations use cloud services to test and develop software applications. Because cloud infrastructures can easily be scaled up and down, organizations can save cost and time on application development. What about delivering software on demand?
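The rule-based elasticity described above, where the news site scales out under load and back in as traffic fades, can be sketched as a simple policy: compute how many instances are needed for the current demand, clamped between a floor and a ceiling. The capacity figure and limits are invented for illustration, not any real autoscaler's defaults.

```python
def desired_instances(requests_per_sec, capacity_per_instance=100,
                      min_instances=1, max_instances=20):
    """Return how many instances a rule-based autoscaler should run."""
    needed = -(-requests_per_sec // capacity_per_instance)  # ceiling division
    return max(min_instances, min(max_instances, needed))

print(desired_instances(950))  # article goes viral: scale out to 10 instances
print(desired_instances(120))  # traffic fades: scale in to 2 instances
print(desired_instances(0))    # quiet period: stay at the floor of 1
```

Real autoscalers add smoothing and cooldown timers so that brief spikes don't cause instance churn, but the core idea is this small rule evaluated continuously against metered demand.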
Software as a Service provides software on demand, which helps organizations offer users the software versions and updates whenever they need them. You can save, back up, and restore data: data protection can be done cost effectively and at very large scale by transferring data to an external cloud storage system, and the data can then be accessed from any location. You can analyze data: the data from all the organization's services and users can be collected in the cloud, and cloud services such as machine learning can then be used to analyze that data and gain better insights for more and better decisions. You can implement new services and applications: organizations can quickly gain access to the resources they need to meet their performance, security, and compliance requirements. This makes it easier to develop, implement, and scale applications.

Now, the concept of cloud computing is available through two basic cloud computing models. One model is a public cloud, a shared cloud infrastructure that is owned, maintained, and managed by a cloud provider like Amazon Web Services or Microsoft Azure. Benefits of public cloud include on-demand scalability and pay-as-you-go pricing. A private cloud runs behind a firewall on your enterprise intranet or is hosted in a data center that is dedicated to your organization. It can be fully configured and managed according to your company's needs. A hybrid cloud is a blending of the two: you leverage the strengths of each cloud model to enable flexibility and scalability while protecting sensitive data and operations. You should understand the strengths and weaknesses of public and private clouds to appreciate the advantages of a hybrid approach. In general, the public model has the following features: no upfront investments, subscription-based and flexible pricing based on SLAs, a fully managed service, and it allows organizations to focus their investments on growth instead of IT.
Public cloud consumption models are suitable for dynamic, growing computing needs, or when an organization needs an innovative service that takes a lot of expertise, like machine learning. A public model might introduce security and compliance issues, and for certain very large-scale deployments, operational costs can be high. Now, this is often offset by the low CAPEX or upfront costs.

A private cloud is dedicated to a single organization. It is single-tenant from the perspective of the top-level organization, but the business units, or BUs, might perceive it as a multi-tenant system. The single tenancy might involve higher initial costs, or CAPEX, because resources cannot be shared with other tenants. A private cloud may be located on-premises, or off-premises hosted by a service provider. With an on-premises private cloud, data is of course stored at that site, which could make it easier to control compliance issues. The problem with an on-premises private cloud is that the organization must have the skills to manage the cloud and is exposed to all the risks of managing the cloud service: service availability, compliance, scalability, and technology updates. Managing a cloud requires a different skill set than being excellent at managing IT infrastructures. The cloud concept requires acting on a higher conceptual level and the ability to keep up with fast-changing cloud technology.

Hybrid clouds have at least a private and a public cloud side to the architecture; multi-cloud architectures include more than just one private and one public cloud. Hybrid clouds are desirable because they can deliver the best of both the private and public cloud infrastructures by letting the organization move workloads back and forth between the two platforms. Applications can be partitioned to reside in both the public and private cloud. Now, in a hybrid environment you must decide where to place which part of a workload.
Now, these decisions are very dependent on the situation and will influence the user experience. The main factors that drive workload placement decisions are user experience, cost, security, compliance, and legislation. For instance, for a big data solution, the data might be placed in the Cloud for economic reasons, but the user interface might be put as close to the user as possible, meaning on-premises. In another situation, for a customer database, the users might be sales representatives who are not at the office; they're all over the country on mobile devices. In this case, the user interface could be placed in the public Cloud, while the database service itself runs on-premises.

Another example of this hybrid approach relates to Aruba's Edge Services Platform, or ESP. Aruba has architected networks to support a Cloud-centric compute model for some time now: data moves from the devices over a network to apps in the Cloud, the Cloud and mobility era that most networks are architected to support. But the Edge is the opposite of this, highly distributed to all the places we live, work, and play. To convert this valuable data into new operational efficiencies, Edge-generated data must stay at the Edge to be analyzed and acted upon, improving latency, economics, and compliance. We unite compute, storage, networking, and advanced applications at the Edge for new and improved business outcomes, thus the Intelligent Edge. Now, I'm thinking about IoT and security data, contact tracing data, and much more. But the Edge still connects to the Cloud, and it's a tightly integrated relationship. The Edge collects data and uses it to take action, but relies on AI and ML models in the Cloud. Consider this IoT example. These IoT devices and applications require quick, near real-time decision-making. Some sensor detects something like temperature, pressure, or a fluid level.
Perhaps this is some maximum level, indicating that an actuator should shut off a heat source, open a pressure relief valve, or close a water valve. If all your intelligence is in the Cloud, that signal must travel up through access, aggregation, and other layers to the Cloud for data analysis, a decision is made there, and the result is then signaled back down to the actuator. With applications like these we need answers now; we want intelligence at the Edge. It's not just IoT, by the way. I did a large hospital deployment where nursing staff used a tablet-based app to maintain patient awareness, and when lives are at stake, optimal network performance at the Edge is vital.

To make the Intelligent Edge effective in driving business outcomes, you still need the Cloud. You can't simply upload any old legacy application to the Cloud and expect modern outcomes. Applications must be designed with modern service-oriented architectures so the app and the Cloud can coexist in a 1 plus 1 equals 3 relationship. You need a Cloud designed to support Edge services, and you need network components designed to leverage those Cloud services. This is one of the key ideas behind Aruba's Edge Services Platform, or ESP, an AI-powered, Cloud-native platform that unifies all networking domains, wired, wireless, and WAN, and all locations, branch, campus, HQ, data center, and even remote workers and more, all in a single platform.

Aruba Central is a Cloud-native single pane of glass for Aruba ESP operations. It helps you deploy networks faster and resolve problems quickly, freeing up resources for more meaningful work. Now look at the advantages of a Cloud-based solution like this. All domains and locations are managed from a common platform, which facilitates the use of AI and ML solutions like Aruba's AIOps. This is all in addition to the other Cloud-based advantages: relieving IT staff of maintaining these systems, and the ability to quickly scale up as necessary.
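Coming back to the IoT example from a moment ago, the value of intelligence at the Edge can be sketched as a tiny local decision loop: the safety action is taken immediately at the Edge, while the reading is still queued for the Cloud's AI and ML models to analyze later. The threshold, the valve action, and the queue are all hypothetical, invented purely to illustrate the pattern.

```python
# Hypothetical safety limit for a pressure sensor (illustrative only).
MAX_PRESSURE_KPA = 800

def handle_reading(pressure_kpa, actuate, cloud_queue):
    """Act locally and immediately; forward the data to the Cloud asynchronously."""
    cloud_queue.append(pressure_kpa)       # telemetry for Cloud ML/analytics later
    if pressure_kpa >= MAX_PRESSURE_KPA:   # local, near real-time safety decision
        actuate("open_relief_valve")       # no Cloud round trip in the critical path
        return "acted_locally"
    return "normal"

actions, telemetry = [], []
print(handle_reading(650, actions.append, telemetry))  # normal
print(handle_reading(815, actions.append, telemetry))  # acted_locally
print(actions)  # ['open_relief_valve']
```

The critical decision never leaves the Edge, so latency is bounded by the local loop, while the Cloud still receives every reading to refine the models and thresholds over time.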
You get intelligence at the Edge, where the action is, for efficient data flow and responsiveness, plus centralized, Cloud-based Aruba Central for management, helping to provide a unified infrastructure augmented with artificial intelligence operations, or AIOps, and zero trust network protection. Well, that about wraps it up for Cloud and Edge technologies. We did an overview and a comparison, and talked about the hybrid approach. Come on back for the next video in this Modern Campus Network Design series, where we focus on network management systems, or NMS.