Hello everyone, and welcome back to Modern Campus Network Design, Part 2. I talked about trends and challenges in Part 1, and so now we move on to explore campus architectures. You see our three topics here, related to typical industry solutions, practices, and technologies. Let's start right in with campus architecture components.

Look at this simple yet typical campus architecture block diagram. Various wired endpoints connect to your access layer switches, along with some access points for wireless connectivity. These access layer switches provide layer 2 switching connectivity. Here's where you configure things like VLANs, initial quality of service (QoS) marking, and Power over Ethernet (PoE) features. A typical campus might purchase several dozen or a hundred access switches, so you're looking for a balance of cost versus port density and PoE capabilities, along with edge security and resiliency features. These devices do not provide layer 3 routing functions; we save that for the aggregation layer switches.

Each access switch is dual-homed to both aggregation switches for redundancy, and the aggregation switches have multiple connections between them for improved performance and resiliency. You use far fewer aggregation switches than access switches, but you need performance, reliability, and resiliency here. These aggregation switches serve as default gateways, or DGs, for each end-user VLAN. The DG routes packets from your local VLAN or subnet to the rest of the known universe, all other reachable destinations.

One such destination might be services hosted locally at the campus. With the Aruba solution, both wired and wireless users can be securely tunneled to redundant mobility controllers, which may in turn be controlled by a set of mobility conductors. This hierarchy creates great scalability, and since everything travels via encrypted tunnels to the controller, you minimize attack vectors and thwart bad actors. Other destinations might be off campus. Aggregation switches connect to one or more edge security devices, which then connect on out to the Internet, remote offices, and cloud-based services.

Now, this has been and continues to be a common campus architecture. However, other ideas are emerging. Some folks are talking about bringing layer 3 routing capabilities right on down to the access layer. Each method has certain advantages and disadvantages, but that's beyond the scope of this course; I just want to make sure you are aware of current trends and possibilities.

I'll simplify this diagram now to focus on the relationship between access and aggregation layers, a so-called two-tier campus design. This blend of economy and performance works great for small to medium-sized campus deployments, but many deployments grow over time and you may need to expand. You can separate this two-tier collapsed core into distinct aggregation and core layers to create a three-tier design. This offloads layer 2 processes from the core and provides better isolation of fault domains. For large deployments, this makes the system more manageable, stable, and scalable.

Here's why. As the campus expands, each building has its own set of access and aggregation switches, as you see here. These connect back to the core switches in the main building. These and the other blue links are all layer 3 routed links, a so-called routed core design. It's as if each building has its own layer 3 isolated two-tier system, connected to a high-performance redundant core.
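To make that default-gateway idea concrete, here's a minimal Python sketch of the forwarding decision a host makes. The subnet and addresses are purely illustrative assumptions, not taken from the diagram: traffic to the local VLAN's subnet goes directly to the peer, and everything else goes to the DG on the aggregation switches.

```python
import ipaddress

# Hypothetical addressing for illustration only: the host sits on an
# access-layer VLAN using 10.1.10.0/24, and the aggregation switches
# provide the default gateway at 10.1.10.1.
HOST_SUBNET = ipaddress.ip_network("10.1.10.0/24")
DEFAULT_GATEWAY = ipaddress.ip_address("10.1.10.1")

def next_hop(destination: str) -> str:
    """Where does the host forward a packet: directly on the local VLAN,
    or to the default gateway for 'the rest of the known universe'?"""
    dest = ipaddress.ip_address(destination)
    if dest in HOST_SUBNET:
        return f"send directly to {dest} (same VLAN/subnet)"
    return f"send to default gateway {DEFAULT_GATEWAY} for routing"

print(next_hop("10.1.10.50"))   # local peer on the same access VLAN
print(next_hop("203.0.113.9"))  # off-campus destination, handed to the DG
```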
Now, this routed design provides a certain natural layer 3 isolation, with additional isolation available from advanced routing protocol features like OSPF stub areas, route summarization, and route filtering. All the gray links are layer 2 switched links, as is typical with a two-tier design and the bottom two layers of a three-tier design.

In addition to these performance and isolation advantages, there are other motivations for a three-tier design, such as when the number of access switches exceeds the number of available aggregation ports to connect those switches, and when there are a limited number of fibers available between access switches and the main distribution frame, or MDF, that houses your core. You may also have distance limitations between access and core, depending on the type of fiber optics in use.

Note that I've shown a design where each building has its own access and aggregation switch sets, but this is all based on need. In some of my larger hospital designs and deployments, a single 12-story primary care building might have several aggregation and access sets, while smaller outlying buildings might have their own access switches but share a set of aggregation switches in one of the larger buildings in the vicinity.

Now, in these hospitals, as in most other deployments I've done, redundancy and resiliency are primary concerns, so let's look at that. You want layer 2 redundancy, so you connect switches with redundant links. The issue is that switches flood broadcast, unknown unicast, and multicast (BUM) traffic out all ports. Traffic loops around forever, creating broadcast storms and bringing your network to its knees. So you enable the Spanning Tree Protocol, or STP, which automatically detects potential loops and disables redundant links by placing them in a logical blocking state. If an active link fails, a blocking port is automatically re-enabled. Pretty cool, but this failover mechanism can be slow, and many of your links might sit unused. What a waste of bandwidth. These and other complexities lead us to look for alternatives to the Spanning Tree Protocol.

One method is to use link aggregation, or LAG, which bundles multiple physical links into one big logical pipe. Processes like STP perceive this as a single link, so no blocking occurs. You get performance and resilience without the complexities of STP. Then we expand these capabilities with multi-chassis link aggregation, or MLAG. Access switches 1 and 2 can run MLAG to AG 1 and AG 2.

Now, this is great for layer 2 redundancy, but what about layer 3? There are inherent layer 3 redundancy features baked into routing protocols like OSPF and BGP, but that's for another course. I want to talk about redundant default gateways for endpoints. Host 1 has IP address 10.1.1.100, with a default gateway of 10.1.1.2, AG 1. If AG 1 fails, host 1 is essentially down. True, an alternate default gateway, 10.1.1.3, is available, but host 1 won't use it until you manually reconfigure the host. Our savior here is the Virtual Router Redundancy Protocol, or VRRP, which presents the two aggregation devices as a single router with virtual IP address 10.1.1.1. That address is now configured as the DG for all hosts on this subnet, which are unaware of this trickery. AG 1 plays the role of 10.1.1.1, actively forwarding host packets. If AG 1 fails, AG 2 automatically takes over to maintain host connectivity. Nice.
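To illustrate why a LAG gives you bandwidth without blocking, here's a rough Python sketch of per-flow load sharing across member links. The port names and the hash inputs are assumptions for illustration; real switches use their own vendor-specific hash algorithms and fields.

```python
import hashlib

# Hypothetical 4-member LAG between an access switch and its aggregation pair.
LAG_MEMBERS = ["1/1/49", "1/1/50", "1/1/51", "1/1/52"]  # illustrative port names

def pick_member(src_ip: str, dst_ip: str, src_port: int, dst_port: int) -> str:
    """Hash the flow identifiers so every packet of a given flow uses the same
    member link (preserving packet order), while different flows are spread
    across all members, so no link sits idle in a blocking state."""
    key = f"{src_ip}-{dst_ip}-{src_port}-{dst_port}".encode()
    index = int(hashlib.sha256(key).hexdigest(), 16) % len(LAG_MEMBERS)
    return LAG_MEMBERS[index]

print(pick_member("10.1.1.100", "10.1.20.10", 51514, 443))
print(pick_member("10.1.1.101", "10.1.20.10", 49822, 443))
```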
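And here's a simplified Python sketch of the VRRP failover behavior just described, using the same addresses (virtual IP 10.1.1.1, AG 1 at 10.1.1.2, AG 2 at 10.1.1.3). It models master election by priority only, not the actual protocol state machine or timers.

```python
from dataclasses import dataclass

VIRTUAL_IP = "10.1.1.1"   # the DG that host 1 (10.1.1.100) actually uses

@dataclass
class Router:
    name: str
    real_ip: str
    priority: int          # higher priority wins the master election
    alive: bool = True

ag1 = Router("AG 1", "10.1.1.2", priority=200)
ag2 = Router("AG 2", "10.1.1.3", priority=100)

def vrrp_master(routers):
    """Return the live router with the highest priority; it answers for the
    virtual IP and forwards the hosts' packets."""
    live = [r for r in routers if r.alive]
    return max(live, key=lambda r: r.priority) if live else None

print(f"{vrrp_master([ag1, ag2]).name} is forwarding for {VIRTUAL_IP}")
ag1.alive = False   # AG 1 fails...
print(f"{vrrp_master([ag1, ag2]).name} takes over {VIRTUAL_IP}")  # ...AG 2 takes over
```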
These redundant solutions have worked fine for many years, but there are issues with complexity, manageability, and more. Aruba has taken these early attempts at redundancy and brought them into the 21st century with the Virtual Switching Framework, or VSF, and the Virtual Switching Extension, or VSX. Like VRRP, VSF and VSX present multiple physical devices as a single virtual device, but there's so much more.

For example, consider AP 1, dual-homed to the two physical switches. Normally there would be loops, perhaps requiring spanning tree protocols to mitigate. With VSF, the network is perceived to look like this: a single device with simple LAG connections to other devices. Now you configure one device instead of two, and those configurations are much simpler. Single control plane, single management plane, single data plane. You get reduced IT staff overhead at the campus access layer.

Now, VSX provides similar functionality, adjusted for unique aggregation layer objectives. There are far fewer switches here, so having a single management, control, and data plane is less important. But with so many access devices relying on this connectivity, uptime is vital. Separate planes mean that you can elegantly slide everyone over to AG 2 while you upgrade the firmware and reboot AG 1, and then vice versa. Nice, full OS upgrades with zero downtime.

When you look more closely at this, VSX switches use completely separate management planes, and nearly separate control planes except for certain layer 2 control functions, while the data plane is converged. There's a big advantage here. Other solutions often use a shared control and management plane, which can lead to a shared fate: if a calamity falls upon one switch, it can affect the other. That's not good for the required aggregation layer uptimes. You still get the simplicity benefits of unified management, with synchronized configurations and easier troubleshooting. The VSX pair is inherently perceived as a single layer 3 device, so there's no need for VRRP. We have distributed LAGs between access and aggregation, so there's no need for STP. Plus, the higher-end aggregation devices like the Aruba 6400 and 8400 series switches support in-chassis redundancy features as well.

I'd like to end here with a little bonus discussion about centralized versus local wireless switching. It's a key concept for campus deployments. Way back in the '90s when I started doing wireless deployments, most access points, or APs, were autonomous stand-alone devices. If you had 1,000 APs, each was configured and managed as a distinct network device. Pretty cumbersome. Each AP acted as a translational bridge, accepting layer 2 wireless frames, converting them into layer 2 Ethernet frames, and then switching them onto the wired network toward their ultimate destination. Autonomous, local frame switching.

Then the industry moved to controller-based Wi-Fi solutions, which centralized many management and control plane functions. Instead of having to manage 1,000 APs, you manage one or more controllers, which then manage the APs for you. The benefits for resilience, radio frequency (RF) management, scalability, and efficiency were tremendous. But still, APs typically continued to locally forward frames. Now, this works okay, but challenges remain. You need to control where that traffic can and cannot go: security.
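Circling back to the VSX zero-downtime upgrade described above, here's a hedged Python sketch of the orchestration idea only. The helper functions are hypothetical stand-ins, not any Aruba API; the point is the sequencing: shift traffic to the peer, upgrade and reboot one member, rejoin, then repeat on the other.

```python
import time

VSX_PAIR = ["AG 1", "AG 2"]

# Hypothetical helpers used only to make the rolling-upgrade sequence visible.
def drain_traffic_to_peer(member: str, peer: str) -> None:
    print(f"Shifting LAG traffic from {member} to {peer} (hosts stay connected)")

def upgrade_and_reboot(member: str, firmware: str) -> None:
    print(f"Upgrading {member} to firmware {firmware} and rebooting it")
    time.sleep(0.1)  # stand-in for the reboot window

def rejoin_and_resync(member: str) -> None:
    print(f"{member} rejoins the VSX pair and resynchronizes state")

def rolling_upgrade(firmware: str = "X.Y.Z") -> None:
    """Upgrade one VSX member at a time; the peer keeps forwarding the whole
    time, so the aggregation pair as a whole never goes down."""
    for member in VSX_PAIR:
        peer = next(m for m in VSX_PAIR if m != member)
        drain_traffic_to_peer(member, peer)
        upgrade_and_reboot(member, firmware)
        rejoin_and_resync(member)

rolling_upgrade()
```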
This means a set of access control lists, or ACLs, and firewall rules at all or most network devices between the AP and the ultimate destination for the traffic, at least up to the Internet edge under your control. See, your management and control planes are largely centralized at the mobility controller, but your security structures are all over the place. It's inefficient and error-prone in a way that can open potential attack vectors.

Solutions like those from Aruba alleviate these decentralized security issues and replace them with centralized access and policy control. Instead of locally switching traffic, the APs tunnel Wi-Fi traffic to one or more centralized controllers. All this traffic can run through policy engines to control where traffic can go, how much bandwidth it can use, and more. Remember, this all rides inside an encrypted tunnel, which minimizes potential attack vectors for things like reconnaissance and data theft. The controller then forwards traffic to the destination. This centralized policy and control eliminates the inefficiencies of decentralized access lists and tightens up your security stance.

All this sounds nice, but let me play devil's advocate here. I had worked with other solutions when I was first introduced to Aruba's centralized switching paradigm, and it may be a little uncomfortable at first. I thought, "What about suboptimal pathing?" Well, most paths are naturally optimized anyway. When that wireless PC accesses local apps and services, traffic follows the same path as if it were locally switched. Remember, the tunnel is a virtual construct; the physical traffic flows up through the access and aggregation switches and then over to the servers, whether locally or centrally switched. Plus, modern network designs can handle the slight uptick in traffic; over thousands of deployments, it's just not a concern.

Now, what about controller performance, scalability, and redundancy? These controllers are designed to handle the load, and you can cluster multiple controllers for huge scalability and redundancy. But what if I do have a very sensitive device that requires absolutely minimal delay and jitter? Well, you can still do local switching when needed for those supersensitive devices.

That's it for the campus architecture section. I hope you come on back to watch the next part in this modern campus design series, all about cloud and edge technologies.
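To picture the centralized policy idea, here's a small Python sketch: one rule table evaluated in a single place (the controller) for all tunneled traffic, instead of ACL fragments scattered across every device in the path. The roles, destinations, and rule format are made up for illustration and are not an Aruba policy syntax.

```python
# Hypothetical role-based rules evaluated centrally at the controller.
POLICY = {
    "employee": [
        ("allow", "internal-apps"),
        ("allow", "internet"),
    ],
    "guest": [
        ("deny",  "internal-apps"),
        ("allow", "internet"),
    ],
    "iot-camera": [
        ("allow", "video-server"),
        ("deny",  "any"),
    ],
}

def evaluate(role: str, destination: str) -> str:
    """First matching rule wins; anything unmatched is denied by default."""
    for action, target in POLICY.get(role, []):
        if target in (destination, "any"):
            return action
    return "deny"

# Traffic arrives at the controller through the AP's encrypted tunnel,
# is classified by user role, and is then forwarded or dropped centrally.
print(evaluate("guest", "internal-apps"))   # deny
print(evaluate("employee", "internet"))     # allow
print(evaluate("iot-camera", "internet"))   # deny (falls through to the 'any' rule)
```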