Welcome back to this final section of the modern campus network design series. You've learned about all the previous topics, and now we're ready to talk about network security. I want to focus on the evolution and function of key network security technologies and techniques, starting with perimeter security.

Early in my network design career, businesses were largely contained within the walls of one or more office buildings with Internet access, so the perimeter, of course, was top of mind. We put robust, redundant firewalls at HQ and a firewall at each remote office. To enhance security, we configured an encrypted, hashed, and authenticated virtual private network, or VPN, tunnel between locations. To lock things down a bit tighter, you segment users into separate Layer 2 VLANs, with certain end users on the red VLAN, security cameras on the blue VLAN, IP phones on yellow, and printers on green, for example. For typical wired LAN designs, creating separate VLANs has long been a great way to minimize overhead traffic and set yourself up for improved security. Remember, broadcast, unknown unicast, and multicast, or BUM, traffic is flooded out all switch ports in the same VLAN, so fewer endpoints per VLAN means less BUM traffic overhead. Also know that a Layer 3 router defines the end of a broadcast domain, the end of an IP subnet. Thus, endpoints on the red VLAN can only communicate with printers on green via a router. We can write access control lists, or ACLs, to ensure that only employee VLANs can access the employee printers, and that only security cameras on the blue VLAN can access the security app running on the corporate server farm.

Of course, the chain of security is only as strong as its weakest link. During several of my security consulting engagements, I uncovered a lack of employee training and physical security. Still, this design worked well into the 1990s and even into the early 2000s. But there were challenges. Those classic firewalls began showing signs of weakness. Traffic control is largely based on allowing certain source IP addresses and TCP or UDP ports to reach certain destination addresses and ports. This works okay for basic applications that use a single address and port combination, but modern applications built on a microservices architecture aren't so simple. With multiple, often dynamically selected, address and port combinations, this address-and-port paradigm falls short; I'll show a small sketch of this in a moment. Advanced intrusion prevention services, or IPS, were also limited, perhaps added on with a separate appliance or tacked on as an additional software service that might limit firewall performance. The same can be said of antivirus and anti-malware services and the ability to do advanced URL filtering.

Thus the advent of next-generation firewalls, or NGFWs, with the ability to intelligently analyze traffic and identify applications, even those that use multiple, dynamically selected TCP and/or UDP ports. We can now tie application usage to the user's identity regardless of location or device, all with the ability to tie into central, web-based access control mechanisms like Aruba ClearPass. Plus, IDS and IPS capabilities are now an integral part of the industry's best firewalls without sacrificing performance. The same holds true of high-end antivirus and anti-malware mitigation with web proxy services and advanced URL filtering. This all serves to greatly simplify policy management based around business objectives.
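To make that classic address-and-port paradigm concrete, here is a minimal, purely illustrative Python sketch of first-match rule evaluation. The networks, ports, and rule entries are invented for the example and don't reflect any vendor's actual rule format.

```python
from ipaddress import ip_address, ip_network

# A classic firewall rule matches on source/destination network, protocol, and port.
# All networks, ports, and rules below are made up purely for illustration.
RULES = [
    # (src network,    dst network,      protocol, dst port, action)
    ("10.10.0.0/24", "10.50.0.10/32", "tcp",    443,  "permit"),  # red VLAN -> web app
    ("10.20.0.0/24", "10.50.0.20/32", "tcp",    554,  "permit"),  # blue VLAN cameras -> security app
    ("0.0.0.0/0",    "0.0.0.0/0",     "any",    None, "deny"),    # explicit deny at the end
]

def evaluate(src_ip: str, dst_ip: str, protocol: str, dst_port: int) -> str:
    """Return the action of the first rule that matches, top-down."""
    for src_net, dst_net, proto, port, action in RULES:
        if ip_address(src_ip) not in ip_network(src_net):
            continue
        if ip_address(dst_ip) not in ip_network(dst_net):
            continue
        if proto != "any" and proto != protocol:
            continue
        if port is not None and port != dst_port:
            continue
        return action
    return "deny"

# A simple application on a single, well-known address/port combination matches fine...
print(evaluate("10.10.0.5", "10.50.0.10", "tcp", 443))    # permit
# ...but the same application on a dynamically negotiated port falls through to the deny.
print(evaluate("10.10.0.5", "10.50.0.10", "tcp", 49152))  # deny
```

The second lookup is exactly the gap an application-aware NGFW closes: it identifies the application itself rather than relying on a fixed address and port.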
Then there are the other classic security weaknesses I mentioned: VLANs and access control lists, or ACLs. They're not going away completely, but the way we use them has greatly improved.

Consider VLANs. Large campuses have hundreds of access layer switches, each of which must be configured with several VLANs, and this configuration is static. You might devise some standard guidelines to bring more uniformity to your configuration. On a 48-port switch, you might reserve ports 45 through 48 for uplinks to aggregation layer switches, and maybe ports 35 through 44 for connectivity to access points. That leaves ports 1 through 34 to be assigned VLANs for various devices. So maybe you assign ports 1 through 25 to the red VLAN for PCs and laptops, some ports to the blue VLAN for security and surveillance, some ports to the yellow VLAN for IP phones, and some ports to the green VLAN for printers. Of course, this has to be a loose, flexible scheme that won't always work; some areas need more APs, while others need more PCs but have no need for security cameras. But at least you have some attempt at organization, at least at first. Inevitably, networks grow and needs change. Perhaps more users want wireless, so you need fewer red VLAN ports. The IP phone system goes company-wide, and now you need more yellow VLAN ports. You must manually go reconfigure ports for the new needs, and soon your nice, tidy standard is big, messy, and chaotic.

Even worse, this entire system is quite error-prone. Somebody from the building security team patches a security camera or IoT device into a port in the red VLAN. Now this inherently insecure IoT device is attached to the red VLAN with access to all internal employee applications. Of course, the IoT device doesn't work, because only the blue IoT security VLAN can access the IoT applications. More egregious, some bad actor might compromise that endpoint and use it as a launch point for malware, denial of service, or reconnaissance.

What about access control lists, or ACLs? You need to create a matrix of which sources can access which destinations, associate that with the current IP addressing and application port scheme, and then convert those business needs into a long list of permit and deny statements, as in the sketch below. These things can turn into monsters over time. Someone adds a set of permit and deny statements to allow certain users access to a new application; next year that application is decommissioned and a new application is put in its place. Some engineer troubleshooting an issue adds lines to an ACL and neglects to remove the statements. Maybe the wireless network has expanded, and formerly wired devices are now wireless with new IP addressing; the new addresses are added to the list, but nobody takes the time to clean up the old, unused statements in the ACLs. Nobody completely documented ACL usage and needs. After a few years, you have a tangle of permit and deny statements in individually managed ACLs spread across 40 or 50 aggregation layer switches and routers. I've had to untangle quite a few of these messes over the years, and it takes patience and time, often with some after-hours trial-and-error sessions. It's not pretty.
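Here is a small, invented Python sketch of that matrix-to-ACL expansion. The subnets, services, and the pseudo-ACL output lines are all made up for illustration and don't follow any particular vendor's syntax.

```python
# A made-up access matrix: which source subnets may reach which destination services.
# In real life this often lives in spreadsheets and tribal knowledge; here it's a dict.
ACCESS_MATRIX = {
    ("10.10.0.0/24", "employees"): ["10.50.0.10:tcp/443", "10.60.0.5:tcp/9100"],  # web app, printers
    ("10.20.0.0/24", "cameras"):   ["10.50.0.20:tcp/554"],                        # security app
    ("10.30.0.0/24", "phones"):    ["10.50.0.30:udp/5060"],                       # call control
}

def render_acl(matrix):
    """Expand the matrix into flat pseudo-ACL permit statements plus a final deny."""
    lines = []
    for (src_net, label), destinations in matrix.items():
        for dst in destinations:
            dst_ip, proto_port = dst.split(":")
            proto, port = proto_port.split("/")
            lines.append(f"permit {proto} {src_net} host {dst_ip} eq {port}  ! {label}")
    lines.append("deny ip any any")
    return lines

for line in render_acl(ACCESS_MATRIX):
    print(line)
```

Even this tiny matrix expands into a flat list that has to be maintained by hand, and nothing in the list itself tells you which statements are still needed once applications and addresses change.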
So with all these cumbersome issues with firewall rules, VLANs, and ACLs, what's the solution? It has to do with authentication. Let's take a look at a typical scenario: you have endpoints that must connect to an access switch or AP to gain appropriate network access. For tight security, these endpoints often run supplicant software that enables the use of the 802.1X network authentication protocol, and for wireless, in conjunction with the Extensible Authentication Protocol, or EAP. In this context, the switch or AP is called the authenticator, the device that controls initial network access. In many solutions, the AP forwards these authentication messages to its mobility controller, or MC, which acts as the authenticator. It is as if the authenticator says, "Should I grant access to this device? I'll ask the authentication server," and the authentication server is responsible for making the decision to grant or deny access. The switch, or the AP and MC, harvests the credentials carried in the 802.1X messages, creates a RADIUS message, and sends it to the RADIUS server.

Now, RADIUS is both an authentication service running on some server and a protocol used to communicate between the authenticator and the authentication server, and it is used specifically for end-user access to network services and applications. To control administrative access to the network components themselves, the switches, routers, and controllers, a TACACS+ service is often used. The RADIUS server might have a local store of usernames, passwords, and group memberships, especially in a smaller deployment. In this case, the server checks the datastore, validates the credentials, and sends an access granted or access denied message to the authenticator, which in turn notifies the endpoint. However, for larger, more scalable deployments, you have several authentication servers all accessing a common set of back-end servers that provide a common datastore. This is often a Microsoft server running Active Directory, or AD. This sets you up for single sign-on: the same credentials you use to access AD are also used to access the network itself. Nice. Some deployments might use a Lightweight Directory Access Protocol, or LDAP, database, and you might also have a certificate authority, or CA, service for even tighter endpoint and end-user authentication.

This is a common Network Access Control, or NAC, solution based around running AAA services on the authentication server. Let's walk through the three A's of the AAA service. First up is authentication: who are you? The RADIUS or TACACS+ server permits or denies access based on your credentials. Perhaps your name is Maria with password secret123. If those are valid credentials, access is granted. There may also be some certificate authority, or CA, services involved here. Now that you're on the network, what can you do? That's authorization. User Maria is a member of the marketing group, say, so she can access all standard business applications plus the marketing application and database. Other resources are off limits for Maria. We have controlled who can access the network and what they can do once they're there. Now we need accounting: what did they do? We have auditing, reporting, and tracking of user activity, when and from where they logged in, when they logged out, and what resources they accessed during that time. One thing we're trying to eliminate here is plausible deniability. If the log file indicates that you accessed an HR database at 2:00 AM, it's difficult to deny that fact. This is a fairly typical, industry-standard NAC solution.
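Here's a minimal Python sketch that ties the three A's together. The usernames, groups, and resource names are invented, and the local dictionaries simply stand in for what a real deployment would hand off to RADIUS or TACACS+ backed by Active Directory or LDAP.

```python
import datetime

# Toy credential and policy stores; all names below are invented for illustration.
USERS  = {"maria": {"password": "secret123", "groups": ["marketing"]}}
POLICY = {"marketing":   ["standard-apps", "marketing-db"],
          "engineering": ["standard-apps", "engineering-apps"]}
ACCOUNTING_LOG = []

def authenticate(username, password):
    """Authentication: who are you? (A real system would never compare plaintext passwords.)"""
    user = USERS.get(username)
    return bool(user) and user["password"] == password

def authorize(username, resource):
    """Authorization: what can you do? Allowed if any of your groups grants the resource."""
    for group in USERS.get(username, {}).get("groups", []):
        if resource in POLICY.get(group, []):
            return True
    return False

def account(username, resource, allowed):
    """Accounting: what did you do, and when? Record every decision with a timestamp."""
    ACCOUNTING_LOG.append((datetime.datetime.now().isoformat(), username, resource, allowed))

if authenticate("maria", "secret123"):
    for resource in ("marketing-db", "engineering-apps"):
        ok = authorize("maria", resource)
        account("maria", resource, ok)
        print(resource, "granted" if ok else "denied")
```

The accounting log is what removes plausible deniability: every access decision is recorded with a timestamp, a user, and a resource.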
But not all NAC solutions are created equal. In the case of the Aruba solution, one potential advantage is context. As client devices connect, they're profiled by ClearPass Policy Manager, so you have rich, context-based information about who is connecting, what they are using to connect, how they connected (wired, wireless, or remotely), where they connected from (HQ, a home office, or a hotel), and when they connected. This is all sent to the firewall, which uses it to control which traffic is allowed, and logs this information for future analytics and forensics.

Think of the evolution here. Classic security was based around "permit Jorge's PC to access the management database." Now we can say, "permit Jorge to access the management database, but only from HQ, between the hours of 9:00 AM and 5:00 PM, Monday through Friday, and only on a corporate laptop, never from a personal tablet or smartphone." Later, perhaps you migrate to a more robust endpoint management system and can change corporate policy to allow Jorge to access the database from certain personal devices. I'll close with a small sketch of this kind of context-aware rule at the very end of this section.

But it's even simpler than that. We are now far less concerned about static VLAN assignments and disparate ACLs. Instead, you can use a simple language of roles to define which bucket a given set of users should fall into. Since Jorge is a member of the engineering team, he's assigned to the engineering role upon login. Now, whatever port he connects to can be dynamically assigned to the engineering VLAN, and he is only allowed access to engineering-appropriate resources. Plus, you can have dynamic segmentation, where both wired and wireless traffic is tunneled to a mobility controller, so users have a consistent experience regardless of how they connect, with centralized, role-based enforcement.

That concludes this last section of the modern campus network design video series. I hope you enjoyed this high-level overview of design concerns, technologies, techniques, and systems. If you'd like a bit of a deeper dive, check out the other videos in this collection. You might follow up with a modern campus management techniques course, for example, for a more robust treatment of network management.
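As promised, here is a small, purely illustrative Python sketch of a context-aware rule in the spirit of the Jorge example. The role names, context attributes, and rule format are all invented for illustration; they are not ClearPass configuration or any firewall's actual policy language.

```python
from datetime import datetime

# An invented context-aware rule: "permit the engineering role to reach the management
# database, but only from HQ, on a corporate laptop, weekdays between 9:00 and 17:00."
RULE = {
    "role": "engineering",
    "resource": "management-db",
    "locations": {"HQ"},
    "device_types": {"corporate-laptop"},
    "days": {0, 1, 2, 3, 4},      # Monday through Friday
    "hours": range(9, 17),        # 09:00 through 16:59
}

def permitted(context, rule=RULE):
    """Evaluate one request against the rule using the session's profiled context."""
    now = context["time"]
    return (context["role"] == rule["role"]
            and context["resource"] == rule["resource"]
            and context["location"] in rule["locations"]
            and context["device_type"] in rule["device_types"]
            and now.weekday() in rule["days"]
            and now.hour in rule["hours"])

print(permitted({"role": "engineering", "resource": "management-db",
                 "location": "HQ", "device_type": "corporate-laptop",
                 "time": datetime(2024, 3, 5, 10, 30)}))   # True: Tuesday morning, from HQ
print(permitted({"role": "engineering", "resource": "management-db",
                 "location": "hotel", "device_type": "personal-tablet",
                 "time": datetime(2024, 3, 5, 10, 30)}))   # False: wrong location and device
```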