All right, and welcome back. My name is Tyler McMinn with Aruba, and this is our Aruba Cloud Basics Part 1. We're continuing on with network management systems and orchestration, looking at configuration management, automation, and orchestration tools to get a better understanding of how Aruba software and hardware works in conjunction with some of our Cloud services. Let's get started.

A quick note here on some classic network management tools, such as Simple Network Management Protocol (SNMP), Syslog, and individual device management: the idea of just logging in to a switch or an access point and adjusting the configuration through a secure shell or through some sort of browser. Syslog would be kind of a backup of logs generated by the device, and Simple Network Management Protocol would use products like AirWave or some third-party products built on that SNMP platform. Modern network management systems are going to use security orchestration, automation, and response, or what we call SOAR systems, or security information and event management, or SIEM, systems. These terms are often used interchangeably. You will also see a lot of software-defined orchestration, which relies on some scripting, through the use of Python or some other scripting language. This all feeds into this idea of artificial intelligence or machine learning programming. That is a reactive way of dealing with events that occur on the edge of your network, with that intelligence being programmed through some central orchestration or a cloud-based tool.

With modern network management systems, SOAR and SIEM, these build on logging mechanisms like Syslog, from the previous section, or SNMP. Think about it: dozens, hundreds, maybe even thousands of devices are all sending this logging information in various formats, including Syslog, packet info, SNMP traps, system status and traffic statistics, user accounting, ACL logging, and device tracking, including deep packet inspection and even advanced monitoring through telemetry when you're talking about wireless clients or devices connecting to the edge of your network. All this information is being fed into this system, so are you as an administrator going to wade through dozens of log types from thousands of devices and try to derive something useful? Instead, what you would do is deploy a security information and event management system, which can collect and ultimately aggregate, correlate, and analyze this information so that you can perform post-event forensics or respond to alerts that may occur throughout your system. The challenge then is really just tuning this SIEM system to tell what is a baseline, normal response or behavior compared to what would be considered abnormal.

On the topic of security orchestration, automation, and response, or SOAR, systems: the terms SIEM and SOAR are used interchangeably, but there are some distinctions. A SOAR enables you to collect and input monitoring from your security operations, or SecOps, team, including information from the SIEM system and other technologies. This way you can orchestrate tedious and repetitive tasks as far as how you would want to respond. This adds your intelligence and policy to your workflow. As a result, you get a more efficient response, helping with case management, triage, and remediation.
While SIEM intelligence is largely focused on technical information and artificial intelligence and machine learning, SOAR is more of a unified technology dealing with how the people, or how your policies, are going to want to respond and what processes you would normally follow. Understand that this hard line between the two often blurs in the real world, where both may have some overlapping artificial intelligence and machine learning capability.

AI and machine learning: these modern capabilities are largely based on these two functions of AI and ML. The analytics are done using AI, which uses machine learning models to identify potential problems, outcomes, and risks, both proactively and reactively. These AI and machine learning systems are then used to identify the best solutions to ensure continuous, secure network operations and increase your overall situational awareness. These tools are available to help manage traditional network systems, are often cloud-based, and may be able to integrate solutions from various vendors. But it would be nice if most of this integration and deployment work was actually done for you. That brings us to modern network management solution tools, or AI operations (AIOps) with machine learning models.

Your data input comes in from devices that are connected to your network: edge devices, anything doing deep packet inspection, anything that can grab advanced monitoring; even Syslog, packet info, and SNMP traps could also be used. Especially within Aruba, we're trying to gather as much information from every possible Aruba device that we can. The amount of data available to the system is huge, and AIOps is going to leverage all this data, as previously discussed. This is then fed into intelligent low-level models, which enrich the raw data and serve it up to the high-level models. The high-level models, the AI insights portion, look at use cases to analyze this information, identify root causes, make recommendations, and provide automated responses. Adding greatly to this power is anonymized peer data from thousands of solutions, support cases, and other telemetry-based sources. All of this, fed into these models, vastly increases the intelligence. In other words, if you are working with a mid-tier hospital with 100 beds or something like that, we have other customers whose metadata, statistical information, and telemetry data can be compared to see at what rate these types of events would happen, and that also allows for quicker OS and software enhancements.

Taking a look at the AI engine next, we'll dive into this. But one classic example a sales engineer spoke of in one of my courses was a high school that was running this type of user behavior analytics, and an event happened where one of the laptops in the teachers' lounge or wherever was going online at 2:00 in the morning and uploading gigs and gigs of data. Now, it didn't break any rules, it didn't trigger any firewall responses or anything like that, but given that this was abnormal behavior, an insight was made and a proactive response was issued to say, "Hey, this event happened. This is something you should look at. This is the laptop, this is the time of day, these are the websites, this is the data." It was able to bring this to the attention of the school.
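Just to make the low-level and high-level model idea a little more concrete, here is a minimal sketch in Python, with invented field names and thresholds, of how a low-level stage might enrich raw telemetry records and a high-level stage might turn them into an insight like the one in that high school story. It is only an illustration, not how Aruba's AIOps engine is actually implemented.

```python
# Sketch only: low-level model enriches raw telemetry, high-level model
# produces an insight. Field names and thresholds are made up.
from datetime import datetime

raw_events = [
    {"client": "laptop-23", "time": "2023-05-02T02:04:00", "bytes_up": 8_500_000_000},
    {"client": "printer-7", "time": "2023-05-02T10:15:00", "bytes_up": 120_000},
]

def enrich(event):
    """Low-level model: add derived fields to the raw record."""
    ts = datetime.fromisoformat(event["time"])
    event["hour"] = ts.hour
    event["gb_up"] = event["bytes_up"] / 1e9
    return event

def insight(event):
    """High-level model: flag large uploads outside business hours."""
    if event["gb_up"] > 1 and not (7 <= event["hour"] <= 18):
        return f"{event['client']} uploaded {event['gb_up']:.1f} GB at {event['hour']:02d}:00"
    return None

for alert in filter(None, (insight(enrich(e)) for e in raw_events)):
    print("INSIGHT:", alert)
```

Run against those two sample records, only the large overnight upload gets flagged, which is exactly the kind of "nothing broke a rule, but this is abnormal" insight described above.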
The way we start with this is by first looking at the telemetry, where we're actually gathering this information. Well, in the case of the laptop, it was wireless, so we grabbed it off the access point, which was feeding it through a controller, which was feeding that through Mobility Conductor. That telemetry information was then being fed up into either AirWave or, in this case, an on-premises solution that has now been integrated into Central. The data input here is millions of network devices, client devices, and customer configs. This purpose-built telemetry from Aruba devices is continuously enhanced by Aruba domain knowledge and machine learning models, models that have been built from this huge customer base over the years. In other words, looking for your type of business or your type of company, and then matching that to what normal behavior would be expected.

Machine learning models then create baseline behaviors for many of the key performance indicators, or KPIs, modeling these KPIs in the context of all the environmental factors and training machine learning algorithms that can predict: "This is what your last 14 weeks of behavior look like, so this is what we expect next week to look like." It's really just a matter of statistics and having a large enough data set that you can accurately predict, within one or two standard deviations, what you would expect your baseline to continue to be. That ends up building these baseline models, continuously retrained by the ever-improving telemetry from newer access points, clients, and verticals.

With modern network management solution tools in the AIOps case, you look at the data input that's coming in, environmental class, client signal-to-noise ratio, propagation loss, client device information, and that gets fed into your artificial intelligence process, which uses AI and modeling honed over thousands of installations. From that, the system derives optimal, minimum, and maximum transmit power settings for wireless access points. You get maximum Wi-Fi coverage per floor, building, and campus, and an optimized user experience. The system will then automatically respond to changes in the environment in order to maintain these optimizations. The cool thing is how flexible these modern management techniques can actually be, optimizing transmit power levels per band for a wireless LAN.

Developers today have tools that give them the power to share information, interact with third-party applications, and implement these changes, and everything is done automatically. Python is often the go-to language for this scripting and these interactions, and the way to access your devices and these services is through an open-standard REST API and webhook communication pipeline. Those are essentially the fundamentals of building this automated intelligent edge solution, where you're taking the data from the edge of the network and tying it in with these AI processes and AI insights, hosted through something cloud-based, such as Central. The Python language, the REST API, the webhook communication: those are all fundamental to this.
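Circling back to that baseline idea for a second, here is a minimal sketch, with made-up numbers, of what "predict next week from the last 14 weeks, within one or two standard deviations" could look like for a single KPI. A real AIOps backend does this per KPI, per site, with far richer models.

```python
# Sketch only: build a baseline band from 14 weeks of a KPI and flag a new
# week that falls outside it. The numbers are invented for illustration.
import statistics

weekly_client_counts = [412, 398, 405, 430, 422, 418, 401,
                        415, 409, 427, 433, 420, 411, 419]   # last 14 weeks

mean = statistics.mean(weekly_client_counts)
stdev = statistics.stdev(weekly_client_counts)
low, high = mean - 2 * stdev, mean + 2 * stdev   # two standard deviations

this_week = 290
if not (low <= this_week <= high):
    print(f"Abnormal: {this_week} clients, expected roughly {low:.0f} to {high:.0f}")
```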
The bottom line is that all of these tools are used to integrate and share information between the apps themselves, and this enables endless possibilities. You can send interactive messages, automatically open troubleshooting tickets, request firewall actions, control HVAC and lighting systems, interact with IoT systems, and more. That's pretty cool.

Here's an example of what this webhook looks like. If you've done any kind of data programming, I would say this is an HTTP POST message, essentially carrying information in a key-value pair sequence, which looks like JSON, in order to share it between applications. It's a standard method of carrying data; either XML or JSON is typically what's used, and JSON works pretty well with Python because it's very similar to the way you write Python dictionaries. But even without being a programmer, the essential idea here is that one machine wants to talk to another machine, and so they need a common language that they can both understand. What they're communicating are these keys, which on a form would be something like name, address, street, or phone number; that's a key, and then there's the value for a particular instance. Each person that fills out the form, name, phone number, street, address, whatever, would have their values filled in. What you're seeing here is just a timestamp of an alert that's being posted with a description from one app to another. Then you would get a response saying, "Yeah, I got it, that was awesome." If app 1 is changing the behavior maintained by a third-party system or whatever, it can receive a call back via an HTTP POST message.

This is all carried across the network securely using Representational State Transfer, or REST, APIs. An API, again, is just like an interface, like a plug that you can use with scripts, or with any application that supports REST, to take advantage of whatever resources that application, machine, or server is providing as a service. An API, as described in the book here, is a logical interface that defines app-to-app calls and requests. REST is just a common way of sending that information across the network, securely using SSL or TLS. It's a secure, certificate-based way of doing that communication, using HTTPS just like you would when opening up a webpage. And similar to webpages, where you have error codes like 404, page not found, we have something similar in REST APIs as well; or, if everything's good, it's 200 OK. Then there are the different calls that you can make: you can post information, you can delete information, you can get or read information. When you get a Google search result, that's essentially a GET API request being performed by your browser. It's the same thing here, except instead of a browser talking to a server, you have one application talking to another, or one device trying to get information from a machine or device in your network, like a switch, router, access point, or controller. One way to implement APIs is through this Representational State Transfer approach. It's a stateless client-server architecture that allows any laptop or mobile device with a REST client, like a browser, to get information from any device that serves information with REST.
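To make that webhook idea a bit more concrete, here is a hedged example in Python of one app posting a small JSON body of key-value pairs to another app's webhook URL and checking for the 200 OK. The URL and the field names are placeholders, not a real Aruba Central endpoint.

```python
# Sketch only: one app POSTs a JSON alert to another app's webhook URL.
# The URL and fields below are placeholders for illustration.
import requests

payload = {
    "timestamp": "2023-05-02T02:04:00Z",
    "alert": "AP-Floor2-Down",
    "description": "Access point stopped responding",
    "severity": "critical",
}

resp = requests.post("https://receiver.example.com/webhook",
                     json=payload, timeout=10)

if resp.status_code == 200:          # 200 OK, the receiver acknowledged it
    print("Receiver acknowledged the alert")
else:
    print("Delivery failed:", resp.status_code)
```

On the receiving side, the other application parses that JSON and decides what to do with it, such as opening a ticket or kicking off a workflow.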
As long as there's that secure connection over to this Aruba switch, for example: our Aruba CX switches have amazing REST servers with a well-documented API through a Swagger interface. Best of all, it's available and running out of the box for free; there are no license requirements. It basically holds your hand if you want to do your own scripting, and if you don't, it has its own Network Analytics Engine, for which you can literally download scripts for free, run them locally, and then access information from the switch or feed information to it, whether you want to post something, make changes, or just look at stats. We'll talk a little bit about that, I think, in the next set of videos.

But yeah, it's very, very simple to do, just like opening up a webpage. There's a URI request, and this is what it looks like: HTTPS, then the address of the server, in this case the switch, and then what resource you're looking at. In this case, we're asking for the VLAN resource. If we do a GET request, we're saying, "I would like to get a list of your VLANs." If I did a POST request, it would create a VLAN. If I did a DELETE request, I could delete one of those VLANs, or if I wanted to change or add a description, I could do things like that as well. There's this well-known tree structure that is not unique to Aruba; this is all standard stuff. It's not just a script that you would use on Aruba; you could use that same script, the same exact commands, on the majority of switches out there, because these Uniform Resource Identifiers are all uniform and standardized. Instead of a URL, like google.com or something, you have URIs: Uniform Resource Identifiers, identifier instead of locator.

Python, as I mentioned, is probably one of the most well-known languages, and it's easy to use. You don't need to compile it, and it's a high-level programming language. One of the design goals of Python was readability of the code, and so its extensive use of whitespace, I should say tabs and spaces, makes Python code look very clear; you know what I mean when I'm talking about that. It's also an interpreted language. An interpreted language executes instructions directly, so you don't have to take your source code, compile it, and then run the executable; you can just run Python right on the box. One of the other benefits of Python is that it is expandable with custom-made modules, and you can see that being taken advantage of quite a bit. Python builds REST API calls, webhooks, and push messages, and handles the standard JSON content, the standard way of communicating information.

This is an example of an actual Python script, where you're importing a library called requests that someone else already wrote. It's standard, it's easy to pull down, and it means that you can now make REST API calls. You just put in the URL that you want to use, then the payload; in the payload we're going to carry a username and password to log in to whatever this 10.1.1.1 address is, probably a switch, to look at VLANs. Then it's basically just printing the response, a very simple API call they're giving us as an example. Do you need to know Python? I would recommend it. If there's one language you want to learn in the world of networking or data management, or really for about a million different uses, Python's great.
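Here's roughly what a script like the one described above could look like, as a minimal sketch. The address, paths, and credentials are placeholders, and real switches differ in their login flow and URI layout, so check the switch's own Swagger or API documentation before trying this.

```python
# Sketch only: log in to a switch's REST interface and GET the VLAN resource.
# The 10.1.1.1 address, URI paths, and credentials are placeholders.
import requests

switch = "https://10.1.1.1"
session = requests.Session()

# Log in; many REST implementations hand back a session cookie or token here.
session.post(f"{switch}/rest/login",
             data={"username": "admin", "password": "secret"},
             verify=False, timeout=10)   # verify=False skips cert checks on a lab box

# Ask for the list of VLANs
resp = session.get(f"{switch}/rest/vlans", verify=False, timeout=10)

if resp.status_code == 200:       # 200 OK, just like a webpage
    print(resp.json())            # JSON body listing the VLANs
else:
    print("Request failed:", resp.status_code)
```

Swap the GET for a POST or DELETE against the same kind of URI and you're creating or removing VLANs instead of just reading them.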
The ArubaOS-CX operating system, the new Aruba operating system that's been developed over the last few years, includes this network analytics capability to host Python scripts and use open REST APIs to take advantage of things like looking at your system health, monitoring your statistics, tracking state changes and getting insights into those changes for root cause analytics, analyzing traffic, discovering anomalies, looking at applications, cloud or on-premises, and then, ultimately, doing some network optimization. Remember, these are switches that tend to run the core and the distribution, but they're also on the edge. As devices are plugging in, this may be the first device they actually hit, and being able to have this visibility all the way to the edge, with the ability to program those switches and pull information off of them in an automated fashion, is a huge advantage.

Orchestration is a huge piece as well. When we look at network management solutions, and while these are pretty standard tools that have been around for a while, I would highlight Ansible out of this list of network management tools that do orchestration. These are services or servers that you can install; some are free, some are paid, and some are a little easier to use than others. Ansible is probably one of the better-known versions, maintained by Red Hat, and there is a free version of it; it's been around a long time. It's very simple and easy to get into, it provides a very scalable automation tool, and there are modules that Aruba has written to make your use of Ansible even easier.

If you look at scripting, way back to scripting, this gives you the ultimate ability to customize whatever you want to do. You're just limited by your ability to develop scripts. But I would say, even if you're not a programmer, and I'm certainly not, having the ability to read scripts and at least have an understanding of what they can do and what their parameters are is hugely valuable, because there are plenty of people who can script but don't know networking and don't understand cloud. You could work with those types of developers and pretty easily do some very amazing things once the developer has an idea of what's going on. Just some general knowledge of scripting is highly recommended, and starting with Python is probably a good way to go. There's a lot of reusability, it's very fast, and the fact that scripts can iterate as fast as the machine can run them means you can test changes against downstream dependencies to prevent unforeseen failures. This means testing as you go. A common workflow is to test and approve changes with rigor and speed. If you have a particular test that you want to push out, or you're just trying to look at your devices and how they're configured, or you're doing some troubleshooting, any changes you want can be worked into the workflow so that you're testing as you go. This helps with reliability, automation, and compliance; just a huge list of benefits when it comes to scripting. I have yet to meet a network administrator who says, "Thank goodness I never learned how to script; my life would move too fast. Who wants to work that quickly?"

Anyway, that is an overview of network management solutions. When we come back, we're going to take a look at Cloud security, which will wrap up our Part 1 set of videos.