A couple of different topology or application scenarios here. The one I keep alluding to is the baby steps: I'm an NSX shop, I've rolled out vRA, and vRA is plugged into my infrastructure. The first thing vRA is going to do is collect all of that data, even before you've pointed it at NSX Manager. So if you're an NSX shop and a vRA shop, when these products come together the first thing I'm going to do is collect the inventory of all the artifacts that live in, for this example, my vCenter endpoints. Whether those are sourced or backed by NSX or are traditional vSphere networks, they're all going to be collected. So in this scenario, what you can see is I'm just dragging and dropping existing, or what we call external, services right onto my canvas: that could be my security policies, my security groups, as well as all my networks, just drag and drop. There's a networking policy in there if I want it, for IP address management and so on. The resulting application can be as complex as you want it to be, or it can be very basic and in consumption-only mode. Now, of course, you can then take this and drag an on-demand load balancer on top of everything else that's existing if you want. You can have a hybrid of any of these, and I'll show you a couple of those, but just for this basic design with external networks, let me show you what that would look like in our unified canvas.

So let's take a look at what that use case looks like on our design canvas. From my networking and security category, I'm going to drag and drop an existing vSphere network and select the associated network profile. Then I'll select each of the active components on my canvas and bind them to that network now that it's available on my canvas. Next, from my networking and security category, I'm going to drag and drop an existing security group. I'll drag this over the database component and select the existing security group; in this case I have a make-believe database security group. I'll create that binding and hit Save. All right, so that's the beginning. We have this consumption-only mode, and of course you don't have to choose one mode and move forward with that; you can have any combination of these. But now let's take it up a notch.

So now I'm going to do an application deployment with on-demand networking and security. In this case I can either use on-demand networking exclusively or use existing networking along with on-demand, and here I'm going to use on-demand routed networks. What that means is I'm going to drag and drop that artifact onto my canvas and wire up the individual tiers. In this case I have a web tier and a database tier; those could again be existing networks or on-demand routed networks. I'm going to wire the individual VMs or machine nodes onto those networks, and I could modify the policy, if I had access to do that, if I wanted to. Now I'm going to wrap each of the tiers with a security policy. Again, this can be an existing security group or an on-demand one. The point here is you can mix and match, but since we're doing on-demand networking and security, I've got on-demand routed networks and on-demand security groups. They're bound to the application, and along with those policies I could even bring in an on-demand load balancer.
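To make that mix-and-match design concrete, here is a minimal, illustrative Python sketch of the blueprint as it sits on the canvas. This is not the actual vRA blueprint format or API, and every class and name here is hypothetical: one tier rides an on-demand routed network with an on-demand security group, while the other is bound to artifacts that already exist in the collected inventory.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical model of the unified-canvas design: each network or security
# group is either "existing" (collected from the vCenter/NSX inventory) or
# "on-demand" (created automatically at request time).

@dataclass
class Network:
    name: str
    kind: str                # "existing" or "on-demand-routed"
    network_profile: str     # supplies IP ranges, gateway, DNS, etc.

@dataclass
class SecurityGroup:
    name: str
    kind: str                # "existing" or "on-demand"

@dataclass
class MachineComponent:
    name: str
    network: Network
    security_groups: List[SecurityGroup] = field(default_factory=list)

# Web tier: fully on-demand networking and security.
web_net = Network("web-wire", "on-demand-routed", "routed-profile")
web_sg = SecurityGroup("web-sg", "on-demand")
web = MachineComponent("web-tier", web_net, [web_sg])

# Database tier: consumption-only, bound to artifacts that already exist.
db_net = Network("prod-vlan-10", "existing", "external-profile")
db_sg = SecurityGroup("make-believe-database-sg", "existing")
db = MachineComponent("db-tier", db_net, [db_sg])

for c in (web, db):
    print(c.name, "->", c.network.kind, c.network.name,
          [f"{sg.kind}:{sg.name}" for sg in c.security_groups])
```

In this framing, nothing about the existing artifacts is touched at request time; only the on-demand pieces get created per deployment, which is exactly what the next step automates.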
With those policies in place, what I'm doing at request time, whether I'm requesting it once or a dozen times, is automating the infrastructure components to make this topology real. By using on-demand routed networks, what you get is a brand new logical wire, or logical network, for each tier that was represented on the canvas, and each one is automatically wired to an upstream logical router. We create a NIC, or an interface, on the logical router, we address it, and then we allow the traffic to flow through that edge device. It's pretty great, and you're not touching any of that infrastructure every time a machine is requested. Then we're doing the same thing for security. If it's on-demand security, that's being provisioned on demand. If an existing security policy is in use, I'm just binding those security policies and automatically adding all of the application components to the individual security groups to bring it all together. Again, it's basically exactly what you would do manually, but with end-to-end automation. So at this point we've gone beyond just the huge impact that NSX has on the environment itself, and now we're consuming and automating it.

Now, with security groups and security policies there's a very important point to make, because this comes up a lot, so let me build this out. I've got multiple tiers, and any tier or any one component can have its own security group. The security group is a membership mechanism: whatever is bound to that security group now lives in it, and the policy then applies to everything inside. So if I have a VM, or in this case my web tier, on a security group, that security group is protecting the entire web tier. That security group will protect the web tier from any other tier, even if it's in the same deployment, and it's all bound to that network. So I've got my web tier, my app tier, and my database tier here, and I've got security groups; attached to each security group is my security policy. One thing vRA is deliberately designed not to do is give the app architect or designer security policy privileges. Those policies belong to your InfoSec folks, your security architects and admins. They are built in the background and consumed in vRA. When I do an on-demand security group, one of the things I have to do on my canvas is say, okay, here are the available policies as defined by your security team, which ones do you want to use? Then I'll create a net new security group and make all the necessary membership modifications. Cool, let me give you an idea of what that looks like.

Now I'm going to go down to the on-demand NAT network under the networking and security category. I'm going to drag and drop a NAT network onto my canvas and select the appropriate NAT policy. I'm also going to drag and drop an on-demand security group; this time I'm going to select two existing security policies to add to that security group. All right, and while I'm at it, I'm going to go ahead and increase the maximum number of instances for my web tier for the next use case. Jumping over to the security tab, I will bind the on-demand security group to this component, and then in my networking tab I'm going to change the network binding to the new NAT network that I dragged and dropped onto the canvas.
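Before moving to the third scenario, it may help to make that request-time automation concrete. This is a rough, purely illustrative Python sketch, where the subnet carving, router name, and every function are assumptions rather than the NSX or vRA API, of what the on-demand routed case builds per deployment: one new logical wire per tier, an interface on the upstream logical router for each, and a per-tier security group whose membership is populated automatically while the policies themselves stay InfoSec-owned.

```python
import ipaddress

def provision_routed_deployment(deployment_id, tiers,
                                base_cidr="192.168.0.0/16", tier_prefix=28):
    """Illustrative only: carve one subnet per tier from the routed profile's
    range, attach each new logical wire to the upstream logical router, and
    build an on-demand security group whose members are that tier's machines."""
    subnets = ipaddress.ip_network(base_cidr).subnets(new_prefix=tier_prefix)
    plan = {"logical_networks": [], "router_interfaces": [], "security_groups": []}

    for tier, subnet in zip(tiers, subnets):
        wire = f"{deployment_id}-{tier}-wire"        # brand new logical wire per tier
        gateway = str(next(subnet.hosts()))          # first host becomes the router interface
        plan["logical_networks"].append({"name": wire, "cidr": str(subnet)})
        plan["router_interfaces"].append({"router": "upstream-dlr",
                                          "network": wire, "ip": gateway})
        plan["security_groups"].append({
            "name": f"{deployment_id}-{tier}-sg",
            "members": [f"{deployment_id}-{tier}-vm"],   # added automatically at request time
            "policies": ["policy-defined-by-infosec"],   # chosen from the security team's list
        })
    return plan

# Requesting the blueprint a second time would simply produce a second,
# independent set of wires, interfaces, and groups.
print(provision_routed_deployment("dep-001", ["web", "db"]))
```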
So now the third and final scenario I'm going to share here, and of course we could do this all day, is application deployment with app isolation. Now, this is huge. It is so easy, and it is so significant. In fact, when I talk to customers about the value of this, it's hard to avoid saying, hey, just don't be that next company on the news. It's been happening way too often, and believe it or not, it can come down to nothing more than how we deliver this technology: a tick box in this application design could make the difference between showing up on the news or not. Let me explain.

App isolation is not micro-segmentation in its entirety, which you've probably heard a lot about, but it's certainly a critical component of it. At the topology stage, as I'm building an application, take the example I used before these scenarios. If I'm deploying multiple applications, and let's say I build a multi-tier application like this one, I've got my database and my web tier. In most enterprises, if something is called a database, it goes to VLAN 1 or VLAN 10. If it's called a web app, it goes to VLAN 20. If it's a DMZ web server, it goes to VLAN whatever. Okay, fine. We've got our broadcast domain, we've got some layer of security across VLANs, probably very inefficient because it's physical, unless NSX is at play. But now let's take that same exact topology. We entitle it and publish it in our catalog, and we go and provision that machine. It lands on an existing L2 network, an existing VLAN, and I'm good. But this is a test use case, maybe. Now I want to provision another 10 or another 50 of those, and I'm making different code changes and modifications; maybe I'm even incorporating release automation and a pipeline, but that's a separate talk. Ultimately, what has happened is that I've got a security policy wrapping my web tier and I've provisioned 50 of these things, and there's nothing that says, "Hey, this other deployment over here has the same exact policy, but that policy isn't protecting it from this one": the web tier in deployment 1 is not inherently protected from the web tier in deployment 2. That is a huge infrastructure mistake, or miscalculation, that traditional networks make. So we deploy IDSs and IPSs, and we can spend horrible amounts of money on east-west traffic controls, but ultimately it shouldn't be that way, because I might not even keep all of those tiers. The point here is that I want to deploy as many applications, or as many copies of a particular application, as I want, and I want them to immediately inherit east-west protection.

So with app isolation, what's happening in the background, totally independent of the requester and even independent of the app architect who designed this thing, other than checking a tick box, is that I am automatically provisioning an on-demand security group and creating a security policy that blocks all traffic east-west across deployments. Except, if I specifically said, "Hey, in this deployment, I want it to be able to speak on this port east-west," sure. But there is a default posture there, and that is the critical thing: no opportunity for someone to get in there and just ruin your day. These are the value drivers. That tick box is just priceless.
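The default posture that the app-isolation tick box buys you can be modeled in a few lines. This is a hypothetical sketch in Python, not NSX's actual rule model: east-west traffic across deployments is dropped by default, and only flows explicitly allowed within a deployment get through.

```python
def is_allowed(src, dst, port, explicit_allows):
    """Illustrative app-isolation check.
    src/dst are dicts like {"deployment": "dep-1", "tier": "web"};
    explicit_allows is a set of (src_tier, dst_tier, port) rules that the
    designer requested for traffic inside a single deployment."""
    # Default posture: anything crossing a deployment boundary east-west is
    # dropped, even when both workloads carry the exact same security policy.
    if src["deployment"] != dst["deployment"]:
        return False
    # Inside a deployment, only explicitly requested flows are permitted.
    return (src["tier"], dst["tier"], port) in explicit_allows

allows = {("web", "db", 3306)}  # e.g. the web tier may reach its database on 3306

print(is_allowed({"deployment": "dep-1", "tier": "web"},
                 {"deployment": "dep-1", "tier": "db"}, 3306, allows))   # True
print(is_allowed({"deployment": "dep-1", "tier": "web"},
                 {"deployment": "dep-2", "tier": "web"}, 3306, allows))  # False: cross-deployment
```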
Okay. Now finally, one more mention, because this is significant: on-demand load balancing. On-demand load balancing actually deploys an Edge Services Gateway automatically, per application or per load-balanced tier, and automatically provisions and configures it based on the policy required by the app. In this latest version of vRA, we have granular controls that get you very close to feature parity with what you can do in NSX itself, but here it's the app architect who's doing it. Of course, any of these policies can be modified as a day 2 operation. So let me show you what that looks like.

Back in my unified designer, I'm going to drag and drop the on-demand load balancer from my networking and security category. Next, we're going to select some attributes here: the member, which is my web tier, and the member network that I want to use, which is now the NAT network. I'll also use the NAT network for the VIP. Under virtual servers, I'm going to select New, and here you can see some of the new options that are available in 7.3. It gives me a lot of granularity. If I want to customize the virtual server, I can select the algorithm and persistence for the distribution, the timeouts, tons of health check options, and additional advanced options. For right now, I'm just going to use the default settings and create one for port 80 and one for port 443. Now that I've bound that on-demand load balancer to my NAT network, let's go back to the NAT rules, and I'm going to create a couple of NAT policies allowing HTTP and HTTPS into that web tier, but this time via the on-demand load balancer. So that was port 80, and I'll do another example, just a sample, on 443 for SSL, inbound, and hit OK. There we have it: the addition of the on-demand load balancer onto the canvas.

Now, one more network architecture, and this is probably one of the more popular ones. Of course, routed is popular, everyone's comfortable with it, load balancing, and so on, but on-demand NATted networks can be so significant, and they're especially useful for large lab-based provisioning events or, again, test scenarios. This is the ability to actually deploy an entire NAT policy across any number of tiers and any number of applications as part of my application topology, and they are on demand. So what happens here is I'm creating a NAT policy by dragging and dropping an on-demand NATted network onto my canvas and wiring it all up, and I can have any combination of networks and load balancers and such. Then I have the capability to modify the policies in a lot of detail, and that's another thing that's new in 7.3. Here we can actually define the NAT policies down to which port you want to forward, the source and destination, which tier, and so on and so forth, and all of this comes to life when I provision. Now, post-provisioning, say I need to modify that. What does that look like today, in a physical world? It's not pretty. But if I want to make a modification, rather than do what I did in slide 2, which is open a help desk ticket and go through all of those components, I can have a governed, entitled day 2 action that allows me to go in there and modify my NAT policy or, as I mentioned, modify my security policy. We want to have these granular controls, but only when the user, the consumer, the architect, whoever it is, is specifically allowed to have them. This isn't opening it up for everybody; this is, yes, you're building this application, you belong to this particular group, and I trust that you are able to define your own NAT policy.
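To pull the load balancer and NAT pieces together, here is one final illustrative Python sketch, with field names that are assumptions rather than the NSX Edge or vRA API, of the kind of per-application configuration that gets generated at request time, and that a governed day 2 action would later modify: two virtual servers on the VIP, and inbound NAT rules forwarding HTTP and HTTPS to that VIP rather than to the individual web machines.

```python
def build_edge_config(vip, web_members, outside_ip):
    """Illustrative Edge config: a per-application load balancer plus inbound
    NAT, all generated at request time from the blueprint's policy."""
    virtual_servers = [
        # Defaults comparable to the demo: round-robin, no persistence, basic health check.
        {"name": "vs-http",  "vip": vip, "port": 80,  "algorithm": "round-robin",
         "pool": web_members, "health_check": "TCP"},
        {"name": "vs-https", "vip": vip, "port": 443, "algorithm": "round-robin",
         "pool": web_members, "health_check": "TCP"},
    ]
    nat_rules = [
        # Inbound (DNAT) rules: expose the load balancer, not the individual web VMs.
        {"action": "dnat", "original_ip": outside_ip, "original_port": 80,
         "translated_ip": vip, "translated_port": 80},
        {"action": "dnat", "original_ip": outside_ip, "original_port": 443,
         "translated_ip": vip, "translated_port": 443},
    ]
    return {"virtual_servers": virtual_servers, "nat_rules": nat_rules}

config = build_edge_config(vip="10.10.10.5",
                           web_members=["web-01", "web-02"],
                           outside_ip="203.0.113.10")
print(config["nat_rules"][0])
```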