[SOUND] Okay, great. So today we're going to talk with Larry Peterson, who is a professor emeritus at Princeton University. He's also the chief architect at the Open Networking Lab, and he's on the faculty at the University of Arizona now. So we'll talk about NFV, its relationship to SDN and OpenCloud, and all things related to that. So, thanks Larry. >> Before that. >> So yeah, I guess now, in particular in your position with the Open Networking Lab, you're doing a lot with respect to OpenCloud, and I think it'll be kind of fun to talk about how you got there. But maybe you should start a little bit by talking first about NFV. I think when I started teaching the course, NFV barely existed, basically not even enough to talk about it. But now it's really taking off. So can you talk a little bit about NFV, what are the exciting things going on there, and how does it really relate to SDN? >> Sure, sure. So I think NFV, network function virtualization, is the telco carrier operator world trying to figure out how to leverage cloud technology. They buy purpose-built hardware appliances to do all kinds of functions, and they're looking to move that into virtual machines and take advantage of scaling those virtual machines, but out in the middle of their operator networks, not necessarily back in the data center. So one way to look at it is: SDN is about making the control plane programmable; NFV is about making the data plane programmable. And that's all related to the cloud, because the cloud is about making whatever the function is, data plane or control plane, scalable. And so I really see NFV as one of three legs: cloud, SDN, and NFV. There's a certain starting set of functions people talk about. It's exactly what they're buying appliances for at the edge right now.
But I think if you kind of generalize it, it's hard to draw a line between what is a virtualized network function, what's a proxy, and what's really a web service of a more general kind. So I started hearing about it a couple of years ago at first. Just to trace a little bit of the history: I've been involved with PlanetLab now for about a decade, and that was really about creating new functions out in the network, at the edge of the network, in virtual machines. Essentially it was like a cloud, except it wasn't locked into the data center; it was out at the edges. And as a consequence of that, we did some interesting functions, the research community did. But I was involved, sort of in parallel with PlanetLab, working on caching technology, I and others at Princeton, and that led to a CDN called CoBlitz. We then found that there was commercial interest for it. In particular, carriers, network operators, wanted to install their own caches, because they had their own content to deliver, and they were delivering content on behalf of external aggregators like Akamai, so they had reason to install their own caches. And this is, in my view, the beginning of NFV, in the sense that it was a point where technology that grew up in, let's call it the IT space, as distinct from the telco space, was finding a reason for itself at the edge of the telco space. And so there was a culture clash going on, where at the edge of the network people tend to think in terms of devices, and here was suddenly a function or a service implemented in software, run in virtual machines in fact. And so there's an interesting back story just about how distributed systems people look at service behavior and networking, telco people look at device behavior.
But in any case, that was the first one of these kinds of functions that I'm aware of that was actually running on a virtual machine, being deployed at the edge of the network, at the edge of the telco network. And that sort of opens the door: well, what else could we do here? So, about two years ago now, and this is now literally the origin of the term NFV, there was a proof of concept pulled together by some people at BT. And first off, they were interested, I think, primarily in reducing costs, but also in time-to-market. And so again, this is: how do you take cloud technology, which has been working for the Googles of the world, can we make it work for us, and move it out to the edge of the network? There was a proof of concept pulled together that involved them; Intel, who was working from its own interest in, generally, good uses of Intel processors everywhere; HP as an integrator; and the company that ultimately came out of this CoBlitz exercise, which was called Verivue at the time. It's since become part of [UNKNOWN]. That was an example of the first network function. We ran it side by side with [UNKNOWN], which handles subscriber management for the BT network. We ran blade servers and virtual machines, and these two functions were co-located. BT was interested in this being much more than a BT-only endeavor, and so did a lot of good work to bring it into the community and to create what's now a working group exploring network function virtualization. But the summary of the story is: it was a cloud-ish, scalable technology that was brought out to the edge of the network. And that, I think, is what the telcos are looking for, to move away from hardware and into virtualized appliances. >> Are there specific vendors of specific, you know, pieces of hardware that, like.
I mean, obviously there are pieces of hardware that can be displaced by this technology, but is this a space where there are completely new people moving in now and serving the telcos' needs? >> Yeah, I think that's the hope, that there becomes, you know, an app store for telco services. That's not yet happened. It's a slightly different place than thinking about apps in the cloud, because when you deploy an app in the cloud, you deploy your app and you manage it, and I deploy my app and I manage it, or operate it. In the telco space, the telco is the operator. They operate the services, so they want to pick up third-party services, deploy them, and then operate them. And of course they don't want stovepiped operations, because that's not going to scale very well. So they're looking for a way to integrate, and so integration still becomes a pretty important thing here. And so the obvious players are building service orchestration mechanisms. That will become, you know, the Android or iOS equivalent at some point. But what's happening is that the current hardware vendors, and I'm not necessarily talking about the Ciscos and Junipers, but the people that build firewalls and do packet inspection and more narrowly defined functions, they're all under pressure, I think, to redo the licensing so that you can license the software from them. And they're all making that move right now, I think. >> So it seems to me, I guess I'd always sort of thought about NFV in terms of middlebox orchestration more generally. I'd never really realized that it's actually, at least as you're describing it, a pretty telco-centric kind of thing. >> Yeah. >> Do you think it's broader than that, or do you think that's just where the origins lie? >> Well, yeah.
I think that's the origin, and I don't think that's where the telcos will stop; I think that's where they're starting. And to me, even a narrow definition of middlebox and these network functions at the edge are still pretty similar. One's a little bit more enterprise-oriented, and maybe one's more consumer, residential-oriented, but they're still of the same sort, as I see it. I think where you start to do something importantly different is when you start blurring the line between these things at the edge of the network and what's back in the data center. And now you start to intermix all of the services you might find at Amazon or Google, and there's no clear demarcation point between what is the over-the-top service offered from the data center that goes over the carrier's network versus what is more deeply integrated and embedded in the carrier's network. And, you know, it's clearly yet to be seen whether the telcos are going to compete with the cloud providers, but I think, because of their footprint, they have an opportunity to do something of interest there. >> Yeah, that seems like a really interesting tug of war. I guess, having focused mainly on networking myself, I sort of pay attention to the current tug of war over performance and provisioning between the cloud providers and the carriers. But I hadn't realized there was this tug of war over who should basically own or control different services in the network. I can see one particular example being video transcoding. Is that a real example? >> I think video is, yeah. Video's clearly one of the drivers. I mean, you could go back to the caching story.
It was video that led to the need for caching, which led to putting [COUGH] these virtual machines at the edge of the network in the first place. So that is certainly one of the drivers. And, you know, obviously I can't speak for the carriers, but moving into the technical realm a little bit, and where the market's going: I think where they're starting is actually a fairly narrow place. Which is: I used to have a hardware appliance, and it's now a VM. What do I do with a VM? Do I manage it the same way I manage my hardware appliances, which I connect as bumps in the wire and string together? That, I think, is a big mistake. If you want to really take advantage of the cloud, then you need to think about scalability, and you don't scale on the wire quite the way they're thinking about it. It shouldn't be: we're virtualizing a function, and now I've got one of them that I string together linearly. Each one of those functions is in fact itself a scalable service, and so now it becomes about service composition. Each service independently scales from one to a thousand virtual machines; some of those virtual machines are located at the edge of the network and some are back in the data center, depending on exactly how they're participating in the function. And these services interact with each other, and that interaction is itself much more complicated than a linear stream of functions, service chaining I guess it's called. >> Right, right, yeah. I keep hearing this term too, service chaining, right. That's another thing that's sort of come up recently, I guess in the last year or two. >> And I think service chaining is the metaphor that you use when you are still truly embedded in the hardware world, and you haven't yet fully come to grips with the fact that these functions are running in the cloud.
>> [CROSSTALK] The right way to think about it is not like a linear chaining of services, but thinking about different ways to compose. [CROSSTALK] >> Right. So, service composition, I think, is where this takes you. And there will be some cases where it's fairly linear and sequential, but you move up the stack an inch and the interactions become much more complicated. It's not just packet in, packet out. I'm doing DNS requests and getting responses back. I'm doing HTTP requests, some of which are cached, some of which are forwarded on. I go to a database to look up some metadata. I mean, there's a lot of interaction between these various, and I'm going to call them services instead of functions, and by that I mean I'm implying that the function is scaled across some number of virtual machines. And once you do that, and this brings it back to SDN, it's not just about the virtual machines that are running the function. It's about how they're connected together in the network. And once you move away from a simple service chain model, where implicitly it's linear, then there are virtual networks involved. And where do those virtual networks come from? They came from the SDN controller. So I think these things are all tied together. The model that I believe is the right one, at least as of now, is: I start with, well, I'm going to call it a slice, because that's the language we used on PlanetLab. But a slice on PlanetLab was just a set of virtual machines, and, oh, coincidentally, they were connected by the Internet. Now, a slice is a set of virtual machines and a set of one or more virtual networks. And by virtual network I don't mean VPN; I mean the kind of virtual network [UNKNOWN]. And why do I need more than one virtual network? Well, that is actually a really critical part of what the operators [UNKNOWN].
If you go into the cloud, it might be sufficient to have a private network that my VMs talk to each other on, and a public interface that I talk to my clients, the subscribers of the service, on. Well, when you embed this function or this service in a telco, then there is a really strong desire for adhering to the principle of least privilege. And so they are very, very careful about: this service can talk to that service, but it can't talk to this other one. And this particular service doesn't have a public-facing interface. And so now the virtual networks become a really critical tool in isolating functions, as is the [UNKNOWN] cloud. But you also need to allow these virtual networks to compose with each other in some way, if service A is going to be [UNKNOWN] to service B. And that's really, I think, the new requirement on SDN that comes out of this. >> And that composition is going to look different than extranets or whatever we had in the past, where there were sort of VPNs with limited connectivity between each other. Do you think it's just a totally different set of functions that we're going to need? >> Well, I think the jury's still out a little bit on that. But first of all, they need to be composed, because by definition virtual networks are isolated. >> Mm-hm. >> Now, how do I have virtual machines on my virtual network talk to machines on your virtual network? Well, there are different approaches to this. You could declare there to be virtual networks that multiple services share, sort of a common virtual network. Another possibility that I think is kind of intriguing is, if you're implementing your virtual networks on top of a network hypervisor, and I'm using that term as one of the layers in the SDN stack, then that hypervisor is aware of the existence of multiple virtual networks.
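The slice and least-privilege ideas described above can be sketched in a few lines of code. This is purely an illustration of the model, not OpenCloud's or PlanetLab's actual API; every class and name here is invented. The point is that two services can only communicate if they deliberately share a virtual network.

```python
# Sketch: a "slice" as described -- a set of VMs plus one or more virtual
# networks -- with composition opted into via a shared private network.

class VirtualNetwork:
    def __init__(self, name, public=False):
        self.name = name
        self.public = public     # does it have a public-facing interface?
        self.members = set()     # VMs attached to this network

class Slice:
    def __init__(self, name):
        self.name = name
        self.vms = set()

    def add_vm(self, vm, *nets):
        self.vms.add(vm)
        for net in nets:
            net.members.add(vm)

# Least privilege: two VMs can talk only if some virtual network has both.
def can_communicate(vm_a, vm_b, networks):
    return any(vm_a in net.members and vm_b in net.members
               for net in networks)

# Example: a cache service and a subscriber-management service share one
# private network; the cache also has a public-facing network, the
# management service keeps its database on an internal-only network.
public   = VirtualNetwork("cache-public", public=True)
shared   = VirtualNetwork("cache-mgmt")
internal = VirtualNetwork("mgmt-internal")

cache = Slice("cdn-cache")
mgmt  = Slice("subscriber-mgmt")
cache.add_vm("cache-vm1", public, shared)
mgmt.add_vm("mgmt-vm1", shared, internal)
mgmt.add_vm("db-vm1", internal)

nets = [public, shared, internal]
print(can_communicate("cache-vm1", "mgmt-vm1", nets))  # True: shared net
print(can_communicate("cache-vm1", "db-vm1", nets))    # False: isolated
```

The database VM is unreachable from the cache even though both services compose, which is exactly the "this service can talk to that one, but not this other one" property.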
You sort of take a metaphor from virtual machine hypervisors: we sometimes talk about domain zero as the super-user, extra-privilege domain that sits adjacent to all the other virtual machines. Well, there's a domain-zero equivalent in the network hypervisor world, and one could use it to teleport from one virtual network to another. And then the interesting thing is, that's a real performance problem in the server world, because you get the trampoline effect: I jump up into domain zero and then I jump back down, and I've got all these kernel processes going on. But if your network hypervisor is implemented on top of the physical network, and implements the virtual networks using OpenFlow, then it's a simple matter of getting the OpenFlow rules right so that data can move from one virtual network to another. So now I've just gone down a bit of a rat hole there in terms of an implementation approach, but the takeaway is: virtual networks are [INAUDIBLE] extremely powerful. I think they are going to be the cornerstone of composition between these services. If we understand them to give us isolation, I think, with a little work, we can also use them as a very controlled way of interconnecting services and the slices they run in. And that, I think, is going to be a critical differentiator in how network functions are deployed throughout networks, as opposed to what we're currently doing with virtual machines in the data center. >> Mm-hm. Maybe I'm just ignorant of all the ways virtual machines are being configured in the data center, but I think there's a fundamentally different thing going on there because of composition. It seems like there's maybe space for developing interesting ways to control what kinds of information can move between one virtual network and another. We don't really have those kinds of primitives, exactly. >> That's right, that's right, yeah.
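The "getting the OpenFlow rules right" point above can be made concrete with a small sketch. Assume, hypothetically, that the network hypervisor realizes each virtual network as a VLAN tag on the physical switches; then connecting two virtual networks for one permitted flow is a single rule that matches one tag and rewrites it to the other, with no trampoline through a privileged domain on the data path. The rule format below is an invented dictionary, not OpenVirteX's or any controller's real API.

```python
# Sketch: cross-virtual-network forwarding as a tag-rewrite flow rule.
# Assumption (invented for illustration): each virtual network maps to a
# VLAN ID, and the hypervisor installs rules like this into the switches.

def gateway_rule(src_vlan, dst_vlan, dst_ip):
    """Rule permitting traffic from one virtual network into another,
    for a single destination service VM only (least privilege)."""
    return {
        "match":   {"vlan_vid": src_vlan, "ipv4_dst": dst_ip},
        "actions": [{"set_field": {"vlan_vid": dst_vlan}},
                    {"output": "NORMAL"}],
        "priority": 100,
    }

# Virtual network A (VLAN 10) may reach one service VM in network B (VLAN 20).
rule = gateway_rule(10, 20, "10.0.20.5")
print(rule["match"])    # {'vlan_vid': 10, 'ipv4_dst': '10.0.20.5'}
```

Because the rewrite happens in the switch's flow table, the "teleport" costs nothing extra per packet, which is the contrast with the server-world domain-zero trampoline.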
I mean, that seems like a really interesting security problem or something. >> Right. And there are some really interesting research problems that come out of this. The way that I'm thinking about it is, I like the idea that services are first-class objects in my system. You mentioned OpenCloud, and that's kind of the cornerstone principle there: everything's a service is the key idea. And I want to be able to control my service, and the way my service interacts with your service. And certainly they're both being mapped onto the same underlying hardware. Our virtual machines may or may not be co-located; our virtual networks may or may not go through the same physical switches. But how you get that right is an optimization. There's an opportunity to do a nice job of specification, and then it's the job of agents, like this network hypervisor, to map onto the physical machines and physical networks. And that's where all the hard optimization problems will come in. I think the mistake a lot of people are making is, once you move away from drawing simple, sequential, bump-in-the-wire service chaining, the next thing you do is start drawing some completely impossible-to-parse diagrams with all of the flows of packets between virtual machines deployed in your network. And there's no way any human could get that right. I think you start with some simple abstractions: services run in slices; slices are a set of virtual machines connected by virtual networks. There's a problem of virtual machine placement, and it's completely service-specific. Is it at the edge, is it at the data center, is it both? And virtual network topology: is it a big switch, is it a hierarchy, which might be appropriate for something like a CDN.
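The service-specific placement question above can be illustrated with a toy optimization. All the sites, latencies, costs, and weights below are invented numbers purely for illustration; real placement involves many more constraints. The point is just that a latency-sensitive service (like a CDN cache) and a batch-like back end solve the same minimization with different weights and land in different places.

```python
# Sketch: VM placement as a per-service cost minimization over candidate
# sites (edge clusters vs. the data center). Numbers are made up.

SITES = {
    "edge-west":  {"latency_ms": 10, "hosting_cost": 5},
    "edge-east":  {"latency_ms": 12, "hosting_cost": 5},
    "datacenter": {"latency_ms": 40, "hosting_cost": 1},
}

def place(latency_weight):
    """Pick the cheapest site for one VM of a service, where the service
    expresses how much it cares about latency to its users."""
    def cost(site):
        s = SITES[site]
        return latency_weight * s["latency_ms"] + s["hosting_cost"]
    return min(SITES, key=cost)

print(place(latency_weight=1.0))   # 'edge-west'  (CDN-like service)
print(place(latency_weight=0.0))   # 'datacenter' (back-end service)
```

A real system would solve this jointly across all VMs of all services, with capacity and topology constraints; that joint problem is the "hard optimization" referred to above.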
And then, as I start interacting with other services, I may have to accommodate, you know, that may pose a performance behavior that I need to account for somehow and adjust. And there's the opportunity for some global optimization, probably, too. >> Yeah. I mean, I've thought about it a little bit myself, and I've had a pretty tough time getting my head around some of the optimization problems. You brought up several things. One is sort of security-related: who should be able to talk to whom, in what scenarios, and for what flows. But another thing you mentioned relates to traffic load, right? You could take a simple load balancer as an example. Say, okay, I have these devices close to the middle of the network, and I have some lightweight ones close to the edge, and there's a load question. Another one that I've heard come up is firewall placement, or other kinds of box placement, where, okay, if you move it close to the core you've got to put in more rules, and if you move it closer to the edge you can do other things. >> Okay. >> And it has implications for maintenance and [CROSSTALK]. Go ahead [CROSSTALK]. >> You've actually raised two interesting examples that throw a really interesting angle on this problem: load balancing and firewalls. It turns out that, in different ways, you can implement those in OpenFlow-controlled switches. So suddenly there's another element here, which is: you've got the virtual networks, you've got the virtual machines, and you've got the controllers of the virtual networks.
And so I may construct a virtual network that does my load balancing for me by running an SDN application on that network, or a firewall is sort of the [UNKNOWN] example of: I program the network to [UNKNOWN] a device. >> We were talking about the sort of optimization questions that come up in terms of placement, and I mentioned a load balancer example and a firewall example as two cases where it's not entirely clear where things are supposed to go. >> [UNKNOWN] my response was to the network. [LAUGH] Those two examples are interesting because one can argue that it is the controller of the virtual network that implements the [UNKNOWN] by installing all the right rules into the flow tables. So that is, I think, an interesting dimension to the problem. Just going back and restating some things: a service runs in a resource container that consists of a set of virtual machines where the code runs, and you have to worry about where those machines are placed, at the edge or in the data center, connected by a set of networks. And again, there are private virtual networks, there will be public-facing virtual networks, but there will also be private virtual networks that are shared with other services, because you need to compose with them. But each of those virtual networks is itself, ideally, a controllable, programmable network, completely in the sense of SDN, so there could be a controller; that's another place where there's code to write. And that then leads to the interesting question of what belongs in the virtual machines, and what belongs in the controller of the virtual networks those virtual machines connect to. Yeah, it's quite an interesting space to be thinking about.
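The idea that the virtual network's controller can itself implement the load balancer, by installing the right rules into the flow tables, can be sketched as follows. The replica addresses, virtual IP, and rule format are all invented for illustration; this is not any real controller's API, just the shape of the logic.

```python
# Sketch: a load balancer as an SDN application. For each new client flow,
# the controller deterministically picks a replica VM of the service and
# emits the flow rule it would push into the switch, rewriting the
# service's virtual IP to the chosen replica's address.

import hashlib

REPLICAS = ["10.0.0.11", "10.0.0.12", "10.0.0.13"]
VIP = "10.0.0.100"   # virtual IP that clients connect to

def balance(client_ip):
    """Hash the client onto a replica, then build the steering rule."""
    digest = hashlib.sha256(client_ip.encode()).hexdigest()
    replica = REPLICAS[int(digest, 16) % len(REPLICAS)]
    return {
        "match":   {"ipv4_src": client_ip, "ipv4_dst": VIP},
        "actions": [{"set_field": {"ipv4_dst": replica}},
                    {"output": "NORMAL"}],
    }

rule = balance("192.168.1.7")
# The same client always maps to the same replica (flow affinity):
assert balance("192.168.1.7") == rule
```

Once the rule is installed, the data plane does the balancing at line rate; the controller is only involved per flow, not per packet, which is what makes "load balancer as a controller app" plausible.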
>> So, I guess we touched a little bit on the optimization in terms of balancing load, and another sort of thing that seems to come up is isolation and access control. Do you think there are some other big questions that we need to be thinking about? Aside from the isolation questions, you know, resource isolation, load balancing, positioning, some other big ones? >> Well, one of my favorite catchwords right now is operationalization. And this, again, is what the cloud brings us; it's a problem we have to deal with. We are so used to thinking about: well, I've got the software, now I'm done, I install it and I run it. But of course, running software in the cloud is an entirely different operation. It runs 24/7 and it scales. It's not just a passive thing; it's an active thing that is always running. And there are a lot of things that go into operationalizing the service, even if that service was built scalable. Let's say the software was scalable. Once I deploy it, I have to monitor it, and understand when it's necessary to re-size it, or scale it up or down. There is generally some kind of load-balancing slash request-routing that goes on with that; its performance is not going to scale if I don't keep the load spread out over it. So the load balancer we were talking about a minute ago: it could in fact be a function that runs in the controller of an [INAUDIBLE] network, but load balancing might also be a standalone service, because it's now directing requests not only to the right server, in a load-balancing way, but to the right site where that server might be [INAUDIBLE], because virtual machines are potentially everywhere. So I actually think request routing, I call it request routing.
You can call it load balancing, but I think it's one of those foundational services that other services are going to depend on. And I think there's a modest set of them that collectively help you operationalize your service, so that you don't have to start from scratch. I think that's the interesting challenge: can we build out that core set in a way that makes the next service that comes along easier to operationalize? That's kind of the idea behind platform as a service: we'll give you some tools, and you just give us a little bit of code, and we will auto-scale and all of those things. But I think those could go even further, if we really believe that everything is in fact a service that becomes your fundamental building block. >> When you talk about optimization as well, another question I'm having when thinking about these problems is: what's the cost of placing a virtual machine? In some limited discussions I've had about this, people say, yeah, it's pretty much free; we can place virtual machines anywhere in our network. Is that really the case? When we think about placing a function in the network, is it really as easy as just spinning up a virtual machine, or is there more to think about there? >> Well, we do a CDN demo where we click a button and virtual machines spin up, but. >> All right. >> I think there actually is a cost to it. I think we're going to have to maybe refactor some things. It could very well be the case that virtual machines are virtually free to have, and then it becomes a question of merely assigning resources to them. That was kind of the model that we had from [UNKNOWN] that I still like, but it's actually [UNKNOWN] a bit different from the way a lot of cloud systems are built today.
I'm most familiar with OpenStack, where you spin up a KVM virtual machine and you set how many resources it has, and that's how many it has. You don't suddenly change that without tearing it down and standing up another one. In my view, a virtual machine's existence should be decoupled from the resources that are assigned to it. But that's a refactoring that will have to happen. >> Hm. >> And if that were the case, then I think it's not so costly to have potential virtual machines waiting for you to assign resources to them. >> Right, and then it's just a question of how many resources you allocate to that particular [INAUDIBLE]? >> Yeah, yeah. It entirely becomes a resource allocation problem. >> Yeah. >> Which ought to be something that can be done with pretty low overhead. >> You mentioned a couple of things there that I wanted you to briefly touch on. You mentioned [INAUDIBLE] your work with PlanetLab and also OpenStack, so I had a couple of questions. One is: how did you come to working on this from PlanetLab? And the other is: what's the relationship of OpenStack to NFV, and in particular to what you're working on now with OpenCloud? Maybe, what is OpenCloud? >> Yeah. [SOUND] Going into the past a long way: going way, way back to when I was working on the x-kernel, it was always sort of the intersection of communication and computation that I found to be an interesting problem. And the x-kernel work, and a system we did called Scout, kind of took us in the direction that eventually meshed with active networks. And that was still about the same thing.
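The refactoring argued for above, decoupling a VM's existence from its resource assignment, can be sketched in a few lines. This is an illustrative model, not OpenStack's actual behavior (which, as described, fixes a KVM instance's flavor at boot); all names are invented.

```python
# Sketch: a (container-based) VM whose identity persists while its
# resource assignment changes. Re-sizing is a cheap allocation update,
# not tear-down-and-recreate.

class ContainerVM:
    def __init__(self, name):
        self.name = name
        self.cpu_cores = 0   # exists as a near-free placeholder...
        self.mem_mb = 0      # ...until resources are assigned

    def assign(self, cpu_cores, mem_mb):
        # Pure resource allocation: no tear-down, identity preserved.
        self.cpu_cores = cpu_cores
        self.mem_mb = mem_mb

vm = ContainerVM("cache-vm1")
vm.assign(2, 2048)   # scale up when load arrives...
vm.assign(1, 512)    # ...and back down later, same VM throughout
print(vm.name, vm.cpu_cores, vm.mem_mb)   # cache-vm1 1 512
```

Under this model, keeping many unprovisioned VMs waiting around is cheap, and elasticity "entirely becomes a resource allocation problem," as the conversation puts it.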
But the disconnect between that work and PlanetLab was very much influenced by, I believe it was a CSTB report, a National Research Council report written back towards the beginning of time; I would have to go and look it up. But it was drawing on the innovator's dilemma and disruptive technology. And when applied to the Internet, at that time we were starting to run around saying the Internet's ossifying, the Internet's ossifying, the answer was overlays. And so that was sort of the aha: that is the right bit of leverage, let's just build overlays. And that's kind of how [UNKNOWN] started, and we looked around, and at the time these servers were the best choice for that. And we had to write a lot of our own stuff, because it didn't exist at the time; today you can go get these things practically anywhere. And so OpenStack is one source of software. It tends towards heavier-weight virtualization, like KVM. But Linux containers are today's version of [INAUDIBLE]. So we've been pursuing, we're trying to turn Linux containers into a first-class virtualization technology, a container-based version of the technology rather than the other types. And that half answers the earlier question about how cheap it is to start up a virtual machine: if it is a container-based virtual machine, it is very inexpensive. >> Right. >> And the overhead of keeping it around is very low. So I think containers are a very important part of the ecosystem. But in any case, what we started to do was think about re-engineering PlanetLab using modern-day technology and open source software like OpenStack. We started down that path, and almost concurrent with that, in parallel, that was kind of what we were doing on my research-y side.
And at the same time, from that story I was telling you earlier, [UNKNOWN] was becoming a product that we were selling [UNKNOWN], and they were starting to put it in the middle of their networks. And so we've got this [UNKNOWN] experience about why we distributed [UNKNOWN] machines; meets today's technology for that, which is OpenStack; influenced by the requirements that operators have [UNKNOWN] style. And that all starts to kind of come together, and ultimately was the genesis of OpenCloud, which, you know, by one definition [UNKNOWN], but I think that's way, way underselling it. Yes, there are virtual machines, and yes, they are lumped together in things called slices. But beyond that, OpenCloud starts with OpenStack as the underlying virtualization, cluster virtual machine management layer. It also comes with an identity management subsystem called Keystone, and Neutron is the network-as-a-service layer of OpenStack. But what we've done, or are in the process of doing, is sliding in a new network hypervisor, OpenVirteX, that comes out of the Open Networking Lab. So it's an OpenFlow-based hypervisor. It creates virtual networks, but it gives you control isolation: you can assign your own controller, your own network OS, to your virtual networks. That, then, is the cornerstone of the infrastructure-as-a-service aspects of OpenCloud. But we've looked around and said, well, these services are what it's all about, so OpenCloud makes the service a first-class object, and you're allowed to add services and extend the OpenCloud operating system with new services. And we're very much inspired by Unix, right? As a Unix user, you start to sort of get where the line is between what was in the [UNKNOWN] and what was user commands and libraries, and, you know, what's in /bin, when it really doesn't matter.
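The "everything is a service, and you can extend the operating system with new ones" idea above can be sketched as a small registry. This is a hypothetical illustration of the design principle, not OpenCloud's real interfaces; the class names and fields are invented.

```python
# Sketch: services as first-class objects in an extensible cloud OS.
# Each service runs in a slice, and its virtual networks may carry their
# own controller (their own network OS), per the OpenVirteX-style model.

class Service:
    def __init__(self, name, slice_name, controller=None):
        self.name = name
        self.slice = slice_name
        self.controller = controller   # optional per-service network OS

class CloudOS:
    def __init__(self):
        self.services = {}

    def register(self, service):
        # New services extend the system, Unix-style: callers shouldn't
        # care which services are "built in" and which were added later.
        self.services[service.name] = service

cloud = CloudOS()
cloud.register(Service("cdn", "cdn-slice", controller="cdn-ctrl"))
cloud.register(Service("request-router", "rr-slice"))
print(sorted(cloud.services))   # ['cdn', 'request-router']
```

The parallel to Unix in the interview is that, as with `/bin`, the line between core services and user-added ones deliberately disappears from the user's point of view.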
And so we start with OpenStack, we add in a new network hypervisor technology, and we have deployments that go all the way from the data center to the wide-area network. We add to the network OpenFlow switches being controlled by the hypervisor, and OpenStack clusters at each of those points, in the network, at the edges of the network, in the data center. And then you're left with the orchestration problem, which is what OpenCloud's about. >> Hm, it sounds like a great environment to do some of the research, and to start working out some of the research problems you talked about. I guess one question I have there is, it sounds great, can we use it? Like, is it ready? And what's the first thing you'd work on, you know, if it's ready to go? Because it seems like a great thing for people to jump in on. >> Well, like I said, I was [INAUDIBLE], because that's an extremely interesting one. >> Mm-hm. >> But I don't think you can view that in isolation without also considering the topology of the virtual network you use to approach them. >> And then that gets you [UNKNOWN] proposition that, that [UNKNOWN]. So I think [UNKNOWN] interesting [UNKNOWN]. >> And the second thing, which is sort of what we are already trying to do, [UNKNOWN] optimization, trying to understand what it really means [UNKNOWN] that's supposed to be scalable [UNKNOWN]. And try to stand it up, using the OpenCloud abstractions, as a multi-tenant, 24/7, sustainable service, and ask, what were the problems you had to deal with? There's a secure bootstrapping problem you come into, there's request routing or some kind of load balancing, there's health monitoring. Certainly provisioning is part of it, but we kind of have the provisioning aspect there. So, trying to understand what those are, and then write those down and build services as well. 
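One of the recurring concerns just listed, request routing with health monitoring feeding into it, can be sketched in a few lines. This is a hypothetical, minimal round-robin router, not anything from OpenCloud:

```python
import itertools

class RoundRobinRouter:
    """Minimal request router: rotate requests across healthy backends."""

    def __init__(self, backends):
        self._healthy = list(backends)
        self._cycle = itertools.cycle(self._healthy)

    def route(self) -> str:
        """Pick the next healthy backend for an incoming request."""
        return next(self._cycle)

    def mark_down(self, backend: str) -> None:
        """Health monitoring feeds in here: drop a failed instance."""
        self._healthy.remove(backend)
        self._cycle = itertools.cycle(self._healthy)

router = RoundRobinRouter(["10.0.0.1", "10.0.0.2", "10.0.0.3"])
first_four = [router.route() for _ in range(4)]
print(first_four)  # ['10.0.0.1', '10.0.0.2', '10.0.0.3', '10.0.0.1']
router.mark_down("10.0.0.2")
after_failure = [router.route() for _ in range(2)]
print(after_failure)  # the downed backend is skipped
```

A production service would add weighted routing, health probes, and secure bootstrapping of new instances; the point here is just that these are generic service concerns, not device concerns.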
Not to mention all the new interesting services people are going to be thinking about. >> So OpenCloud is described kind of as a research infrastructure, a testbed. Not to draw too many parallels with PlanetLab, but much like PlanetLab was sort of a way for researchers to do some real stuff in distributed systems, I'm wondering, do you see it that way? Do you see it as a place for researchers to test out certain kinds of things? And PlanetLab also, by the way, had some pretty long-running services, so where do you see things going there? Do you think it's going to be a research playground, shall we say, or do you think it's going to be a host, you know, for some specific long-running services? >> I really, really, really hope it becomes a place for long-running services. And we're trying. I mean, in retrospect, one of the things PlanetLab didn't do, and I don't know that we could have at the time, was take those interesting services up and running and nurture them and help them along, and make it easier for users to get to them. We tried to be a little too neutral: well, let's create an interesting [UNKNOWN] so you can go use his. But if we had allowed you to integrate that into [UNKNOWN], so it appears as though it were a first-class part of [UNKNOWN], that would have been a good thing. That's not the only thing that would have [INAUDIBLE] the ongoing operation of it, which is a whole other matter. But yeah, the way we're thinking about it, if you go back and read the original PlanetLab white paper, you'll see something similar, except we didn't quite execute it the way we're trying to execute it this time, which is, we're trying to build an ecosystem. I'm sorry, we want OpenCloud to be part of the cloud ecosystem. 
And what that means is that it's certainly a place where researchers come to, but it's also a place that industry comes to. And of course you've seen technology move from research; there were a few examples of that in PlanetLab. We would like to see open source software that's important to industry fold back into OpenCloud, and we're doing that. OpenStack is the starting point, and that is critically tied to its traction in industry. And I think it's not just a showcase for research, but a showcase for any innovation. I mean, not just a showcase for the academic community, but a showcase for those third-party service vendors that we're hoping will start to crop up. And, you know, we're partnered with Internet2 in a very interesting way. It's not just that we're using their Internet2 resources. We have clusters at Internet2 routing centers, and I think there's interest within the Internet2 community to make this a showcase. You know, Internet2 is supposed to be about the next-generation internet. Well, the next-generation internet isn't just about bits and terabytes, it's also about functionality. And so a showcase for this kind of functionality, whether you call it an NFV function or a cloud app, or a blurred line between them, I think that's a real opportunity. >> Yeah, so the answer is all of the above. >> [LAUGH] >> Yeah, that seems interesting. I guess one theme that seems to be coming up a lot, as I go through this course for the second time this year, whether we're talking about new things like NFV, or things like OpenDaylight, or other things in this space, is that it's certainly becoming clear that when people talk about [INAUDIBLE], it's certainly more than just... You know, it's really more than just OpenFlow. 
But it's also maybe more than just SDN, right, in the sense that it's more than just programming switches. So I guess a good place to wrap things up might be to talk about where you see this whole area shaping SDN, because it seems like SDN is now much, much broader than what we knew it as several years ago. >> Yeah, [CROSSTALK] it really depends on your definition. SDN is a key enabler here. >> Right. >> I think it's a means, not an end; that's my own opinion. I think it's really about delivering services and functionality. >> But I guess, to that point, maybe I'll make the question a little more problem-focused. When we think of solving network management problems, or problems involving services and networks, certainly as a network person I've always thought about, like, using a controller to configure routers and switches and things like that. But it seems to me it's going to be harder and harder to think about problems without thinking about both the network and the hosts. >> That's right. That's right. And I'm sensitive to this when we're talking about the individual devices. So here would be my wrap-up, right. I think we have to get to a place where we think about programming a little differently. You know, in Computer Science 101 we're taught the distinction between the interface and the implementation. In this space, the interface is network-wide. You can't think about the interface to an individual device; that's a subroutine call. You can't think about the interface to individual devices, or to individual virtual machine instances. You're thinking about the interface to the network-wide function. And in SDN land, we call that an SDN controller. 
I would slightly generalize that and call it a service controller, of which an SDN controller is a specific example. It is the interface to a network-wide service. The implementation of that service is a set of instances, whether they're virtual machines or physical routers, and yes, there will be a [INAUDIBLE] interface from the service controller to the individual things. And certainly, the more you can coalesce them, the better. But that's an implementation problem. Then you get into optimization, placement, and so on. The struggle we're having, I think, in getting over the hump there, is that we're so eager to optimize that we haven't quite got our heads around the abstraction yet. >> Mm-hm. >> And I think once we do that, then there's tons of room for optimization. But those are optimizations. >> Right, yeah, it definitely seems like we are in the world of, like, plugins. [CROSSTALK] [LAUGH] >> Right, here's the plugin to control the network switches, and here's the plugin to sort of [UNKNOWN] the apps, but there's no, as you put it, network-wide [UNKNOWN] or services. That's what we should be [INAUDIBLE]. >> Yeah, yeah, I mean, this is way oversimplistic, but... >> Yeah. >> Modular programming is a way to control complexity. I'm just claiming that the service, as I just described it, with the service controller interface, is the module of the cloud. And I think that when you start there, it helps to separate the concerns in the right way. >> Cool. Yeah, there's a ton of food for thought there. Great, thanks a lot for your time. You know, I had a whole list of questions and I think we just, like... >> We danced all over. >> Scratched the tip of the iceberg, I think, in this area. There's probably going to be a lot more to talk about in the future as well. So this is great. Thanks a lot. >> Yep, you're welcome, enjoyed it. 
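The service-controller abstraction Peterson closes with, one network-wide interface with per-instance fan-out hidden as an implementation detail, can be sketched like this (hypothetical names, not a real controller API):

```python
class ServiceController:
    """Network-wide interface to a service; an SDN controller is one
    specific example. Callers program the service as a whole, never
    the individual devices or VM instances behind it."""

    def __init__(self):
        self._instances = {}  # implementation detail: per-device handles

    def add_instance(self, name, apply_fn):
        """Register one instance (switch, VM, router) behind the service."""
        self._instances[name] = apply_fn

    def apply(self, config: str) -> dict:
        """One network-wide call; fan-out to instances is hidden here."""
        return {name: fn(config) for name, fn in self._instances.items()}

ctl = ServiceController()
ctl.add_instance("switch-a", lambda cfg: f"switch-a: installed {cfg}")
ctl.add_instance("vm-cache-1", lambda cfg: f"vm-cache-1: installed {cfg}")
results = ctl.apply("drop-port-80")
print(results["switch-a"])  # prints "switch-a: installed drop-port-80"
```

Optimization questions, such as placement or coalescing instances, then live entirely inside `apply`, without leaking into the interface the caller sees, which is exactly the separation of concerns argued for above.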