Okay, thanks. Today we're going to talk with Chang Kim, who is Director of Network Architecture at Barefoot Networks, and has also previously worked on the Azure platform at Microsoft. Today we're going to talk to Chang about P4, and various things going on with P4, which we're also going to learn about a bit in the course. Chang has been in on P4 almost since the very beginning, so we'll get a chance to learn a lot about P4 today. >> Great. Thanks a lot for the invitation, Nick. >> Yeah, thanks for spending the time. I was wondering if you could talk just a little bit about what P4 is, because I could describe it in my own words, but I think it would be nice to hear it from you. How did it come about? Where did the idea come from, and what problem is it trying to solve? >> Sure. So P4 is a language. It's a domain-specific language that networking users can use to describe the behavior of the network forwarding plane, also known as the data plane. The original paper was published about a year and a half ago in ACM CCR. I was not one of the main authors, but I started working with the authors early on, and I fell in love with this language and the vision that this language tries to achieve. I thought it was very plausible and very impactful, and that's why I joined this bandwagon. The main motivation behind P4 is that today, if you look at all the networking solutions, there is a varying degree of programmability, but all this programmability ends at the interface between the control plane and the data plane. Up to the control plane you have some visibility and some programmability. But once you go into the real data plane, where individual packets are handled, it just becomes a black box. You have no idea what's going on, how to manipulate it, or how to collect a lot of information that is critical for you. Of course you could try to use a CPU or an FPGA, which have been programmable since their inception. But then you cannot meet the right performance targets, because switches have to sustain multi-terabit-per-second speeds, or several giga-packets per second or more. And the industry has also been changing, trying to achieve this kind of programmable packet-forwarding paradigm in terms of networking chip architecture. So these two things, basically the desire from networking users for programmability, and the networking chip industry's recognition that, hey, we may actually be able to come up with a programmable solution with no, or almost minimal, performance and size penalty, meet together at this point. And to marry these two trends, you need a domain-specific, high-level language that users can use to program these new types of devices. That's where P4 came about. >> I see. So yeah, in the original paper you talk about P4 being able to program a wide variety of general-purpose programmable networking hardware. It seems like there's a wide range of possible hardware targets, should we say, including things like FPGAs, which may not really fit into a high-performance switching-chip architecture. So what do you see as the main place where P4 can ultimately be used? Do you view it as something that should be used for more general-purpose targets, or do you think it's really going to end up being focused on high-performance switching-chip architectures?
>> Well, the language itself is actually designed intentionally to be neutral, to be able to cover a variety of targets, but what that means is that you probably have to choose a common-denominator approach. Because if you think about a CPU and an FPGA or an NPU, for example, they are very flexible, especially the CPU. You can do almost anything, and networking is just one of the numerous applications. So you might just ask, hey, I can program my new software switch using C and C++, of course. But that doesn't give the right abstraction for network engineers, right? Maybe that's a good enough abstraction for developers or core developers. But network engineers, who have domain-specific knowledge about networking protocols and operations, cannot speak that language very easily and then deliver a new networking data plane in a timely fashion. That's why, if you want to come up with a domain-specific networking language targeting a variety of targets, you have to choose a common-denominator approach, and P4 has chosen that approach. To be a little more specific, that's why in the P4 language you don't see something like pointers. Because pointer operations, if you think about it, are very expensive; you have to access memory multiple times, and you only have a few hardware clock cycles to handle these packets in a multi-terabit-per-second pipeline. P4 also doesn't have abstractions or language support for, say, recursion, or floating point, or loops. It does allow cycles when you define a parse graph, but it doesn't allow looping, especially when you define the match-action pipeline, because you can't hold a packet and then revisit that packet multiple times. So those things are deliberately not included in the language spec. But again, FPGAs and NPUs and CPUs and even GPUs are flexible enough. In terms of the features they offer, they support a superset anyway, so they can support P4 very easily. >> I see. So if you want to compile a P4 program to an FPGA, there's kind of no problem, right? Because the facilities of an FPGA sort of offer a superset of functions, compared to, like, Xpress. >> Right. >> I see. Cool. So I've also noticed a lot of progress in the P4 language and consortium over the past year, particularly even in the past few months. I think the students in the course have noticed that things are happening. And it seems like a lot of companies are interested, including Cisco, which I think is quite remarkable. What do you think are the companies' main interests in P4? Like, do you think they're developing their own compilers for their own specialized hardware? Are they developing their own targets to compile P4 to? And why do you think they're interested in P4? >> I think they're very much interested in this work. In fact, there were quite a few representatives at the recent P4 meetings, and one of the executives from Cisco actually joined the panel and shared his opinion about P4. And essentially what he said is that P4 is a really big deal; it means a lot of opportunities opening up for Cisco, and not just for Cisco but for almost the entire industry. He actually gave a few reasons, so let me rephrase his points; it might actually be helpful for this interview. >> Absolutely. >> Yeah. So obviously there are a few low-hanging fruits. For example, individual system vendors can differentiate themselves at the forwarding-plane behavioral level, rather than just in control-plane software.
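To make the language constructs Chang describes concrete for readers following along, here is a minimal sketch of roughly what a P4 program of that era (P4-14 syntax) looks like. The header, table, and action names are illustrative, not taken from any program mentioned in the interview.

```
// A minimal P4-14-style sketch (illustrative names): a parser that extracts a
// fixed-format header, and one match-action table applied in a loop-free pipeline.
header_type ethernet_t {
    fields {
        dstAddr   : 48;     // fixed-width fields; no pointers, no dynamic memory
        srcAddr   : 48;
        etherType : 16;
    }
}
header ethernet_t ethernet;

parser start {
    extract(ethernet);      // parse graph: a single state here, no unbounded loops
    return ingress;
}

action forward(port) {
    modify_field(standard_metadata.egress_spec, port);
}
action _drop() {
    drop();
}

table l2_fwd {
    reads   { ethernet.dstAddr : exact; }   // match on a parsed field
    actions { forward; _drop; }             // the control plane picks one per entry
    size    : 16384;
}

control ingress {
    apply(l2_fwd);          // feed-forward: each table is applied at most once
}
```

The control plane populates l2_fwd with entries at runtime; the P4 program itself only defines the structure of the pipeline, which is why recursion and loops over the match-action stages can be left out of the language.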
Right now, if you're actually familiar with what is happening, especially in the data-center-style or enterprise-style switch market, there are only one or two chips that are used very widely. So although you have a large number of systems delivered to the end customers, the actual data plane behavior is almost identical, and nobody is able to distinguish themselves from others easily. Also, P4, or a software-based data plane description, means that the system vendors can move very fast, at software speed rather than hardware speed. If you're familiar with hardware development lifecycles, it's at least three years, whereas in software things can move a lot faster, at least an order of magnitude faster. And the vendors can also come up with different devices, more customized to their target customers or maybe some sub-industries. For example, a device for the financial industry, a device for the medical industry, a device for large enterprises or data centers. Right now they're all using almost the same chip, but if you look at their data plane requirements, they're actually very different. For example, large data centers don't use multicast. Instead they want large routing tables and a lot of ECMP and so on. Whereas for the financial industry, unicast is almost a marginal application. They want multicast, a lot of replication, but done extremely fast. They want a lot of visibility in the data plane and so on. So how would you meet these kinds of different requirements fast enough? That's software's benefit. And finally, another thing that came up at the latest meeting is that a few large companies have tried to develop programmable networking chips by themselves. They might actually be working on them right now, their chips and their compilers and optimization tools and so on. But based on those experiences, what they say is that the moment you have programmable solutions, hardware bugs simply go away. The reason is that in a programmable chip, the programmable data plane components, the individual forwarding elements like match-action units and those kinds of things, are all the same. They're a fungible resource, and you just have a large number of those fungible resources in the hardware. So even if one of them, or some of them, has some hardware bugs, you can very easily mask them at the software level. Because it's not that you have only one module doing one particular function; you can realize a function, say IPv4 forwarding, on different parts of the hardware anyway, because the hardware is the same. So this allows software to cope with a lot of problems very easily, and even financially this means a big benefit to the system vendors. Because right now, when they deliver solutions that happen to have some defect at the hardware level, they have to replace those devices in the field. And that means a lot of money, really. They have to compensate for the disruption and those kinds of things as well. Whereas if you can patch those kinds of problems rather easily with a software release, it's just so much easier and cheaper for them. >> That's interesting, yeah. So rather than basically having to recall a bunch of faulty hardware, or go through a whole hardware development cycle, you could mask some of these faults. >> Exactly. >> Very interesting. And I guess the other thing that you mentioned was basically the opportunity to take, like, a fixed resource and do specialization.
Like, for a particular use case, like in a data center or a financial network, the same set of resources might be repurposed for one thing or another, depending on whether you need lots and lots of table entries or whether you need multicast. >> Exactly. >> Yeah, that's interesting. You've listed a couple of use cases, I guess you might say, specific needs in data centers and in financial networks. Now, I guess from what you're saying, the way I could interpret that is that a network operator, someone who is basically designing a data center or designing a financial network, might have some flexibility in terms of how they design that, everything down to packet formats, using P4. Who do you think the main users of P4 will be? Do you expect it would be, for example, a network operator or an architect in a financial institution who basically designs the network with this new relaxed constraint, that they can have custom packet formats and packet processing pipelines? Or do you think it's going to be a switch vendor that basically does this and packages it up and sells it to the institution? Where do you think that line will be? >> That's a great question. So configuring or describing a data plane can actually mean various things. It's almost a wide spectrum. On one extreme end, there are these super users, like data center networking teams, who can actually introduce a completely new protocol with a completely different type of pipeline, working in a totally different fashion. For example, I'm going to do source routing everywhere in my data center. I don't even want to use the IPv4 format, because it's my playground; I can do whatever I want. On the other extreme end, there are typical network operators in small enterprises or campus networks. They don't really need to use drastically different networking protocols or data planes. They just want some flexibility in the data plane, for example, adjusting table sizes. When you started building your network you expected to have this many subnets, but it's growing. So you're just wondering whether you could have a little bigger routing table, because you're not using a large MAC forwarding table, for example. And then of course, in between these two extremes there are various points. So, let's start with the extreme end, where you actually want to do a lot of new things. I do agree with you that the system vendors or network device developers will definitely be consuming P4 first, because they have to deliver new devices with new features very soon. And also the network architects in very large companies, the web-scale online service providers such as Google, Microsoft, Facebook, Amazon. They will definitely start using P4, because they basically want to build a very well-optimized network, especially the data center network and the inter-data-center backbone, meeting their particular customer or application needs very well. And then gradually more and more people, especially once they feel that, yeah, data plane programming, or maybe even field reprogramming, is doable, and I have done it, and I feel comfortable about it now. Then they may try to do this kind of programming more often. And also, who knows? There could be some third-party consulting services doing this kind of data plane optimization for small to mid-size enterprises, which have network operators but don't really know the full details needed to design new data planes.
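As a rough, hypothetical illustration of the table-sizing point Chang makes above: in P4, the capacity of each table is just a declaration in the program, so, to the extent the target shares memory between tables, trading MAC-table capacity for routing-table capacity is a recompile rather than a hardware change. The headers and actions referenced below are assumed to be declared elsewhere, as in the earlier sketch.

```
// Hypothetical table declarations (P4-14 style); the ipv4/ethernet headers and
// the set_nhop/forward/_drop actions are assumed to be defined elsewhere.
table ipv4_lpm {
    reads   { ipv4.dstAddr : lpm; }
    actions { set_nhop; _drop; }
    size    : 65536;    // grow this when routes outgrow the original plan...
}

table mac_fwd {
    reads   { ethernet.dstAddr : exact; }
    actions { forward; _drop; }
    size    : 8192;     // ...by shrinking this one, if the target's memory allows it
}
```

Whether the two sizes really trade off against each other depends on the target's memory layout, which is exactly the kind of target-specific detail the compiler has to resolve.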
>> What do you think are some examples of where you might want to do this in-the-field switch reconfiguration? You mentioned one, which I hadn't thought of and which I thought was pretty interesting, where you said someone's running out of memory for the IP routing table and they want to steal some memory away from the MAC table, or something like that. That seems really, really interesting, actually. I'd imagine that comes up quite a bit, and that seems to me like sort of an emergency use case. Do you think there are others? I mean, do you think there are going to be a lot of instances where there's going to be field reprogrammability or reconfiguration on the fly? Now, you sort of look at the control plane and you think of it as happening, for lack of a better word, kind of in real time. Do you imagine any kinds of in-the-field reprogrammability that's either frequent, or something that would happen at high frequency? >> Well, probably not that frequently, especially in the beginning. It's not like reconfiguring your device with new ACLs or new routing protocols or new BGP neighbors; it won't be at that level. But that said, especially based on my experience at, say, Microsoft: when you build a large network like that, the expected depreciation cycle of the network is typically three years. And within those three years, things do change significantly. For example, your policies can change: your address assignment policies, your routing policies, your access control policies, your QoS policies do change. And when they change, sometimes you find out that, oh, I cannot meet these new policies easily with this fixed set of functionality, or this fixed data plane architecture. So you really want to revisit that once in a while, especially when you have a new set of policies or a new network architecture. And three years, actually, is very long if you think about the way these web-scale providers work. They typically plan only for about six months, and beyond that they don't really know. It's not that they're incapable of doing it; it's just the nature of the work. Things change too fast, things grow too fast. The types of applications that become very popular change very quickly, so you have to be very agile, and we were actually hit by these kinds of significant routing and address assignment policy changes several times over three years. And right now, if you don't have any data plane programmability, the only way of handling this is just going to your vendor and saying, hey, I have this urgent need, can you just fit this into your next-generation chip architecture, and then negotiating with them, and it's a- >> It's going to cost you a lot of money, too, in the process. >> Yes. Exactly. Yeah. >> You have your back up against the wall, and you need this feature. >> Yes. >> And you need it now. And you're dependent on the vendor's release cycle, I guess. >> And that's why, if you think about it, large data centers try to use more and more VM-switch, software-switch capability, to cushion this kind of mismatch between policies and the capability to realize those policies, because VM switches are malleable to them. >> Mm hm. >> But there's only so much you can do using only the end hosts, right? Individual hops are still fixed, and sometimes you really need big changes in the individual hops. >> Yeah.
It seems like a lot of the architectures have kind of adapted to these relatively slow-moving changes in switch capabilities by putting all of the flexibility and function at the host and at the edges, and then basically tunneling over the switches. >> Yeah. >> But maybe you see that changing, if the switches become a bit more agile. I mean, maybe the pendulum will swing back. >> Well, it probably won't swing entirely back to the original position, because there are some things that these end-host virtual machine switches can do very well. For example, maintaining a large amount of connection state. >> That's right. >> Because it's software. It's RAM. You can maintain state for almost millions or tens of millions of connections very easily. And the integration between the software switch and the logically centralized control plane is much easier. >> Mm hm. >> There are certainly benefits to that approach, but what I would imagine is that currently they just view the network as a black box. But if you can collect a lot of information from the individual hops, for example link utilization or queueing latency, then you can actually introduce intelligence on both sides and basically choose the best approach for each need. So you could have a hybrid approach, or more network-centric approaches for some types of applications. >> Yeah, in terms of use cases, I mean, you mentioned that sort of reprogrammability and things like that to make good use of resources, and that seems really compelling. But those are sort of existing kinds of use cases, and I'm wondering, do you see, for example, with P4, that new kinds of use cases may be possible? For example, I've got a switch but suddenly I need this very specialized access control or firewalling capability, and I don't want to buy a whole new firewall to do that, so let me just put a little thing in my pipeline. Do you see those kinds of use cases as possible as well? >> Yeah, definitely. One particular network management activity that can benefit immediately and immensely from these programmable devices is network monitoring and analysis, or diagnostics, because right now there are only a few tools that you can use: some counters, and maybe sampling-based sFlow. And as you might know, NetFlow is there, but it's very expensive to build in merchant silicon, so no merchant silicon supports NetFlow right now. With just basic sampling, basic mirroring, and some counters, debugging a networking problem is a really painful process. Whereas imagine that you can define your own counters and your own custom mirroring mechanism or custom instrumentation mechanism to collect this per-packet information from the individual hops; then you can think of doing a lot of exciting stuff. >> So you could gather more than just NetFlow-style statistics. You might do specialized kinds of counters for specific participants, or full PCAPs, or what kinds of things might be possible with it? >> For example, I'm not sure if you're familiar with this paper called Millions of Little Minions; it also goes by Tiny Packet Programs, published in last year's SIGCOMM. >> Mm-hm. >> So I worked with Vimal and Mohammad on this project. Basically, the idea is, let's make each individual packet collect some useful information about that particular packet, while it's being forwarded, from every individual hop. For example, can I collect the switch IDs?
Can I collect the input port ID and output port ID, so that I can actually enable layer 1 physical traceroute for every single packet? Why is this useful? It may sound very primitive and simple enough, but the value of this is huge, especially in the context of a large data center network where pseudorandom spreading is used everywhere. >> Yeah. >> You have some particular problem with one particular connection or packet. How do you pinpoint that this packet got lost exactly there, or that this packet exercised this particular physical path and had this problem? How could you do that without that kind of fine-grained monitoring capability? >> Yeah, I know, that's incredibly useful. I mean, yeah, there's lots of stuff out there that sort of tries to do layer 2 topology discovery and other things in enterprises as well. It's all imperfect at best, or sort of involves dumping bridge tables all the time and lots of things like that. So I could see pretty interesting use cases there in enterprise networks as well as the data center [INAUDIBLE]. >> Yep, yep. >> Yeah. So we're talking a little bit about measurements and counters and things like that, and that's obviously just one type of state. What do you think about the idea of putting state into packet processing pipelines in general, and what kind of support does P4 have for that? I mean, other use cases might be things like stateful firewalls or QoS-related kinds of applications, like token bucket shapers; all these things require some amount of state in the pipeline somewhere. Do you see P4, and the hardware onto which you would compile it, as being able to support those kinds of things? >> That's a great question. So P4 right now has a few language constructs that can be used to model these stateful objects. It has counters; counters are basically very primitive stateful memory. You read some value, add one to it, or add the number of bytes of that packet, and then save it back. Then when the next packet going to that particular memory location arrives, you build on the value that happens to be sitting there, and hence it's stateful, right. Counters and meters, as you said, are examples of stateful operations, and P4 defines them as embedded or built-in language constructs for a pipeline object. In addition to that, P4 also understands the need for sort of generic stateful memory. Instead of doing just counting or metering, you might want to do your own more elaborate operations. >> Mm-hm. >> So we model that notion using something called a register in the latest P4 spec. So it is there, but that said, it's very much an evolving area in the current P4 language spec, because if you go down that line of reasoning, you quickly discover that stateful memory is a really rich set of functionality. The very basic notion is very simple: it's read, modify, write. Read some value from a location, modify it, and then save it back. Right, that's RMW, read-modify-write. But when you do the modify, what kinds of operations are you going to allow? Are you going to allow almost unbounded arithmetic expressions? Also, are you going to allow just modify, or are you going to allow test-and-modify? For example, test whether this value is larger than or equal to something, and then modify. So depending on what kind of semantics you allow, it can actually do a lot of interesting things.
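To make the counter-versus-register distinction concrete, here is a small hypothetical P4-14-style sketch: a packet counter whose read-add-write is done implicitly by the target, and a register updated with an explicit read-modify-write. The names (flow_pkts, flow_bytes, track_meta_t, meta.flow_index) are made up for illustration, and how rich the modify step can be is target-dependent, as Chang notes.

```
// Per-flow metadata used to index and update the stateful objects (illustrative).
header_type track_meta_t {
    fields {
        flow_index : 16;
        byte_total : 32;
    }
}
metadata track_meta_t meta;

counter flow_pkts {                 // built-in stateful object: counts packets
    type : packets;
    instance_count : 4096;          // one cell per tracked flow
}

register flow_bytes {               // generic stateful memory
    width : 32;
    instance_count : 4096;
}

action track_flow() {
    // Counter: the implicit read-add-write is done by the target.
    count(flow_pkts, meta.flow_index);

    // Register: an explicit read-modify-write sequence.
    register_read(meta.byte_total, flow_bytes, meta.flow_index);
    add(meta.byte_total, meta.byte_total, standard_metadata.packet_length);
    register_write(flow_bytes, meta.flow_index, meta.byte_total);
}
```

A test-and-modify variant of the same update, or anything fancier than simple arithmetic, is exactly where target support starts to vary.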
And we don't yet know how to model this in a really nice, target-oblivious fashion that is also useful for a lot of different targets. So that's a particular area where we need contributions. >> I see. So basically the lowest-common-denominator issue that you were mentioning towards the beginning. Like, to support stateful operations across a variety of hardware, you need to build abstractions on whatever that lowest common denominator really is. >> Yeah. And another important thing about stateful processing is that, if you think about it, basically almost all buffer management algorithms, like scheduling, and yeah, especially scheduling, are stateful operations. >> Mm-hm. >> Which queue are you going to serve next? It's not just a stateless function; it's based on the history of your serving all the other queues, and hence it's stateful. And we don't know how to model this nicely. In fact, we probably don't have an entirely programmable scheduler in the hardware yet either. That's why P4 currently does not cover this queueing mechanism as a programmable target. P4 covers the programmable parser, programmable match-action units, and the programmable deparser, but the shared buffer, the logical description of the behavior of buffering and scheduling, is not covered by the P4 spec yet. But we would love to go there at some point in the future. >> And do you think the main hurdle for that now is trying to figure out what the model should be? >> Yes, that's one hurdle, absolutely. And the other hurdle is that we actually don't have very good programmable hardware that can support such an abstraction yet. So we need progress on both sides. >> I see. >> And when I say we, it doesn't mean just Barefoot. The industry doesn't know how to come up with this programmable buffer and scheduler yet. >> I see. >> At multi-terabits per second, it's really demanding work. >> It seems like if that could be solved, there's just a tremendous opportunity for it. I mean, you spoke earlier of financial networks and others. I could imagine having really strict QoS requirements with really strong constraints on needing super low latency for specific flows. >> Yep. >> Interesting, so that seems really important. I wanted to talk a little about just the mechanics of P4 as well, in terms of compilation and what problems you see there. I know that the ONF has had this protocol-independent forwarding working group, and there are some other efforts looking at independent, sorry, intermediate representations. And I'm a little confused myself as to how to think about P4 versus intermediate representations. Like, do you think it's correct to think of P4 more as a high-level language that people would specify these pipelines in? Or is it more like an intermediate representation, something that a compiler might actually use to optimize the layout for a particular piece of hardware? Or is it somewhere between these things? What's the right way to think about where P4 sits there and how it might interact, like, is it an IR, does it interface to IRs, what's the big picture? >> I think the people in the P4 language consortium, or the P4 community, usually tend to think of P4 as just a high-level language that a user can directly use to describe their own data plane behavior. And I also understand why an IR, an intermediate representation, can be useful for some optimization, or cross-target optimization, purposes.
The reason the P4 language consortium focuses on the language first is purely pragmatic, meaning that, at the end of the day, users want to program this, and they need a language to program it. They cannot program the device using an IR. They need a language first. >> Yep. >> And once you have a language, it's very important to have a common industry-wide language, otherwise nobody will start to move at all. If you have a totally fragmented world, which language should I learn? >> [LAUGH] >> I mean, this industry is just starting, and if you're saying, hey, there's this language and that language, and different targets may have different languages, it doesn't help anybody. The industry might not move at all. So that's why we believe that a common industry-wide language is very important, and that solving that problem is probably the first step. And once we have that, we will of course have multiple targets. Then the importance of an IR, especially as a vehicle to enable cross-target optimization and cross-target portability, may come next, within a couple of years or three years from now. >> That makes sense. I guess it sort of doesn't make sense to necessarily propose an IR before you have agreement on what the high-level language might be. >> Yep. >> For a stripe. >> Yes. >> Then the IR may be useful for cross-target optimization. >> Yeah, exactly. >> It sounds like you see those optimizations as probably happening in an IR, rather than a compiler manipulating P4 itself to do those optimizations. >> Yeah, I think so. >> Cool. What do you think about, I mean, you mentioned earlier, and I thought it was really interesting and something I hadn't thought about, this idea that you might have buggy hardware that ships to the field, and without the programmability, you're kind of in trouble. [LAUGH] But of course, a P4 specification could be buggy too. The big advantage there is that if you've got a buggy specification you can of course redo it, recompile, and push it, and you don't have to wait on the hardware development cycle. But you can still have bugs, so what's the big story there, and also what's the current picture in terms of what kind of tools exist for verification and debugging? What things don't exist, but would be nice to have? >> So, I think we need an equivalent of almost everything that we have in the CPU-side programming world. What's the equivalent of GDB? >> [CROSSTALK] That's what I was going to ask. Is there a GDB equivalent? >> Exactly. What's the equivalent of a core dump, and how can I analyze it? What's the equivalent of a profiler? What's the equivalent of static and dynamic verification tools, and so on? I think it's a very rich area, and a few of us who are working on these programmable targets, as well as compilers, are trying to come up with some of this. But yeah, we're all just starting right now. So I would just say that this is a huge research and engineering area, where everybody can actually find a very interesting problem and contribute. >> In your experience programming P4 so far, in terms of debugging, do you see common bugs showing up already? I mean, what's the equivalent of, like, dereferencing a null pointer or something like that? >> Well, first of all, you don't have those nasty problems because we don't have pointers. >> Yeah, you have no memory, you said, so you can avoid those, but are there other sorts of analogous bugs?
>> I think semantic bugs are probably hard to debug. The problem is in your P4 code itself; it's protocol specific. For example, in one part of your P4 program you assume that this field carries this meaning, but in some other part of your P4 program you were looking at some other fields. >> I see, so you parse something out of the packet assuming that basically this is a source IP address or a VLAN tag or something, and then later in the pipeline you do some operation like a write or a decrement or something, and you're just operating on the completely wrong part of the bits. >> Yes. >> That makes sense. Interesting. >> And also, one of the lessons we've learned while using P4 programs and trying to come up with new applications with them is that the exception handling part is weak, especially in the P4 spec, as well as in some of the targets. So, for example, when you receive an unparseable packet, how do you handle that? When a packet going through the pipeline has a field with a totally unexpected value, how do you handle that? So that boundary and exceptional case handling is usually a little bit tricky, and that's exactly where debugger- and profiler-like tools are very helpful. At Barefoot we have our own set of solutions; it's not complete, but we have some debugging mechanisms, we have some stepping-into mechanisms. But again, as I said, some of these are very target specific, because depending on which hardware you're working on, the hardware may or may not have that support. So it's an interesting and big problem, I'd say, in general. >> Yeah, interesting, so that opens up kind of a whole line of possible research, like you said. >> Yeah. >> Sort of verification and debugging. I like the way you put it: every problem that we have thought about in software debugging, now suddenly you can ask the analogous question. >> But let me share one thing, though. I don't want to scare off potential P4 programmers, because here is one lesson that I got. To come up with an almost data-center-style switch feature set, rivaling the feature set of the existing fixed-function chips, we had to write a reference P4 program, and the size of that P4 program is only about 4,000 to 5,000 lines. So it's not like you're debugging multi-million-line C code developed by an army of developers, right? So you can [CROSSTALK] network domain-specific backgrounds. Debugging this is not sort of a Herculean task, I would say. It's an interesting and exciting task, especially if you have the right set of tools. >> Just a final question, then. What do you think are some other, kind of, unanswered questions that P4 introduces? I mean, we've been talking a lot about compilation, optimization, debugging, verification. At the beginning, you talked about new use cases. >> Yeah. >> I guess for someone who wants to think about research in P4, and you mentioned measurement and monitoring as well, which I thought was a cool one, is there anything else that you want to say in terms of good places to get started and where the big victories might be? >> Yeah, so one of the topics that comes up often, especially these days in the language community, is this thing called language-architecture separation.
So, the original P4 language assumed that the language is going to be used for switch-style devices, but you don't necessarily need to use P4 only to build switches. You can use P4 to do any packet processing, or to come up with any packet processing devices, like NICs, network interface cards, or some appliances, middleboxes. >> Middleboxes, yeah. >> Right. And those devices or targets typically have a different architecture than switches. But the current P4 spec sort of combines these two very tightly, and it doesn't allow architectural expansion without introducing new language keywords, new language constructs. It's usually very bad if your language keeps changing, especially for the compiler and debugger developers. So to keep the language core static, fixed, and small enough, and yet allow expansion of its scope, I think it may be very useful to decouple language and architecture. And so that's a very big and interesting problem on its own. Contributions on that side would be very helpful, and they can also have immediate impact on the P4 consortium. Register modeling is, again, an important topic, I think; stateful memory modeling. >> Yep. >> Buffering, again, there are two big problems lined up there: how can you design programmable hardware for that, as well as how would you model it? And yeah, monitoring, debugging, and diagnostics use cases, those are, I think, again very useful and a huge topic on their own. On the compiler side, this problem of field reconfigurability may introduce another interesting set of compiler problems. For example, how do you know that this new P4 program is actually something that you can migrate to from the existing P4 configuration seamlessly? How do you know that it's not? >> Yes, because with this compiled target, you're sort of sliding out the old one and putting in a new one, and you've got controls coming down. >> Yes. >> How do you know you haven't broken the controls? >> Yeah. Sometimes you may be able to achieve this seamless migration, sometimes you might not. How would you know? It's target specific and program specific. And if you know that, yeah, it's doable, then how? The migration process also has to be generated by the compiler. >> Right. >> So that's an interesting problem, in my opinion, because in the existing CPU-based programming world, there is no such thing as field reprogrammability. >> Right. >> When you have a new program, you kill the existing program and you simply run the new program. >> Right, right. >> So I'm not sure how much of the existing theory and lessons you can borrow from there, and hence it's an interesting problem to me. >> Yeah, it's definitely one that the, sort of, software community has looked at a bit, in terms of seamless software upgrades and things like that, security patches and other things like that. >> Yeah. >> Well, that seems very interesting. Cool, well, thanks for your time, really appreciate it. I think the students will enjoy watching as well, and I guess in about a year or so we'll do this again, and we'll see what happens with P4 in the next year. I think it should be really exciting, so thanks. >> Thanks a lot for the opportunity. It's been fun! Thank you.