Jane Street
Tech Talks

October 16, 2017

Verifying Network Data Planes

Nate Foster
Jane Street

P4 is a new language for programming network data planes. The language provides domain-specific constructs for describing the input-output formats and functionality of packet-processing pipelines. Unfortunately P4 programs can go wrong in a variety of interesting and frustrating ways including reading uninitialized data, generating malformed packets, and failing to handle exceptions.

In this talk, I will present the design and implementation of p4v, a tool for verifying P4 programs. The tool is based on classic software verification techniques (due to Hoare, Dijkstra, Flanagan, Leino, etc.), but adds several important innovations: a novel mechanism for incorporating control-plane assumptions and domain-specific optimizations, both of which are needed to scale up to large programs. I will discuss our experiences applying p4v to a variety of real-world programs including switch.p4, a large program that implements the functionality of a conventional switch.

p4v is joint work with Bill Hallahan (Yale), JK Lee (Barefoot), Cole Schlesinger (Barefoot), Steffen Smolka (Cornell), Robert Soulé (Barefoot and USI), and Han Wang (Barefoot).

Nate Foster

Nate Foster is an Associate Professor of Computer Science at Cornell University and a Researcher at Barefoot Networks. The goal of his research is to develop languages and tools that make it easy for programmers to build secure and reliable systems. He currently serves as chair of the P4 Language Consortium steering committee and as a member of the ACM SIGCOMM Symposium on SDN Research (SOSR) steering committee. He received a PhD in Computer Science from the University of Pennsylvania, an MPhil in History and Philosophy of Science from Cambridge University, and a BA in Computer Science from Williams College. His awards include an NSF CAREER award and a Sloan Fellowship.

Transcript

I want to introduce Nate. Nate is an old friend, a professor at Cornell, and he has done a ton of interesting work at the intersection of systems and programming. He's done lots of things in the functional programming world; in fact, a lot of the stuff he's built over the years has been built in OCaml. But this work grows out of a bunch of interesting stuff in the world of software-defined networks, and today Nate is going to talk about a language called P4, a more expressive language for programming devices that work in the context of software-defined networks, and really using verification techniques to push that to the next level. Without further ado, Nate.

Thanks. It’s really fun to be here. As Ron said I’ve been a longtime OCaml programmer and I’m really grateful for all the work that goes on here and that’s funded by Jane Street to keep the language very active. Makes me happy because I love having great tools for my research. I have a picture and I want to ask how many people know what this is?

Is that ARPANET?

That is the original topology of the ARPANET from the late 60s. If you've ever taken a networking class, you might have read the paper on the left by David Clark, who was one of the chief architects of a lot of the protocols that we still use today for the internet. In that paper you can read about what issues they were thinking about as they designed these protocols. It's hard today, because the internet has become so important in all modern computing systems and even in our personal lives, to go back and think about what it would mean to build a network that connects the whole world. But in the late 60s, when they were doing this work, the predominant paradigm was actually circuit switching.

You had the telephone networks and telegraph networks, and it was this great shift to move to packet-switched networks. It was also a huge challenge to do things like take multiple networks that might have different kinds of hardware, might have different conventions about what a packet is, how big it could be, how the bits are oriented. They designed this set of protocols that really, I think, impressively made all that work. The internet really is designed with this principle that nothing is centralized and you have robustness as the organizing principle. But if you fast forward to today, networks have really grown quite a bit in size and complexity. The pictures on the right are pictures of the topology that Facebook uses in their data centers. That's a picture of the inside of one of their data centers.

We expect networks to provide more and more performance and to do more sophisticated kinds of processing. This has become pretty difficult for network operators to actually manage correctly. Think about managing a network with thousands of switches running multiple protocols: if it ever goes down, your boss is either yelling at you or, worst case, like for Amazon, you might even be in the headlines. This is quite a challenge. Despite the challenges of managing these large, complex networks, network operators are really quite talented. They've come up with a set of practices that for the most part work really well. If you go talk to network operators at major tech companies, you'll find that they're doing things like using high-level policy languages to describe declaratively what they want the network to do, and then generating all the device configurations from that.

They're doing various forms of dynamic monitoring; they're doing things like scraping the configurations of all the devices using command-line tools and then doing some analysis of those configurations to see if there might be a bug. Then they're also using tools like ping and traceroute. These are old command-line utilities that let you poke at the network by sending special packets. If you look at what these packets do, you can often find out a lot about how the network is behaving. This is the state of the art today. As I said, for the most part it works pretty well, but every few months there are these high-profile outages where something terrible happens in the network, and you read these post-mortems in the newspapers of how there was some configuration error on some router and this led to some cascading sequence of failures that took down a data center.

If it was a public cloud provider, then all these other services went down, and this can lead to all kinds of lost productivity and money and everything else. If I could put on my academic hat, there's an obvious thing to do, which is to move beyond these techniques, which are well thought out and very pragmatic, and try to adopt some of the techniques that we use in software. Maybe we could have tools that would let us verify properties of the network like: does it connect all the hosts? Does it provide some appropriate level of redundancy for fault tolerance? Does it correctly isolate segments of the network that shouldn't be able to talk to each other? Is it free of forwarding loops? Forwarding loops are really bad in a network. Does it balance traffic amongst all the paths? Maybe we can even do this automatically. A question?

I'm just making a remark. That just shows that we are not as decentralized as the original ARPANET wanted to be, right? The modern internet is not as decentralized as the original design. [inaudible 00:06:15] right?

Right. I should say one thing that's different about some of these modern networks. The internet still exists in its fully decentralized form: there are many autonomous systems across the entire globe, and the internet works without them having to really coordinate at all, except that they all run, say, BGP, the routing protocol used on the internet. But there are also these very important networks that are owned by a single organization. Facebook runs this massive network including data centers; they have a private backbone that they run that connects all their data centers; they have points of presence all over the world. The tools that were designed for the internet are not necessarily the right tools for the situation when you don't have multiple competing organizations who need to cooperate to run the network.

Okay. Wouldn't it be great if we could have tools that would verify or check all these properties for us automatically? There's a unique opportunity, I think, to do this now, forgive my cheesy graphic, because there's been this recent shift starting about 10 years ago where the networking industry has started to rethink how it builds and operates networks. Things like OpenFlow and software-defined networking have become really prominent, and for the first time in quite a while we have abstractions and APIs and languages that have a reasonably precise semantics. We can start to think about building tools that would check properties of the network automatically. That's what I'm going to talk about today: a tool we built for verifying networks.

But before I do that I want to say one thing, just a word of caution. Especially for someone working in programming languages, there's this natural view that we want to have software that's perfect, and we want to be able to build all of our software in a theorem prover and check that it works exactly as we specified. There are lots of ways that story can go wrong. This is maybe my favorite. Here's a bike where they've taken this very sturdy U-lock and then locked it to a cable lock that you could cut with a big set of clippers. I was thinking about this today, actually, because you probably heard there was a major exploit of, or a major weakness in, the protocol used to secure wireless access points. One of the things people were talking about on social media was, well, hey, this was actually verified by a team at CMU and Stanford.

The proof that they did of this protocol is, as far as I know, flawless. They did their job and they formally verified it. But in a paper that's going to appear at CCS in a few days, some other researchers in Belgium showed that you can actually compromise the effective end-to-end security properties of the protocol, because the model used for verification did not exactly match up with the realities of the world. I think this is just worth keeping in mind whenever you're doing verification: the pragmatics of verification are pretty important. Having a tool that doesn't rely on programmers starting out by writing down formal specifications, that kind of thing is important. Having a tool that's connected to real implementations, so that you're not verifying some idealized model of the system but actually verifying the code that runs in the system itself, is another important thing to take into account.

That's something that we've tried to do as we built this tool. I will tell you about a practical property-checking tool for programmable data planes that I've been working on for about six months with folks at Barefoot. My plan is this: I don't know the background of everyone here, and I was told there'd be a general CS audience, so I'll spend the first part of the talk giving you some background on software-defined networks and the P4 language. I'll do a quick mini crash course on program verification; for those of you who've seen this before, it'll be a little bit old hat. Then in the last part of the talk I'll tell you about our actual tool. Okay, what is SDN and P4? Let me start by going back to the traditional architecture that you see from the beginning of the internet up until about, say, 10 years ago.

Conventional networks are made up of devices, things like routers and switches and firewalls. Each of these devices actually has two separate pieces that do different jobs. At the top, shown in red, you have what's called the control plane. Think of it as an algorithm or a piece of software that runs on a general-purpose processor. It does things like run protocols such as BGP or OSPF that talk to other routers on the network and discover the structure of the network, compute paths through the network, and manage policy, like security policies or different policies that govern how packets should or should not flow through the network. Then below that you have the blue box, which is obscured by the red, but that's the data plane. The data plane is typically not a general-purpose computing device; it's some specialized computing device that can forward packets really fast.

It is programmable, in the sense that you have some way to configure it to have different behaviors, but the name of the game is speed. This thing needs to be able to move packets through the network as fast as possible. Its job is to take the directives that come from the control plane, say, forward packets to this destination along this path, and then realize them using some special-purpose hardware. Software-defined networking was a new architecture for networks, and this new approach actually has its roots in ideas going back more than 20 years. But the modern realization of SDN came about starting about 10 years ago. There are really two key ideas in SDN. The first idea is to try to generalize data-plane functionality and come up with a standard set of packet-processing constructs.

The idea here was that instead of having routers and firewalls and load balancers and content caches and all these different network devices that kind of look the same but do different things, you try to come up with a set of primitives that would let us build all of them. Maybe an x86 for packet processing. Then second, to actually split apart the data plane and control plane, so that instead of having to have a control plane running on every device, you can have the control plane separate from the device. You can maybe just run it on a standard server, so you could have a machine running some C code or some Python code or some OCaml code, express your algorithm directly in those languages, and then communicate the directives that you want to these programmable data planes. One thing that people often did, although it's not essential, is to actually centralize the control plane.

Instead of having one control-plane instance for every device, you have just one controller machine, or maybe a cluster of controller machines, managing a whole set of, maybe all of, the data planes in your network. This is interesting because it lets you think about how you express data-plane algorithms differently. You don't have to think about fully distributed protocols; you can start to think about ordinary programs that process packets centrally. Okay, this was the traditional view of SDN. There's one key flaw that made this idea not work as well in practice as it was originally envisioned. Although, if you go back and read the early papers, they wanted to generalize data-plane functionality, the specific standards that the community developed and coalesced around were not actually exposing much data-plane functionality.

If any of you have tried to use things like OpenFlow, it's very hard to go buy a switch that actually implements the more complex versions of the OpenFlow standard well. In practice, you couldn't actually program these data planes in very sophisticated ways. What happened is, about three or four years ago, a group of computer architects and hardware designers at Texas Instruments and at Stanford got together and said, "Well, what if we took that problem head on and built a device that was a data plane specifically for SDN?" What they came up with after about three years of work was a new chip that is very fast. It's actually one of the fastest switching chips you can buy today. It's fully programmable in a domain-specific language called P4.

Unlike the issues with OpenFlow that I described before, you can think of this as basically going one level deeper and giving you a language and a set of abstractions to express the data-plane functionality you want. Let me show you what this looks like in cartoon form but still with a bit more detail. This is a view of the architecture that Barefoot's chip is based on. It's derived from an academic paper published at SIGCOMM '13 on an architecture called RMT. At the top level, the architecture looks like two pipelines glued around something called the traffic manager in the middle. The idea is that packets come in from the left and get parsed, meaning that you extract from the bits that are in the packet the data you want to act on, the data you want to process. Then there's a pipeline of processing stages called the ingress pipeline.

Okay, I have animations here, so let me show you: the packet gets parsed and then it moves through these ingress stages. Each of these stages has a bunch of memories and a bunch of ALUs and a little bit of state that you can program, as if you're writing a C program over the parsed data in the packet. The packet flows through this ingress pipeline, and then in the middle it hits something called the traffic manager, which is a piece of hardware logic that does things like scheduling packets. Some packets might have low priority, some packets might have high priority, and you might want to divide up the bandwidth you have on the device differently amongst different traffic flows. That's what the traffic manager does.

Then, because you might want to make some decisions after you've scheduled the packets, there's a symmetric egress pipeline where again you can modify the header values. At the very last stage there's a deparser that basically does the inverse of parsing. It takes these structured header representations, reforms a packet, and sends it out on the wire. That's the architecture view of these modern programmable data planes. You could imagine just writing machine-level code for this device; if you prefer to do that, good luck. But alongside the RMT architecture, the same folks also proposed a domain-specific language called P4 that's designed to compile to this architecture and to other targets like FPGAs and software switches. P4 is basically a standard imperative language.
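The stage structure described above can be sketched in ordinary code. This is a toy Python model of the parse / ingress / traffic manager / egress / deparse flow, not the chip's actual interface: the function names and the one-byte "type" header format are invented for illustration.

```python
# Toy model of the RMT-style pipeline: parse -> ingress stages ->
# traffic manager -> egress stages -> deparse. All names are illustrative.

def parse(raw):
    """Extract a header dict from raw bytes (here: first byte is the type)."""
    return {"type": raw[0], "payload": raw[1:]}

def ingress(hdr):
    # Ingress stages may rewrite header fields; here we pick an output
    # port based on the packet type, standing in for a forwarding table.
    hdr["port"] = 1 if hdr["type"] == 0 else 2
    return hdr

def traffic_manager(hdr):
    # Scheduling decisions (priority, bandwidth sharing) happen here;
    # modeled as a no-op.
    return hdr

def egress(hdr):
    # Egress stages can modify headers after scheduling; no-op here.
    return hdr

def deparse(hdr):
    """Inverse of parse: reassemble the wire format from the headers."""
    return bytes([hdr["type"]]) + hdr["payload"]

def pipeline(raw):
    return deparse(egress(traffic_manager(ingress(parse(raw)))))
```

A packet that flows through without modification comes out byte-for-byte the same, which is exactly the parse/deparse symmetry the talk returns to later when discussing malformed packets.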

It has things like types you find in a language like C, it’s got standard control flow operators. It has a couple of domain specific features. It has some constructs for building these packet parsers. It also has a particular abstraction called a match-action table which is used to express the guts of the pipeline processing that happens. The main design principle behind P4 is that they wanted the performance of a P4 program to be predictable, at least when compiled to one of these RMT chips. The slogan is that you want to be able to do constant work in a constant amount of time. What that means for the language design is there’s no complex data types, nothing like a pointer. All the state is bounded and there’s no loops in the program. With these three ingredients you can convince yourself that a given program is going to have some maximum bound on how long it needs to process any packet.

That's good because it leads to predictable performance. One thing we like to say at Barefoot about our chip is that your program may not actually compile; you can easily write a program that uses too much memory or otherwise exhausts the resources available on the chip. But if it does compile, then you know that it's going to run at line rate. There's nothing variable or dynamic about the performance characteristics of the chip. The one key feature that I want to focus on in this talk, and it's not something you find in standard languages like C or OCaml or Java, is this idea of a match-action table. This is the bread and butter of the packet-processing pipeline. Here I've shown a simple declaration of an action and a table that are used in a program that implements an Ethernet switch. An action is nothing more than a procedure: it just collects up a group of imperative operations.

These might be primitives, these might be other actions and lets you invoke them with one name. In this case this learn action is calling a primitive that generates a digest or summary of the packet and sends it up to the control plane so that it can make decisions about what to do with these packets. Then a table, I’ll draw it pictorially like that. A table expresses the schema of one of these routing tables that’s implemented internally with some of the memories and some of the ALUs on the chip. A table has to specify what values from the state of the program should be read by the table. For example, we might want to feed the Ethernet source address bits into this table. It specifies how those bits should be matched.

Here we're just doing exact matching, meaning that when packets come in, we're going to look them up against the entries in the table and only find a match if the bits are exactly the same. Then it specifies which of these actions, or procedures, can be run on packets that match in the table. In this case there are just two: this learn action and a nop action that does nothing. We can also specify things like the default action if a packet misses in the table, the size of the table, and so on. When you're writing a P4 program, you're basically defining a control flow over lots of these tables, and these compile to hardware resources that implement a table. But the key thing to remember is that this program (you see I've filled in some dummy values here) doesn't actually specify things like MAC address one, MAC address two, and the fact that on packets with MAC address one the learn action should be applied.
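To make the match-action abstraction concrete, here is a minimal Python sketch of its semantics: exact-match lookup on a key, running the bound action on a hit, and running the default action on a miss. The Table class, the digest representation, and the entry-installation method are hypothetical simplifications, not the real P4 or control-plane API.

```python
# Sketch of match-action table semantics. Entries are installed at run
# time by the control plane; the data-plane program only declares the
# table's schema (what to match on, which actions may run).

def learn(pkt):
    # Stand-in for the 'learn' action: record a digest of the packet
    # that would be sent up to the control plane.
    pkt.setdefault("digests", []).append(pkt["eth_src"])

def nop(pkt):
    # The do-nothing action.
    pass

class Table:
    def __init__(self, default=nop):
        self.entries = {}          # key -> action
        self.default = default     # action to run on a miss

    def insert(self, key, action):
        # Control-plane API: the P4 program itself never writes entries.
        self.entries[key] = action

    def apply(self, pkt, key):
        # Exact match: run the bound action on a hit, default on a miss.
        self.entries.get(key, self.default)(pkt)

smac = Table(default=learn)        # unknown source addresses trigger learning
pkt = {"eth_src": "aa:bb:cc:dd:ee:ff"}
smac.apply(pkt, pkt["eth_src"])    # miss, so the learn action runs
```

Once the control plane calls `insert` for that address, subsequent packets hit in the table and the `nop` action runs instead, which mirrors how MAC learning converges in a real switch.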

The contents of the tables get filled in by the control plane. That's not something that appears in the P4 program. That'll be important as we keep going. I don't want this to be a P4 tutorial, so I'm not going to teach you line by line how to write P4 code; I'm going to try to keep it a little more abstract. But just to give you a macro view of what a P4 program looks like, here's a program you could write that implements, in around a few dozen lines of code, most of the functionality you'd find on an Ethernet switch. The program consists of some type declarations. We declare the type of Ethernet headers; we also declare some additional metadata that this program is going to use internally to process packets. Then we define a packet parser. In this case, the parser is trivial. It's just going to populate the Ethernet header that's declared above.

Then we define the actions. These are the primitive manipulations on packets that we're going to use in our overall pipeline. We define the tables. Then the main of our program is these controls. Here, I just have two controls, one for the ingress pipeline and one for the egress pipeline, and these just do things like apply the tables in some order. I can also use conditionals or other control flow if I only want to apply certain tables under certain conditions. That's what a P4 program looks like. A lot of what people are doing with P4 is actually nothing so exotic: just having a way to describe, in a standard programming language with a standard semantics, what these different devices do, how they process packets. I won't talk about this so much today, but there's something else you can do with P4, and I'll just point you to a paper that is being published at SOSP, the operating systems conference, I think next week.

If you have this programmable architecture and you have a domain-specific language, you can do more than just re-implement standard protocols. One thing that we did last winter is to actually build a cache that runs on Barefoot's chip. You can think of this as something you might put in front of a bunch of, say, memcached servers. What the P4 program does is parse the requests that clients make to a simple key-value store, and then it uses the memories on the switch to actually cache the values in the key-value store. You can use this to accelerate the caching process, because when there's a hit, the switch can just return the answer right away and doesn't even have to go to a server. That's just one example of a more exotic thing you could do using the P4 language and one of these programmable chips. Okay.

Obviously, maybe I've convinced you, but obviously I think making switches more programmable is a good thing. We're opening up a layer of network systems that hasn't really been accessible to ordinary developers for a long time. But of course, it also opens up lots of new possibilities for errors. If you want to verify network-wide properties like we talked about before, you need to understand at a fairly low level of detail what each device is doing. And if each device's behavior is specified in P4, it might be doing more complicated or even crazy and bogus things. As an example of the errors that can arise in P4 programs: P4 has a type system, and it has this notion of a header, which is basically a struct. But headers may or may not be valid. We'll see examples of this in a little bit.

What that means is it's much like how in Java, if you have a variable whose type is some object type, that variable might actually contain an object or might contain null. You can think of headers the same way: they can actually contain the bits in the header, or they can be invalid. What happens in P4 if you read or write an invalid header is that the value you get back is actually undefined. I'm personally not so happy with that part of the language's design, because in general undefined values, let alone undefined behavior, are a bad thing, and we've seen this through years and years of exploits involving undefined behavior in languages like C and C++.

But the reason that this part of the language design was done that way is that dynamically checking if a header is valid can incur some cost and this is the usual argument for unsafe features and people felt like requiring safety was going to be too heavy a cost. There’s lots of other errors that can arise in P4 programs. There’s a notion of an exception but what happens if the exceptions aren’t handled? There’s this packet metadata I showed you a little piece of it for the Ethernet switch but in general packets can accumulate other state as they’re being processed and there’s some special state that’s used to control how the packet gets scheduled, or how it gets output. If you use that metadata improperly you can end up again with incorrect or even undefined behaviors. Another low level error you can make is you can write the wrong parsers and deparsers.
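One way to picture the header-validity hazard described above is to model a header as a struct plus a validity bit, where reading a field of an invalid header yields an arbitrary value. This Python sketch only illustrates the semantics being discussed; the class and method names are invented, and modeling an undefined read as a random value is a deliberate simplification to make the hazard visible.

```python
import random

class Header:
    """A struct plus a validity bit, loosely modeling a P4 header."""

    def __init__(self, fields):
        self.fields = dict.fromkeys(fields, 0)
        self.valid = False

    def extract(self, values):
        # Like a parser's extract: populate the fields and mark valid.
        self.fields.update(values)
        self.valid = True

    def read(self, name):
        if not self.valid:
            # In the P4 semantics described in the talk, this read is
            # undefined; returning a random value models that hazard.
            return random.getrandbits(16)
        return self.fields[name]

vlan = Header(["vid"])
# A safe program guards every read with an explicit validity check:
vid = vlan.read("vid") if vlan.valid else None
```

The guarded read at the bottom is the kind of dynamic check the talk says people resisted for cost reasons, and it is exactly what a static validity analysis lets you avoid inserting at run time.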

You can imagine running a program that parses packets one way and then deparses them a different way. Now, as packets go through your pipeline, you're producing malformed packets. That's obviously not good. Something else: there are all these standard control-flow and reachability properties that you might be familiar with from general-purpose languages. The same thing is also important in P4. If you write a program that has, say, an access-control table and you want every packet to go through that table, it's easy to screw things up and have some packets not go through it, so you might not actually be implementing the security policy you thought you were. Let me show you a quick example and then I'll do a little demo of our tool. This is a picture of the parser for a program called switch.p4.

This is an open-source program that the P4 Language Consortium maintains that attempts to implement all the features of a modern switch that you might buy from a vendor like Cisco or Huawei or Juniper. This is the state-machine view of the parser, and it's probably too small to see in the back, but you can see it's a big beast. It's got, I don't know, roughly 50 states, and it's got a fairly intricate transition structure, because this parser has to handle every different kind of packet. Ethernet is definitely there, but the packet could have VLANs or not have VLANs, or it could have a stack of VLANs. It could be IPv4 or IPv6. It could have different tunneling formats, and so this parser takes all of those different protocols, parses packets, and produces headers depending on what kind of packet you have.

Obviously, you'll notice some paths through this parser that cause some of the header variables in the program to not be valid. If my packet does not have a VLAN tag, then I'm going to go through the parser never having populated that data, and now if I try to access it in my program I can end up with an undefined value and, from that, undefined behavior. P4's type system doesn't really give you a good way to precisely document when headers are valid and when they're not, so you can end up with programs that do the wrong thing. I thought I'd show you a quick demo of our tool and then tell you a bit about how it works. I need to, let's see, actually mirror my screen so I can see what I'm doing. Okay. Is that visible in the back? It's big enough? Good. This is a P4 program that's a little bit more complicated than the switch I showed you before.
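The point about parser paths can be sketched as a tiny state machine. In this Python toy, each "state" extracts a header and branches on a field to pick the next state; on the path with no VLAN tag, the vlan header is never populated, so it stays invalid, just like the switch.p4 paths described above. The EtherType constants are real; everything else is a simplification.

```python
# Toy parser state machine: which headers end up valid depends entirely
# on the path taken through the parser.

ETHERTYPE_VLAN = 0x8100
ETHERTYPE_IPV4 = 0x0800

def parse_packet(ethertype, inner_type=None):
    """Return a validity map: header name -> was it populated?"""
    headers = {"ethernet": True, "vlan": False, "ipv4": False}
    if ethertype == ETHERTYPE_VLAN:
        headers["vlan"] = True
        ethertype = inner_type       # the VLAN tag carries the inner type
    if ethertype == ETHERTYPE_IPV4:
        headers["ipv4"] = True
    return headers

# Two packets, two different sets of valid headers:
plain = parse_packet(ETHERTYPE_IPV4)                   # vlan stays invalid
tagged = parse_packet(ETHERTYPE_VLAN, ETHERTYPE_IPV4)  # both valid
```

Any downstream code that reads `vlan` fields without checking the validity map is exactly the hazard the talk is describing, and with 50 states instead of 3 the set of reachable validity maps becomes very hard to track by hand.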

I'll step through it quickly like I did for that program. It implements not an Ethernet switch but, in this case, a layer-three router for IPv4. I'll just show you the code and explain what's going on, but I won't pronounce every single line. First we have some declarations of the header types, Ethernet and IPv4. They're basically giving all the bit lengths that you can find in various standards. Then we have the parser, and interspersed with this parser are also declarations of the global state of the program. There is a header whose type is ethernet_t and whose name is ethernet. That's now a global variable you can manipulate in your program. What the parse_ethernet state, which is the entry point of the parser, does is extract this header. What extract does is take the bits that are in the packet sitting buffered on the wire and fill in these fields with those bits.

If a packet is too short, then we get an exception. Then what this parser does next is branch on the type field of the Ethernet packet, and if that type is a certain value, 0x800, then we know that the next header to come in the packet is an IP packet, so we go off and call parse_ipv4. If it's not, then we have a packet of some other layer-three protocol and we just jump to the ingress pipeline. Already you can see there are two different types of packets that could be going through this program: a plain Ethernet packet, and an Ethernet packet followed by IPv4. I'll skip forward a little bit faster. Now we have ipv4, again a global variable, to store the IPv4 header. There's some stuff involving checksums, which I'll just skip over, but there are features in P4 that let you do things like calculate checksums, which are of course important for detecting malformed packets and so on.

Here’s the parser for the ipv4 header. Then we have the actions and tables that make up the heart of the program. Let me not go through this because it’ll take quite a while. But let me just jump now to the bottom. What these things are doing things like matching on the destination IP address in the packet and then setting the MAC addresses to be the MAC addresses of the switch instead of from the host. That’s how a router normally works. That’s what all this is doing. Okay. Now here we have the main procedures for our program, these two controls. You can think of these controls as stringing together different pipelines of table applications under certain conditions. What the ingress control is doing here is it’s testing if the ipv4 time to live field is greater than zero and if it is then it applies a table that looks up the destination address and then forwards the packet out the corresponding port.

Trust me, there are two tables that do that. Then on the egress side we unconditionally invoke a send_frame table that just rewrites some of the MAC addresses. Okay. The logic of this program isn’t super important; what I want to show you is how we can use this tool — whose mechanism I’ll explain — to find bugs. The first thing I’m going to do, let’s see, this was simple_router, is just run the tool on the program as it is, and you’ll see that it’s parsing the program and converting it into a different form. In this case I haven’t actually asked the tool to check any properties, so verification trivially succeeds. Now let me throw in a flag, -val, and show you what it does. What -val does is it basically takes P4 from being an unsafe language, where accessing invalid headers gives you undefined values, to being a safe language.

Because it’s going to insist that for every single access of a header, the tool can statically show that the header is valid. If I do this then verification fails. One thing we can also do is ask the tool to give us a counterexample. Now it fails to verify the program and it constructs a trace through our program: the sequence of parser states that got run, the values of the packet that the parser populated those headers with, and then the sequence of tables and actions that got executed up until the point where some assertion in our program failed. In this case, the assertion that failed was that the IPv4 header checksum was not valid when accessed. We can go back in and sure enough there’s this piece of code here, which I’m not going to explain in full detail, but what it says is basically: verify the IPv4 header checksum in the parser and then update it in the deparser.

The problem is that there are two different shapes of packet in this program: the Ethernet-only packets and the ones that have Ethernet and IPv4. If we have an Ethernet-only packet, there’s some code that gets run here that tries to manipulate bits that don’t actually exist. The way we can fix this is to add a little conditional, so that these procedures only get run if that header is valid. Let’s go back and try to verify this. Okay, that’s not good. We get a different counterexample now. Verification still failed — same packet containing all zeros — and now we see it’s trying to execute the ipv4_lpm table. Let’s go back and look at where that happens. In our control pipeline, it’s right here that the ipv4_lpm table got run, and we can see that we actually ran that thing and read a value when the packet may or may not have a valid IPv4 header.

We can fix this by adding a similar guard here. Sorry. Now verification succeeds. We’ve basically just hardened our program by adding some dynamic checks, and now every control path through this program only accesses valid headers. Let’s do one more little example. Another property that’s important to check concerns the bits in these headers and metadata flowing through the program: there’s some special metadata, called the standard metadata, that determines how the packet is actually forwarded. In particular, programs are supposed to either set the output port or drop the packet; you should always do one of those two things. But if you go and read the P4 spec carefully, the field that represents this directive — whether to drop the packet or send it out a particular port — is actually undefined initially.
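The guard just described can be sketched like this, again in P4_14 syntax; the table names follow simple_router.p4 and are assumptions about the on-screen code:

```p4
control ingress {
    // Only touch IPv4 state when the parser actually extracted the header.
    if (valid(ipv4)) {
        if (ipv4.ttl > 0) {
            apply(ipv4_lpm);   // choose egress port and next hop
            apply(forward);    // rewrite the destination MAC
        }
    }
}
```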

The value could be anything: it could be the value that the last packet had, or it could be random bits that the device chooses to put there. If we go back to our program, there is a piece of code that sets the egress specification, which either forwards the packet or drops it. In this case we forward it, and that happens in this ipv4_lpm table. But the property that we might want to check is that at the end of the ingress pipeline — no matter what packet we parsed, no matter what happened in the ingress pipeline — the traffic manager needs to know what the egress spec is. That thing should always have a defined value at that point. I guess the answer is obvious, but in the interest of maintaining suspense let’s just see what happens.

Okay, now we can try to verify, and we fail. What we see is that the assertion that failed in this case — an assertion that was added automatically by our tool — says, “We reached the end of the ingress with an undefined standard metadata egress spec.” We’ll go through the trace, but the problem is that you can see the ingress is a nop in the case where the packet is not a valid IPv4 packet or where the TTL is zero. We can fix this. Let me type this in. We can add a table — it’s a dumb table — that just unconditionally drops non-IPv4 packets. It doesn’t read anything, so it always matches everything, and it just runs the drop action. If we add an else branch to this conditional that always applies drop_non_ipv4, now verification succeeds. Okay. That’s a little demo of how the tool works on a simple, almost toy program.
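The fix just described might look like this — again a P4_14 sketch with assumed names:

```p4
action _drop() {
    drop();
}

// No reads clause: a single default entry matches every packet.
table drop_non_ipv4 {
    actions { _drop; }
}

control ingress {
    if (valid(ipv4) and ipv4.ttl > 0) {
        apply(ipv4_lpm);
        apply(forward);
    } else {
        // Guarantees egress_spec is always defined: drop everything else.
        apply(drop_non_ipv4);
    }
}
```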

But actually this router — you can use it to do real stuff. Okay. I want to spend a few minutes talking about program verification and the long history we’re building on to build this tool. How many of you have seen Hoare logic or predicate transformers before? Okay, not that many. Great — this section will hopefully make sense for you. Our approach builds on some very old work, going back to Tony Hoare in the late 60s and Dijkstra in the mid 70s, on how you can build and reason about programs using logic. We’re actually using a particularly simple form of it, because P4 programs are kind of simple. But this approach is what underpins pretty much all modern program verifiers, including things like Facebook’s Infer tool, which is based on something called separation logic.

There’s a group at Uber that’s been working on program analysis that’s also based on this essential approach. So even if you’re not interested in P4, I think you’ll learn something from this. The way our tool works is shown here on the left. We take the P4 program and essentially desugar it into a language that Dijkstra introduced in his 1975 paper, called guarded commands. This is basically an idealized imperative language. Then from the guarded commands we’re able to generate a first-order logic formula with the property that if the formula is valid — meaning it is true under any possible model, any possible interpretation of the symbols in it — then we know that none of the assertions or other properties that might be in our code will ever fail. We’ve verified that our program is correct.

The way you can think about this approach is that we’re interested in proving what are called relational properties of programs. People use this funny Greek notation to describe this reasoning. The way I think of it is: I have some program given by a single command C — think of it as the main invocation of your program. Then I have a precondition that captures assumptions about the state of the computation before I start running C. This might be “the variable x is not negative,” or “the variable y has the value 17.” Then I have a postcondition, and this describes the state of the system after my program executes. When I write this funny double turnstile, what that means in the program verification literature is that this precondition, this program, and this postcondition go well together — sorry, I changed my symbols to Greek but left the ASCII here; that precondition should be a psi.

The sense is: if the precondition psi holds, and the command executes and halts, then the postcondition phi holds. Okay. I’ll show you just a little bit about how you can check these properties automatically. This is the language that Dijkstra proposed in that original paper, minus loops — I’m going to leave off loops because P4 doesn’t have loops. The way this language is structured is: there’s some language of expressions — I’m not going to worry too much about what those are, but it will include things like integer constants, variables, addition, multiplication, and other things. Then I have first-order logic formulas over equality constraints and possibly other relations on my expressions. This is totally standard: I have conjunction, disjunction, implication, negation, quantifiers. This lets me write down interesting predicates over the state of my program as revealed through these variables.

Then my programs are just simple imperative programs. There are two ways that formulas can get into my program. I assume you’re all familiar with assert, because that’s something that appears in a lot of languages: you can assert something at a point in your program, and if the formula does not hold at that point, then you get some exception — exactly the same here. I also have assume, and the way to think about assume is that it’s like an assert: I write down a formula, and if the execution ever hits that assume and the formula is not true, then it’s as if that execution never happened — or the execution went into an infinite loop, however you want to think about it. Basically, if I write this down I’m saying that executions which violate this property I’m not going to consider when I’m verifying the program. If you assume false, that’s making a very strong assumption.

But if you assign x to three and then assume that x is positive, that assumption is fine. Then we just have assignment, sequential composition, and then a funny form that’s sometimes called select. This is a non-deterministic choice: it executes C1 or C2 non-deterministically. You might wonder — this doesn’t really look like a standard imperative language — but you can encode standard constructs like conditionals in terms of select. To encode the conditional “if B then C1 else C2,” we would assume B on the left branch and then run C1, and assume not B on the right branch and then run C2. Since assume discards executions that don’t satisfy the predicate, this will just run the branch where the assumption is satisfied. Okay. We write our program in this guarded-command language; then the question is, how can we verify these properties?

It turns out there’s an elegant technique for automatically generating formulas that are sufficient to show that these relational properties hold. It’s often called predicate transformer semantics, or weakest preconditions. The idea is: I start out with a program, encode it into one of these commands, and I have a postcondition phi. (If you haven’t written down a postcondition, just think of it as being true.) Then I have a syntactic translation which is going to produce for me another formula: the weakest precondition of C and phi. The formula it produces will be such that the triple consisting of the weakest precondition, the command, and the postcondition is valid. In other words, if we ever start in a program state that satisfies the weakest precondition and we run C and C halts, then we end up in a state satisfying phi.

This is the definition of the translation that does that for you. I’ll just step through the cases one by one. If you’ve seen this before it will be old hat; if you haven’t, it will probably go too fast — but it should give you a sense that this is a simple syntactic translation. For assumptions, we’re assuming that some psi holds and our postcondition is phi, so this just becomes an implication: psi implies phi. For assertions, we’re asserting that psi definitely holds and our postcondition is phi; instead of the implication we conjoin them — it needs to be the case that psi and phi both hold. For sequential composition, we just compose the weakest preconditions. For assignment, we take the postcondition and substitute the expression that we’re assigning for x. Then for choice, it has to hold on both sides.

The weakest precondition of a choice is the conjunction of the weakest preconditions of the two sides, computed independently. Okay. This is a classic technique, but it turns out that if you do things this way you can end up with preconditions of exponential size. That’s a problem if you want an automated verification tool, because it would take a long time to solve the formulas. But there’s a really beautiful technique due to Saxe and Flanagan from about 20 years ago, where they came up with a slightly more complicated but equivalent way of producing these weakest preconditions that is polynomial in the worst case — I think it’s actually quadratic or quartic. That more efficient version is what our tool uses. Okay: once we’ve computed the weakest precondition, to finish the verification task we need to check whether this formula is valid.
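To make the translation concrete, here is a minimal sketch in Python of the naive weakest-precondition transformer just described, over a tiny loop-free guarded-command language. The representation is invented for illustration — p4v itself is written in OCaml and works over the real P4 program:

```python
from dataclasses import dataclass
from typing import Any

# Commands of Dijkstra's loop-free guarded-command language.
@dataclass
class Assume:
    psi: Any
@dataclass
class Assert:
    psi: Any
@dataclass
class Assign:
    var: str
    expr: Any
@dataclass
class Seq:
    c1: Any
    c2: Any
@dataclass
class Choice:          # non-deterministic select
    c1: Any
    c2: Any

# Formulas/expressions are nested tuples, e.g. ('>', ('var','x'), ('lit',0)).
def subst(phi, var, expr):
    """phi[expr/x]: replace every occurrence of variable var by expr."""
    if phi[0] == 'var':
        return expr if phi[1] == var else phi
    if phi[0] == 'lit':
        return phi
    return (phi[0],) + tuple(subst(a, var, expr) for a in phi[1:])

def wp(c, phi):
    """The naive, worst-case-exponential transformer; the Flanagan-Saxe
    construction is an equivalent but compact alternative."""
    if isinstance(c, Assume):
        return ('imp', c.psi, phi)        # psi ==> phi
    if isinstance(c, Assert):
        return ('and', c.psi, phi)        # psi /\ phi
    if isinstance(c, Assign):
        return subst(phi, c.var, c.expr)  # phi[e/x]
    if isinstance(c, Seq):
        return wp(c.c1, wp(c.c2, phi))
    if isinstance(c, Choice):
        return ('and', wp(c.c1, phi), wp(c.c2, phi))
    raise TypeError(c)

def holds(phi, env):
    """Evaluate a quantifier-free formula in an environment."""
    op = phi[0]
    if op == 'var': return env[phi[1]]
    if op == 'lit': return phi[1]
    if op == 'not': return not holds(phi[1], env)
    if op == 'and': return holds(phi[1], env) and holds(phi[2], env)
    if op == 'imp': return (not holds(phi[1], env)) or holds(phi[2], env)
    if op == '>':   return holds(phi[1], env) > holds(phi[2], env)
    raise ValueError(op)
```

For example, for the program `x := 3; assert x > 0` with postcondition true, the computed precondition evaluates to true; replacing the constant with -1 makes it false, exactly the failure an SMT solver would surface as a counterexample.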

We can do that using SAT and SMT solvers. There’s a nice duality between validity, which means that a formula holds under every interpretation, and satisfiability, which says that the formula is true under some particular interpretation — that there is some model of the formula. We want to check validity, but we have a tool that can check satisfiability, so we just negate the formula. We ask if the negation of the formula is satisfiable, and if it is, then we have a counterexample to validity. For the backend of our tool we use the Z3 SMT solver developed at Microsoft Research. We take the negation of phi and hand it to Z3, and Z3 either says yes, the formula is unsatisfiable — in which case we’re happy, because verification succeeds — or it says no, it’s satisfiable, and it actually builds a model of the negation of phi, so we can then probe that model to generate these counterexample traces.
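The validity/satisfiability duality is easy to state in code. Here is a toy sketch with brute-force enumeration standing in for Z3 — formulas are Python predicates over an environment, and this only works for boolean variables, whereas an SMT solver handles much richer theories:

```python
from itertools import product

def satisfiable(formula, variables):
    """Is there some assignment (a model) making the formula true?
    Brute force over booleans; Z3 plays this role in the real tool."""
    return any(formula(dict(zip(variables, bits)))
               for bits in product([False, True], repeat=len(variables)))

def valid(formula, variables):
    """A formula is valid iff its negation is unsatisfiable."""
    return not satisfiable(lambda env: not formula(env), variables)

def counterexample(formula, variables):
    """If the formula is not valid, return a falsifying environment."""
    for bits in product([False, True], repeat=len(variables)):
        env = dict(zip(variables, bits))
        if not formula(env):
            return env
    return None
```

So `valid(lambda e: e['p'] or not e['p'], ['p'])` is true, while `counterexample(lambda e: e['p'], ['p'])` produces the model `{'p': False}` — the analogue of the counterexample traces the tool prints.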

Okay. That’s the general way all these program verifiers work, and ours is no different. The one thing that’s different from most program verifiers is that a P4 program is really only half a program. It describes the schema and data flow of one of these packet-processing devices, but the contents of the tables are completely unknown. One thing you could do is make pessimistic assumptions and say, well, the control plane could really do anything — it could put any rules it wants in these tables — and I’ll try to verify my properties under all possible conditions. But in practice, there are often unstated assumptions in these P4 programs about how the tables are going to be programmed. To make this approach practical we need some way to document assumptions about how the control plane is going to fill in these tables.

Here’s a little example that illustrates that. It’s a silly program, but this is real-world code that I extracted from that giant switch.p4 program. This is the code that is supposed to take singly tagged or doubly tagged VLAN packets and produce an untagged packet. This is a transformation on packets that usually happens at the last hop before the packet goes to a host. The programmer here has nicely set up a couple of actions. For singly tagged packets, it copies the etherType from the nested header into the Ethernet frame and then removes the VLAN header at index zero. If it’s doubly tagged, it basically does the same thing twice. Then they have this table that is supposed to implement decapsulation. What it does is it reads the validity bits of these VLAN headers — there may be between zero and two of them — and then it applies one of three actions: nop, remove the single tag, or remove the double tag.

The property we might want to verify for this program — maybe it implies some other properties, like headers being valid or the egress spec being set — is that after this table is applied, it’s not the case that either of these VLAN headers is valid: the packet has been fully decapsulated. Unfortunately, if you try to verify this, it will fail, because there’s nothing to stop the control plane from filling in silly rules. It could match a packet that has both VLAN tags valid and then do nop, and obviously that’s not going to decapsulate the packet. The way that we handle this is we want to be able to say things like: I know that my control plane is going to put the right three rules into this table, rules that match up each combination of these [inaudible 00:54:57] with the appropriate action.

We have this interplay between the control plane and the data plane that we need to somehow capture to be able to do effective data-plane verification. What we do is a standard trick: we instrument our program with additional state. We call it zombie state, but it’s really a form of what’s traditionally called ghost state. Basically, we instrument the program so that we record the trace of the packet as it goes through the program. We generate a pseudo-header type that captures the order of table applications, whether each table was hit, which action was applied, and which data was read, and then we instrument each of the actions and tables to populate that data appropriately. Now we can say things like: that table should be hit, or if the packet has two valid VLAN tags then you have to apply the remove-double-tagged action.
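In P4_14 syntax, the instrumentation might look roughly like this. All names here are invented for illustration; the real zombie state covers every table and action in the program:

```p4
// Ghost ("zombie") metadata recording what the pipeline did.
header_type zombie_t {
    fields {
        decap_applied : 1;  // was the decap table applied?
        decap_hit     : 1;  // did it hit an entry?
        decap_action  : 2;  // 0 = nop, 1 = single tag, 2 = double tag
    }
}
metadata zombie_t zombie;

// Each action is instrumented to record that it ran, e.g.:
action remove_single_tagged() {
    modify_field(ethernet.etherType, vlan_tag_[0].etherType);
    remove_header(vlan_tag_[0]);
    modify_field(zombie.decap_action, 1);  // ghost update only
}
```

Control-plane assumptions then become ordinary assertions over this ghost state — for instance, “if both VLAN tags were valid, then zombie.decap_action must be 2.”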

With this we can make verification go through for programs that depend on something from the control plane happening. Okay, I’m running out of time; let me wrap up by describing where we are and where we’re going. Like Ron, I snuck OCaml into Barefoot — they were a shop that did everything in Python and C++, and I convinced them that for writing these analysis tools OCaml is a much better tool, and we build heavily on libraries like Core and Async. The front end handles the full language, and as I’ve shown you, we have a tool that can verify small programs and do things like produce human-digestible counterexamples. We have a bunch of optimizations that I didn’t really have time to talk about; they’re needed to make verification scale.

This is actually — I’ll just say one word about this — one of the things that I think is neat about this domain: unlike a lot of other program verifiers, our goal is to have essentially zero annotations needed to use the tool. A lot of other program verifiers require programmers to do things like annotate functions or methods in order to do modular verification of programs. Our conjecture is that even the largest P4 programs are tens of thousands of lines of code, and SMT solvers are really fast, so if we are a little bit smart about how we produce these verification conditions, we can have a tool that doesn’t require any extra annotations. That’s important for making this usable by non-experts. Another important thing that you saw is that we have a bunch of flags that can automatically check for standard safety properties of programs: it only accesses valid fields, arithmetic doesn’t overflow, things like that.

We can handle the largest program that Barefoot has in well under a minute with these optimizations. Some things we’re currently working on — future or ongoing work — are tools that can automatically synthesize these control-plane assumptions. The one place where we sometimes have to add annotations to the program is in assuming things about what the driver and control plane will do. There’s a really nice technique used in a lot of software synthesis tools of having a loop that repeatedly tries to solve the task — in this case, verify the program. When you fail, you get a counterexample, you try to generalize or learn from that counterexample and produce a new set of assumptions, and you repeat until verification goes through.

By restricting our vocabulary of control-plane assumptions to a very small subset of logic, we think that for most programs we can actually synthesize, or propose to the programmer, these control-plane assumptions automatically. We’re also very interested in using this for automatic test-case generation: producing packets and control planes that exhaustively explore the behavior of a P4 program. Something else that falls out, and that we’ve just started to explore, is that we can actually use this to decide equivalence of P4 programs. Imagine you’re an engineer at a company using P4: you have some P4 program that’s been deployed and now you want to add some new feature. You might like to know that the new feature doesn’t change the behavior of the program, or that if it does, it’s in some nice way.

We’re starting to work on tools that can do that kind of equivalence and differential testing automatically. Okay, just to wrap up: networking has gotten really exciting in recent years, and especially the interplay between programming languages and networking has gotten to be super fun. The way I think about it is that really for the first time in 20 or 30 years, the techniques and approaches that we use in the software domain are starting to be applied to networking. In particular, language-based approaches — defining abstractions precisely, having compilers and runtimes that map between those abstractions — are really the way that networking is being done. There’s a real chance for people in the PL and formal methods communities to start applying their tools to this domain.

There’s still a lot of work to do. Some things go beyond what we’re doing right now, like verifying properties that involve multiple devices and shared state. That’s a very hard problem, but again, I’m a little bit optimistic that in this domain things are simple enough that we might make some headway on it. There are also quantitative properties — things like reasoning about congestion or load balancing — that go beyond the simple reachability analysis we’re doing. Then a big area that I’m excited about is: how can we make these tools usable by real non-experts?

There are reams and reams of papers developed by the formal methods community, and a lot of them end up dying because they don’t address this critical issue: how do you get programmers — or, in the case of networking, network engineers who are not software developers — to be able to use these formal tools? Let me stop there and just acknowledge my great collaborators at Barefoot and at Yale. Bill Hallahan was an intern this summer at Barefoot, from Yale, and he built large parts of the backend of our tool. Then JK Lee, Cole Schlesinger, Robert Soule and Han Wang are all collaborators in this work. All right, I’ll stop there and take questions.

It looks like you’re providing this tool with specific inputs: you provided one input and verification passed, you provided another input and it failed. My understanding of formal verification is that it’s supposed to prove that under any theoretical input the program will behave as expected. What good is it to say, well, if I input three I get nine, but with four I don’t know?

Back up. First, you can model a P4 program — in the absence of state — as really just a function on some giant bit vector. The packet comes in as some big bit vector and what comes out is some big bit vector. For a single device, things are actually nice and decidable in that domain, and we are doing exhaustive verification: we are considering, abstractly and symbolically, every possible execution of this program. The only place where we make some assumptions about specific behaviors is in programs like this one, where the program is not actually correct — it does not have this property. But for reasons usually having to do with how resources are allocated, the programmer chose to write things this way. We could have a long discussion on why they do that.

But it’s not dissimilar from what might happen in C if you were, say, using arrays: maybe you want to check the bounds on those arrays, but for efficiency reasons the programmer wants to omit those bounds checks, because maybe elsewhere they’ve checked that the inputs have some special property which means the bounds will never be violated. The same thing is going on here. The only place where we make some extra assumption that cuts down the set of traces we think about is when we do things like assert that a table will hit — meaning it will have some entries that match all packets — or even that certain combinations of these bits will be paired up with particular actions. Then there’s a question of how you could validate those assumptions.

In our case, the way we think about this is that there’s a runtime monitor that you want to wrap the control plane with. We haven’t built this yet, but essentially, from all of the assumptions you make about the control plane, our tool would emit a little shim that would sit between the data plane and the control plane, and if you ever tried to poke the data plane with messages that cause it to violate those assumptions, then the runtime monitor would raise an exception and say your program did something bad. Or, if you were to start building verified control planes — which we’ve done at Cornell, but it’s not a big area yet — you could actually verify the software running in the control plane and show that it has those properties.

All right. Thank you.

Where does queuing come into the model? Do you handle that here, and what are the effects on metadata and the number of states?

Queuing right now is done in this traffic manager. The only way that the P4 program can communicate with the schedulers and the queues is through some metadata. There’s some special standard metadata that can do things like select a particular class of queue or, on the other side, read what decision was made by the scheduler. For us right now, those are just other bits that we can write down properties of. Now, it would be very interesting to be able to do things like show that for any distribution of input packets, my P4 program is going to correctly implement weighted fair queuing.

So you didn’t have hotspots.

Exactly. In principle, you could use this tool to do that, but there’d be a lot of extra reasoning that goes on outside: you have to somehow encode your distribution of input packets, see what distribution of outputs it induces, and check that things are balanced. With my Cornell hat on, that’s actually something we’re working on quite a bit, but this tool doesn’t directly give you that reasoning. I think it’s a great area to work on in the future because — I drew this as a big red box for a particular reason.

In the RMT architecture this thing is not actually programmable. If you have some new queuing algorithm you want to implement — right now, in most of the targets that implement P4, this is a fixed-function block that implements a set of queuing algorithms. There are recent research papers, in the last year and a half, proposing architectures for programmable packet schedulers. My guess, my hope, is that in a year or two we’ll start to see new parts of P4 that let you express those scheduling algorithms, and then verifying their properties would become interesting.

From my perspective as an ML programmer, looking at the way P4 works, it feels like a lot of the things you do in your static analysis are things I would normally have thought of as things I might do directly in the type system. I’m wondering: if you had a free hand to design the system from scratch, how much of the verification you’re doing would you bake into a fairly traditional ML type system, or maybe other standard testing, and how much do you think is really naturally better solved by [inaudible 01:07:22]?

Actually, I started out in January thinking about this as a type system. I was really inspired by Greg Morrisett’s work at Cornell from the early 2000s on typed assembly language — they proved a bunch of really cool properties of assembly language just using standard simple and polymorphic types. I thought we might do the same thing here, and I actually went a fair way down that path, but what we discovered is that the actual programs people are writing today are not particularly amenable to a compositional type analysis. To get a tool that current P4 programmers would use, it seemed like this program-logic verification approach was better. They’re relying on invariants that are not captured by a simple type system.

To answer your question: if I were to go back to the start — and false implies anything — I don’t think I have the luxury to do that, but I do think there are ways you could imagine a type-safe or memory-safe packet-processing language that would still be as efficient as what you can do with something like P4, and then some of these things would fall out. I think a lot of the stock safety properties you might just get from the type system, and the only cost would be perhaps annotating some additional parts of your program with types, or doing some runtime checks if you need to step outside the bounds of the type system. It’s a good idea. I would have liked to do things that way.

Does P4 allow you to define recursive actions?

No. There are no loops in the program, with one exception: parsers can have loops, but the P4 compiler in most implementations unrolls those loops statically. It figures out the maximum-length packet that could be used to populate those fields, and if you write a parser that might loop an unbounded number of times before filling in those fields, it just rejects the program. In effect these are loop-free programs.

It seems like you were able to make a lot of progress here because P4 is restricted in this way, and therefore the preconditions that you use are amenable to analysis. This works for the data plane — do you think there are similar restrictions placed on control-plane programs that would allow similar techniques to be applied to [inaudible 01:10:09]?

For those of you who couldn’t hear, and maybe for the recording: the question is, we’ve gotten a lot of mileage out of the low-hanging fruit of simple data planes — finite state, no loops, no complex data structures — can we do the same thing with the control plane? I think it’s unlikely that we could build a tool that would, push-button and nearly instantaneously, verify properties of real-world control planes. Just think about the size of the software stack and the division of labor between data plane and control plane: a large P4 program is 10 to 20,000 lines of code, and above that, to implement what’s in the standard control-plane software stack, there’s at least one order of magnitude more code.

Think of something like 100,000 lines of code to build the driver and the switch OS, and then the control-plane apps — routing protocols or other functions — on top of that. Most of that code is just written in C or languages like Java. I can imagine that you could write that code in a disciplined way that would make it amenable to verification, but current control planes are not written that way, so I think it’s actually very difficult to take off-the-shelf control planes and try to make them safe or correct. You can ask the same question that Ron asked: if you were to do a clean-slate design of a nice control plane, maybe there’s a set of abstractions that would let you express the same things and be more amenable to verification, and I could believe that would work. But taking real off-the-shelf stuff seems like a tough mountain to climb.

When you were running [inaudible 01:11:59] — the demo of the tool, passing in different flags, checking different sets of conditions — how are those sets of conditions expressed? Are they baked in, or...?

We’ve been iterating with some of the engineers at Barefoot to come up with properties that have caused them bugs and lost time in the past, without a whole lot of rhyme or reason to it — just following our noses. The way that works is we have basically this front end — I think I can show you, if I remember where this goes. It doesn’t go in here. Sorry, I can’t show you; I don’t have a dump of the internal representation. But basically, all that happens is that right after we parse the program, we walk down the AST and start adding the assertions that imply the general safety property. So, to check that every access to a header is valid, we just go find every place where a header is read or written and add a little assert before it saying the header is valid. I’m not quite sure how you would come up with a general or complete set of automatic properties. I guess you could think about…
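A toy version of that instrumentation pass, walking a much-simplified command representation and inserting a validity assertion before every header read — the node shapes are invented for illustration, and the real tool works over the full P4 AST in OCaml:

```python
def instrument(cmd):
    """Return cmd rewritten so that every header read is preceded by
    an assertion that the header is valid (the -val check)."""
    kind = cmd[0]
    if kind == 'read':                      # ('read', header, field)
        _, header, _ = cmd
        return [('assert_valid', header), cmd]
    if kind == 'seq':                       # ('seq', [cmd, ...])
        flat = []
        for c in cmd[1]:
            flat.extend(instrument(c))
        return [('seq', flat)]
    return [cmd]                            # other commands unchanged
```

For example, instrumenting a read of ipv4.ttl yields the assertion followed by the original read, which is exactly the shape of the check that failed in the demo.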

Is there [inaudible 01:13:02] a domain-specific language you have for defining how you can [inaudible 01:13:09] those properties, or [inaudible 01:13:11]?

No, right now we just write visitors over the AST, and it’s ad hoc.

Okay, we will cut it here.

Great.

Thanks, everyone, for coming.