Jane Street
Tech Talks

March 8, 2018

Effective Programming: Adding an Effect System to OCaml

Leo White
Jane Street

Type systems designed to track the side-effects of expressions have been around for many years, but they have yet to break through into more mainstream programming languages. This talk will focus on on-going work to add an effect system to OCaml.

This effect system is primarily motivated by the desire to keep track of algebraic effects in the OCaml type system. Algebraic effects are an exciting new programming construct for implementing control effects such as concurrency, co-routines and generators. Through the Multicore OCaml project, support for algebraic effects is likely to be included in OCaml in the near future.

However, the effect system also allows for tracking side-effects more generally. It distinguishes impure functions, which perform side-effects, from pure functions, which do not. It also includes a tracked form of exception to support safe and efficient error handling.

This talk will give an overview of the effect system and demonstrate a prototype implementation on some practical examples. Time permitting, I also intend to talk briefly about some of my other work on OCaml, including modular implicits and macros.

Leo White

Leo is a member of the Tools and Compilers team at Jane Street, and part of the OCaml compiler development team. He previously worked as a researcher with the OCaml Labs group at Cambridge, and continues to be actively involved with programming language research.


So this is going to be about some work I’ve been doing for the last couple of years with some collaborators down here at the bottom on adding algebraic effects to OCaml. In particular, looking at how we’re going to add an effects system to OCaml to manage these effects and more.

So a good place to start: we’re going to talk about algebraic effects, so let’s talk about effects and what they are. Side effects are the observable effects of running a computation other than its return value. So if a function takes an input and produces an output, anything else about it that you can observe is basically a side effect. Algebraic effects, essentially, are a mathematical approach to dealing with the semantics of side effects, where you think of effects as a set of effectful operations that interact with their context. That is, mathematically, a completely inadequate description, but hopefully enough that you have an intuition for what we mean by algebraic effects.

So let’s say we’ve got some effectful operations that we’re trying to execute. I should start with: how many people recognize OCaml syntax? It’s a lot but not everyone, so I’ll try and point out the bits that matter. This is a little bit of code here that’s going to perform this get effectful operation and then set the result. The idea here is that we’ve got maybe a single piece of integer state that we want to modify. So we’re going to get its value, add one to it, and then set that back into the state. So how might we think about how this executes? The algebraic effects way of thinking about it would be to say that you’ve paused the execution. So when we get to this get, we’ve paused the execution, so my question marks are like “We’re here.”

And we’re going to go out to the context, the environment that’s running the computation, and say, “Could you perform a get operation for me?” That’s going to go and do something useful. Then when it’s got a result, we can resume the computation with the result, in this case five, and carry on going. Thinking of effects this way, we can separate the specification of how we execute pure computations from the specification of what each effectful operation actually does. You can talk about what this is doing without actually getting involved in how get and set are going to operate. That’s all well and good for trying to define the semantics of side effects, but how is it useful for everyday programming? That, though, is not how it’s going to be useful for us today.
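This pause-and-ask-the-context model can be made concrete with the effect handlers that later landed in OCaml 5. The talk itself uses a prototype syntax, so the names here (Get, Set, run_with_state) and the structure are my reconstruction, a sketch rather than the speaker’s code:

```ocaml
(* A sketch in OCaml 5's Effect module style; illustrative only. *)
open Effect
open Effect.Deep

type _ t += Get : int t
          | Set : int -> unit t

(* The snippet from the slide: get the state, add one, set it back.
   Each perform pauses this computation and asks the context. *)
let bump () = perform (Set (perform Get + 1))

(* The "context": it services Get and Set from a real ref cell,
   resuming the paused computation with `continue`. *)
let run_with_state init f =
  let state = ref init in
  match_with f ()
    { retc = (fun v -> v);
      exnc = raise;
      effc = (fun (type a) (eff : a t) ->
        match eff with
        | Get -> Some (fun (k : (a, _) continuation) -> continue k !state)
        | Set v -> Some (fun k -> state := v; continue k ())
        | _ -> None) }

let () = run_with_state 5 bump   (* Get is resumed with 5; Set stores 6 *)
```

The handler record is where the “go and do something useful, then resume” step lives: each effect case receives the paused computation as a continuation k.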

Essentially, what we’re going to use algebraic effects for is defining new side-effecting operations that aren’t supported by the language that you’re in. You could be in some language where functions natively have no side effects, like Coq or Agda, or where they have very few side effects, like Haskell, and you might want to use some state in your function even though there’s no support for state in those languages. Or you might be in a language like OCaml, which has all the ordinary side effects that you’d expect to find: it’s got state, it’s got interactions with the operating system, it’s got exceptions. But maybe there’s some other side effect, something a little more exotic, that you’re interested in using. So you want to be able to implement them. You want to be able to write your computations as if OCaml could do this side effect, and then describe how to implement it in terms of things OCaml can actually do.

In particular, the side effect that motivated us to look at this in the first place is concurrency. Here I have a concurrent computation, in some OCaml syntax plus the algebraic effects syntax. So, OCaml syntax first: we’re defining a recursive function foo here, taking two parameters, id and depth. If the depth is greater than zero, we’re going to print some statement, and then this here is the algebraic effects bit: we’re going to ask the context in which we execute if it can perform a fork operation for us. In concurrency terms, that’s saying we want you to fork off this thread to run concurrently with what we’re doing. Then we’re going to print some other statement, then fork a second thread. In the other case, when depth is zero, we’re just going to print some statement and then, again, another algebraic effect: we’re going to ask the environment if it can yield for us. If you’ve got two computations running side by side concurrently, a yield is saying, “Go and let the other guy have a go for a bit.”

That’s what you would write in terms of algebraic effects. The important thing is how you can then run such a function. If you’re going to run some concurrent functions, you need to define your scheduling, and this is what’s called an effect handler. Up here, we’re performing some effects, and in this part here we’re going to be handling them. I should just run that program. So yes, this function takes two integers and returns a unit. Now, in the scheduler, the important part, the algebraic effects part, is these two cases down here. This is a very simple scheduler: we’ve just got a queue of things to do, and we’re going to push things onto it and then pop things off when we want something new to do. I should explain, I suppose, this match … with, if you’re not familiar with pattern matching syntax: it’s saying, run f and then compare the result with these cases. As I said, the important part is these two down here, this effect yield and this effect fork.

This is telling us how we’re going to implement fork and yield when running f. So the scheduler is going to take in f and run it, and if f performs a yield effect, what we get is both the effect yield and this k. This k is a continuation, so it’s representing the computation we were running, the computation that performed the yield. If I go back to our little view, it’s like this thing boxed up: we’ve got this computation, it’s waiting for some value, we’ve wrapped it up as a continuation, and then we’ve jumped down to here. What we’re going to do when something yields, well, we’re going to push that continuation onto the queue of things to do. Then we’re going to pop something else off the run queue and do that instead. You can see how that’s implementing yielding behavior. If someone performs a fork effect in f, all that’s going to do is take the continuation, what we’re currently doing, stick that on the queue, and then recursively go back around and run the forked function with its effects also handled.

Also, I should say that this is intended to be more like a demo than a lecture, so feel free to ask questions as we go. Right, let’s see it go. So that’s a function that takes a function from unit to unit as input and returns a unit value. If we run this, if we schedule a call to foo down to depth three, it will execute all of those print statements, and you can see that they’ve become interleaved. So they’ve forked some things and yielded, and then forked some more and yielded, and so on.
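The whole fork/yield demo can be approximated with OCaml 5’s Effect module. The talk predates that API, so only the Fork and Yield names come from the slides; the scheduler structure below is a sketch of what was described, with the same queue-of-continuations design:

```ocaml
open Effect
open Effect.Deep

type _ t += Fork : (unit -> unit) -> unit t
          | Yield : unit t

(* The concurrent computation: fork two children at each depth,
   yield at the leaves. Print statements show the interleaving. *)
let rec foo id depth =
  if depth > 0 then begin
    Printf.printf "Forking %d\n" (id * 2 + 1);
    perform (Fork (fun () -> foo (id * 2 + 1) (depth - 1)));
    Printf.printf "Forking %d\n" (id * 2 + 2);
    perform (Fork (fun () -> foo (id * 2 + 2) (depth - 1)))
  end else begin
    Printf.printf "Yielding in %d\n" id;
    perform Yield;
    Printf.printf "Resumed %d\n" id
  end

(* A very simple round-robin scheduler: a queue of paused threads. *)
let schedule main =
  let run_q : (unit -> unit) Queue.t = Queue.create () in
  let enqueue t = Queue.push t run_q in
  let dequeue () = if not (Queue.is_empty run_q) then (Queue.pop run_q) () in
  let rec spawn f =
    match_with f ()
      { retc = (fun () -> dequeue ());   (* thread done: run the next one *)
        exnc = raise;
        effc = (fun (type a) (eff : a t) ->
          match eff with
          | Yield -> Some (fun (k : (a, unit) continuation) ->
              enqueue (fun () -> continue k ());  (* park the yielder *)
              dequeue ())
          | Fork g -> Some (fun k ->
              enqueue (fun () -> continue k ());  (* park the parent *)
              spawn g)                            (* run the child, handled *)
          | _ -> None) }
  in
  spawn main

let () = schedule (fun () -> foo 0 2)
```

Note how Fork parks the current continuation and recursively spawns the forked function under the same handler, exactly the behavior described above.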

So that’s how you would implement concurrency in terms of these algebraic effects, in terms of this idea of performing things and then handling things. Some of you might recognize this perform-and-handle idiom from something that’s common in a lot of languages and is already handled in that way, and that’s exceptions. You can think of perform like raise, and you can think of an effect handler just like an exception handler. In terms of runtime semantics, you can think of algebraic effects as resumable exceptions, basically. So, here’s an example using exceptions: a find function. This is a function that’s going to go through a list looking for some element that satisfies the predicate p. If it can’t find one, it raises Not_found. To use that, you can just run it.

So here, we’re looking for an even number in the list one, three, four, and then we add two to the result. Of course, a downside of exceptions, which we’ll come to later, is that if there is no even number, then we get this unhandled exception, and I’ll talk a bit about that later on.

So here we have a computation that performs, or raises, the Not_found exception. For us to use that, most languages provide you with some kind of try … with construct. That’s the normal thing you get: you execute this code, and if it raises Not_found, well, then we just return None, whereas if it completes, we return Some. OCaml also provides a syntax that is much closer to the effect handler one, where you can see the similarity more clearly. Right here we’ve got a match on this computation: if it raises the exception Not_found, we return None, and if it completes successfully, it returns Some (y + 2). The reason we mostly use this lower one for effect handlers is that it’s much more precise than the other one. If, God forbid, someone changes plus to raise Not_found for some reason, in the first example that raise would get caught here, which is clearly not what we meant, whereas in the second one, the plus is outside the scope of the handler. So that’s why you go for this match style; it’s a better way to write handlers in general.
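Both styles are ordinary OCaml today, so the comparison can be written out directly. This is a minimal sketch (the predicate and lists are mine, matching the running example):

```ocaml
(* find: first element of l satisfying p; raises Not_found otherwise. *)
let rec find p l =
  match l with
  | [] -> raise Not_found
  | x :: rest -> if p x then x else find p rest

(* try ... with: the handler also covers the (+ 2), which is too broad;
   a Not_found raised by the addition would be caught here too. *)
let lookup1 l =
  try Some (find (fun x -> x mod 2 = 0) l + 2)
  with Not_found -> None

(* match ... with exception: only the scrutinee is inside the handler. *)
let lookup2 l =
  match find (fun x -> x mod 2 = 0) l with
  | exception Not_found -> None
  | y -> Some (y + 2)

let () =
  assert (lookup1 [1; 3; 4] = Some 6);
  assert (lookup2 [1; 3; 4] = Some 6);
  assert (lookup2 [1; 3; 5] = None)
```

The match form is the one that generalizes cleanly to effect handlers, since an effect case sits alongside the ordinary return cases.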

Right, so a couple more examples of implementing side effects using algebraic effects. This one is going to be a generator. Like concurrency, this is one that a lot of languages have recently been adding native support for. But again, rather than coming up with some native support, with algebraic effects you can just let the user implement generators however they want to. This function here, the evens function… sorry, I should tell you what a generator is first. A lot of people describe a generator as a function that returns multiple times, over and over again. For the purposes of this, I would think of a generator more as a function that takes a value and returns a value, but as a side effect it produces some stream of values.

From that perspective, you can see how this functions as a generator. We have some reference that’s kind of a current count, and we have this infinite loop that’s going around, incrementing the reference, and if the number is even, then we’re going to perform some return effect, returning a value to our context. In order to run a generator written in that style, you need, again, an effect handler. So this effect handler, this total function, is going to take some number n and some generator function, and it’s going to add up the total of the first n things returned by that generator. You can see we’ve got this count, which is a mutable reference containing an integer, and total, another mutable reference containing an integer. Then we’re going to run it. If f returns normally, then we’ll just return the total that we’ve counted up so far.

If f performs a return effect, what we’re going to do is add the x that it returned to the total. We’ll increment the count by one, and if the count is bigger than n, so that we’re finished, we’ll just return the total. Otherwise, we will resume the computation with this continue function here; it’s resuming the continuation. So there’s a call down there to compute the total of the first five even numbers, which hopefully is 20. Yep, you can see how that’s…
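The generator and its handler translate fairly directly into OCaml 5’s Effect module. This is a sketch, with the effect named Ret (the slides call it “return”, which clashes with nothing here but reads awkwardly as a constructor), counting even numbers from zero so that the first five sum to 20:

```ocaml
open Effect
open Effect.Deep

type _ t += Ret : int -> unit t

(* The evens generator: as a side effect, stream even numbers to the
   context via the Ret effect, forever. *)
let evens () =
  let r = ref 0 in
  while true do
    if !r mod 2 = 0 then perform (Ret !r);
    incr r
  done

(* total n f: sum the first n values the generator f produces. *)
let total n f =
  let count = ref 0 and sum = ref 0 in
  match_with f ()
    { retc = (fun () -> !sum);   (* generator finished early on its own *)
      exnc = raise;
      effc = (fun (type a) (eff : a t) ->
        match eff with
        | Ret x -> Some (fun (k : (a, int) continuation) ->
            sum := !sum + x;
            incr count;
            (* Done: drop the continuation (fine for a sketch). *)
            if !count >= n then !sum else continue k ())
        | _ -> None) }

let () = Printf.printf "%d\n" (total 5 evens)  (* 0+2+4+6+8 = 20 *)
```

The retc case is the “first branch” the audience asks about next: it only fires if the generator itself returns, which this infinite evens never does.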



The first branch of that match, will it ever match?

This one?


Sorry, I’ll repeat the question. The question was: will that first case in the match ever match? Not for this call we’ve got here with evens, because evens is producing an infinite stream, but if you… Let’s just pop back to evens. So if we say… If I changed…

If [inaudible 00:15:17] is at the bottom, it’ll do it. It’ll take that branch.

Yeah, if it was only while x was bigger than one, then it would’ve taken that branch.

Right, and one last example of effects: state. Obviously, OCaml already has state, so you might not want to implement it as an effect directly, but it shows the completeness of the approach, that you can implement the basic forms of side effect this way. We have this example. So we’re going to assume here, again, just a single piece of integer state that we can set with set and get with get. This function here is computing the factorial of something: it starts by setting the state to one, and then it iterates from one to n, getting the current state, multiplying it by the current index, and then setting the state to that again. Then at the end, we just get the result.

We have these computations performing these get and set operations, and we need the handler in order to run them. So here is a handler for a single piece of integer state. It’s slightly more tricky to get your head around than the other ones; if you’re familiar with the state monad, it looks basically exactly the same. What this handler is doing is transforming your computation into a function that takes an initial state and returns a value. So in our return case, if f(x) just returned something, then we produce a function that ignores the state and just returns the value. If f(x) performs the get effect, then what we do is continue the continuation and pass it the current state.

So here we have continue k s, that is, resume the computation and pass in the value s, the current state. Then, because what we’re making is a function that takes a state, we pass the state on to that as well. The set effect becomes a function that takes a state, ignores it, continues the continuation with unit as its result, and threads through the new state value that you passed it. We run this one by passing it zero, so our initial state is just going to be zero. As I say, it’s a little tricky to get your head around that one, but it’s a fairly standard construction. If I run it, it does successfully produce a factorial of five, which is good. So yeah, that’s a load of examples of why you might want to have algebraic effects, or how you can use them to implement side effects that maybe your language doesn’t support.
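The state handler described here is a standard construction, and it can be written against OCaml 5’s Effect module. A sketch, assuming the same Get/Set effects; the trick is that the handler’s answer type is itself a function from the state to the final value:

```ocaml
open Effect
open Effect.Deep

type _ t += Get : int t
          | Set : int -> unit t

(* Turn a Get/Set computation into a function from an initial state
   to a result: the same shape as the state monad. *)
let run_state (f : unit -> 'a) : int -> 'a =
  match_with f ()
    { retc = (fun v -> fun _s -> v);       (* done: ignore the state  *)
      exnc = raise;
      effc = (fun (type a) (eff : a t) ->
        match eff with
        | Get -> Some (fun (k : (a, _) continuation) ->
            fun s -> continue k s s)       (* feed in s, thread s on  *)
        | Set s' -> Some (fun k ->
            fun _s -> continue k () s')    (* drop old state, use s'  *)
        | _ -> None) }

(* The factorial from the slide, written against the state cell. *)
let fact n () =
  perform (Set 1);
  for i = 1 to n do
    perform (Set (perform Get * i))
  done;
  perform Get

let () = Printf.printf "%d\n" (run_state (fact 5) 0)  (* prints 120 *)
```

Each continue k … produces another state-accepting function, which is immediately applied to the threaded state; that is the “pass the state to it as well” step in the talk.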

A quick note on the implementation: roughly, at runtime, what are they actually doing? Here’s our standard effect handler: it’s going to match on some body, there’s a return case, there’s a handler. Essentially, whenever you enter the body of an effect handler, you’re creating a fresh call stack. You know, when you run a program, you’re running on a call stack that’s tracking the return addresses of the functions you’ve called, and when you go into a handler, you’re creating a fresh stack. These stacks are dynamically resized, so we start with a very small one; if you call a few functions, it’ll grow, maybe call some more, it’ll grow, up to whatever size you need. Then when we perform an effect, what we do is take that stack that we’ve made, wrap it up as a continuation, jump back to the previous stack, and run the handler there, passing it the continuation, so it executes on the earlier stack.

When you then continue that continuation, we reinstate that stack, jump back to the top of it, and carry on where we left off. So that’s a quick overview of how they’re implemented; I won’t go into huge detail.

Sorry, so presumably this continuation has a type. Does that type involve only the value that you’ve passed to the continuation to resume it or does it also include the type of value that will eventually be returned by that handler?

It has three, in fact. It’s got the input type that you pass the continuation, and your return type, which is the type of the body. It’s also got the effects that might be performed by the thing as well. I haven’t got that far yet; we’re going to track types and talk about the type system later.

So that’s roughly how the handlers work. So far it’s all well and good, but as I hinted at earlier, there is an obvious downside to having these algebraic effects. If you define your effects as operations that interact with their context, you’re clearly running the risk of trying to execute them in a context which doesn’t know how to execute those operations. That’s clearly a problem, and in the degenerate case, the exception case, it’s a very familiar problem for anyone who’s ever used Java in particular. That is the Not_found exception: an unhandled exception, basically.

You’ve run this find computation to find even numbers in the list one, three, five. It’s essentially attempting to perform the effect Not_found, and you haven’t told it what to do in that case, so your program is forced to just stop and say, “I have no idea what to do.” At best you’ll get a backtrace, so that’s obviously not ideal. The same thing happens with more general effects: if I try to run our evens generator without telling it how to handle its effects, then, for technical reasons, it says the word “exception”, but what it’s complaining about there is the fact that the return effect is not being handled.

Now, one way of implementing side effects in a language that doesn’t suffer from this problem is monads. There are two slides on monads: if you know what they are, then hopefully they will be informative; if you don’t know what they are, hopefully this talk means you will never have to, so I’m just going to leave it there. Fundamentally, this is basically an exception monad, which we could use instead of raising an exception. It’s got return and bind, which do the things they obviously should do from a monadic perspective, and we can use them to implement our find function. So you can see it now uses Error instead of raising an exception, and it returns Ok x. Down here, where we run the find, we have to use bind to get the result out.
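A minimal version of this exception monad, roughly the shape on the slide, can be written in plain OCaml. I’m using the built-in ('a, 'e) result type, and the constructor name Not_found_ is my stand-in for the slide’s not-found case:

```ocaml
(* The exception monad: return and bind over ('a, 'e) result. *)
let return x = Ok x

let bind m f =
  match m with
  | Ok x -> f x
  | Error e -> Error e

type not_found = Not_found_

(* Monadic find: the possibility of failure now shows up in the
   type, ('a, not_found) result, instead of an untracked exception. *)
let rec find p l =
  match l with
  | [] -> Error Not_found_
  | x :: rest -> if p x then Ok x else find p rest

(* At the use site, we have to bind to get at the result. *)
let lookup l =
  bind (find (fun x -> x mod 2 = 0) l) (fun y -> return (y + 2))

let () =
  assert (lookup [1; 3; 4] = Ok 6);
  assert (lookup [1; 3; 5] = Error Not_found_)
```

The point made next in the talk is visible in these types: not_found appears in find’s signature, so forgetting to handle it is a type error rather than a crash.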

Another important point, from the perspective of this talk, is that unlike the code we were looking at earlier, unlike this find and this result… if you look at the types here, there’s no mention of Not_found anywhere. We’ve got no idea, when we look at find, that not finding might happen, or even here, that there was a risk it would happen and we didn’t handle it.

If we come back to the monadic one, we can see… so result is the exception part of it… You can see it right here in the…

So you can see right here, right on the far right, in fact: you can see the not_found result right there in the type of find, and again with this x, you can see the not_found result. When we use the monadic approach, we track the side effects in the type system and therefore prevent this problem of having unhandled effects.

That’s good, but monads have some problems compared to algebraic effects, and we’d quite like to avoid at least some of those disadvantages. One is that when you use monads, you kind of split the world into two things. You say there are pure functions, and then there are these functions that perform computations, and they’re very separate. So with higher-order functions, you tend to end up with two copies: you have this map, which tells you how to map a pure function over a list, and then this mapM, which tells you how to map a function that does effects over a list. Really there’s no need for both.

You can use the identity monad to try and just get away with mapM, but then you’re forced to use monads everywhere, in every type signature, and it’s kind of annoying. That’s not ideal. It’s also awkward to use two monads at once in the same expression: you can use monad transformers to try and build up a combined monad, or you can mint a fresh monad, neither of which is really what you want to do. There’s a lot of finicky, annoying boilerplate with monad transformers. That’s frustrating.

Then there’s the performance aspect as well. Unlike with algebraic effects, which use those stacks to run the code as normal, here you’re creating a lot of intermediate data structures as you go through the computation, and you’re also creating a lot of closures. To get back to good performance, you’re going to rely pretty heavily on your optimizer, and especially your inliner. You don’t really want to have to do that if you can avoid it: inlining is always heuristic-based, it’s always going to be a little fragile, and it takes away from the predictability of your performance.

We’d rather not just use monads; we’d like to use the algebraic effects. So what we’re going to do is solve that problem of having these unhandled effects. How might we do that? There’s a fairly large body of work on the idea of tracking effects in the type system, in what are called effect systems. There’s quite a bit of variety, but they all look something like this if you’re familiar with type systems.

I’ll move on to an easier description in a second, but fundamentally what we’re going to say is that an expression not only has a type, it also has some effect, so you’ve got to track both, and we’re going to attach these effects to the types of functions. As a more layman-friendly presentation: something like perform (Get "hello") + 1 has the type int, but it also has the effect Get: string -> int.

Then if we take that expression and wrap it in a function, we’re going to get a type that looks something like this. What we’ve got now is that f is a function that takes a unit parameter, returns an int, and in doing so performs the effect Get, which goes from string to int. I should say these are structural effects, so a function with this type can perform Get if it likes; you don’t need to define the effects anywhere, the label is sufficient.

These effect descriptions that we’ve got between these square brackets, let me say a little bit about them. Essentially, these effects are a description of the stack of handlers that need to be in your context to run this function safely. So they’re a list of effects, basically, but you can permute them: the order between effects with different labels doesn’t matter, so if you want to run in a context that supports both get and set, it doesn’t matter which way around you write them; those are the same. But if you have, and it can happen, a Get at an int and then a Get at a string, then you can’t permute those, so you need to have a handler that handles Get at int, and then outside that handler, one that handles Get at string. So it’s kind of like a list, but where the order can be permuted if the labels don’t match; that’s what they are in the type system.

Basically, it’s always going to be safe to ask for more effects. A function that only requires there to be a handler for get doesn’t care if there’s also a handler for set; that’s not a problem. So you can always subtype these to larger effect descriptions. Now we can look at some examples with these types. This is the head function, which is just trying to take the first item off a list. So we have a pattern match, actually; this might be the first proper pattern match in the whole talk. Basically, head takes a list, and if it’s the empty list, it’s going to throw Empty; otherwise it’s going to return the first item.

This throw is essentially like raise. We’re going to leave the normal raise in the language for things like programming errors, which really should kill your program, and add an additional ability to have these tracked exceptions. That’s something which has maybe become a bit of a dirty word thanks to Java’s terrible attempt at checked exceptions, but that’s essentially what we have here. You can see from the type of head that it’s now very clear: head takes an alpha list, returns an alpha, and it’s potentially going to throw this Empty thing. So we can see in the type that this case needs to be handled. Indeed, if we try and run head at the top level, the type system’s not going to let us; it’s just going to say, “this binding performs an unhandled effect Empty.”
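For concreteness, this is roughly how head looks in the prototype. The surface syntax is my reconstruction from the talk, not something current OCaml accepts, so treat the notation as approximate:

```ocaml
(* Prototype effect-system syntax: a sketch, not valid OCaml 5. *)
let head = function
  | [] -> throw Empty
  | x :: _ -> x

(* Roughly the inferred type described in the talk:
     val head : 'a list -> 'a [Empty]
   The [Empty] part records the tracked exception, so using head in a
   context with no handler for Empty is rejected by the type checker. *)
```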

If we want to use it, we’re actually going to have to surround that in a handler; then it would be fine and it would run. And so we can return to our evens example, our generator. Again, look at the type. Rather than being unit to unit, which is a pretty worrying type for something that does so much work, it’s now unit to unit performing this return effect, which takes an int and expects a unit back. Again, just like with the throw case, just like the exception effect, we will get an error message if we try and run this in a context which doesn’t know how to deal with return.

The last part that we need in order to type check things with effects is effect variables. Basically, we’ve now got a way of describing the fact that some function expects some effects to be handled. We also need to be able to describe higher-order functions, and how the effects flow between them. So here I’ve got the type for List.map. Essentially, what you want to be able to say is that if the function you pass to List.map performs some effect, then running List.map is also going to perform that effect. That’s what’s important. So here we’ve got this ’e, and that’s an effect variable. It’s very much like this ’a here, which is a type variable in OCaml; it’s just an effect rather than a type, a different kind. It represents that any effect that this function might perform is also going to come out here. That’s what effect variables are for.

In practice, the vast majority of functions behave in a very similar way when it comes to this. Either they’re not higher-order, they take in no functions, and then there’s no flow of effects to worry about. Or they are higher-order functions: they take in some functions as input, and then at the very end, they just perform all of the effects that came from all the functions. This is normally what you do, right? You could have something that didn’t do that: you might take one function in, take another function in, wait for one more argument, call the first function, wait for another argument, call the second. But it’s very rare. In the vast majority of cases, you only need the one effect variable, like this: just the effects that are going to happen at the end.

With a bit of syntactic sugar, we can make our map type much easier to deal with. Essentially, now we’ve replaced this !e by just a tilde on our arrow, and that just represents a special effect variable, written bang tilde; it’s just shorthand for that. You can basically write all your function types with just the tilde arrow, and the straight arrow if they’re not using any effects. That makes things nice and succinct, pretty easy to use.

I think they’re easy to comprehend as well: you can just see that map takes a function that does some effects, and then does those effects itself.

Is that the same as the function being pure or…?

Is this the same as the function being pure? It depends. It’s kind of hard to talk about… I would say that map is pure because it doesn’t do any effects of its own, but obviously it runs a function, so if that function’s not pure, then the map isn’t pure either. It’s polymorphic in its purity, essentially. So this is effect polymorphism we have here.

So, I thought I would try something that’s not a toy demonstration, because it’s always nice to see some larger code. My colleague in London, Jeremie Dimino, has written a build system; if you’re in the OCaml community, you probably use it. It’s called Dune. It has its own concurrency library, because it tries not to have many dependencies, so it doesn’t want to use Async or Lwt. What I’ve done is I’ve taken that library and converted it to use effects. So, let’s have a little look at the monadic interface of the original library. He has a fiber type, an alpha t, which is your type of fibers: a computation that does some concurrent effects. It has the usual monadic stuff of return and bind and map, but we don’t really care about that. The important things that he has are really these two functions here, fork and wait.

Fork is going to take a unit to alpha t, so that’s a concurrent computation returning an alpha. It takes one of those in and gives you an alpha future t: basically, you’re going to fork off this thread and get given back this future. Later on you can call Future.wait, which will basically pause your thread until the other one is finished and give you the value that it returned. Those are the two functions at the core of this library: we have fork here and wait.

There are a variety of other ways to fork, plus some monadic stuff you don’t really need; there are many ways to fork in this library. It’s also got some variables stuff, but that’s like a reader monad that I don’t want to talk about, and some error handling I also don’t want to talk about. Then he’s got mutexes at the bottom, and ivars, which are, like mutexes, a kind of way to control concurrency. Finally, he’s got yield, which is also a useful one for cooperative concurrency: it takes a unit and returns you a unit computation that will yield to the scheduler, which will switch you off to some other thread.

Then there’s this run function, which is what’s going to actually execute your concurrent computation: it takes an alpha t, a concurrent computation returning some type alpha, and returns you the value that you get from running it. When we turn that into using effects instead, the interface starts to look like this. We have this effect, Async; I’ll talk about its definition in a moment. Now the fork function, rather than having this computation type t, just takes a function that goes from unit to alpha doing Async things, and then it does Async things itself and gives you an alpha Future.t.

Wait is just a function that takes an alpha future, does some Async effects, and returns an alpha. I’m basically going down the API; the changes are all basically of that nature. We take away all the ts and we put Async effects into the right places. You can see yield, for example: it’s now a unit to unit function that does Async. And the run function now takes an alpha to beta function doing Async effects and turns it into an alpha to beta function that doesn’t do Async effects.

Can you make the font a bit bigger?

The font a bit? Yes, I can. Sorry. I probably can, hang on.

Okay, there we go. Sorry about that.

Was any of that followable seeing as probably no one could read what I was pointing at?


Never mind. Right. Where was I?

Run, so run now goes from an A to B performing Async and turns it into an A to B that doesn’t perform Async. You can see an effect handler has a very obvious type that just removes the effect from the type of the function. That’s our scheduler. In terms of the implementation, I definitely don’t have time to go into it in detail, because it’s quite long if nothing else, and quite complicated. Although it was complicated before I started playing with it; I didn’t add the complexity.

But I did want to show roughly what I was doing with the definition of the Async effect. By the way, this statement here is just defining a new effect, giving a name to this thing on the right. Basically, I’ve got this big op type, and this is all my asynchronous operations. I decided, rather than putting them all in as separate effects, to just have one effect and then a GADT to decide between the operations. You can just think of it as a giant effect type here with a lot of different things in it, because there’s a lot of different things this library does that are concurrent. You could probably get away with about three if you were willing to sacrifice some performance.

And the reason that I’ve done it as this kind of op type, separate from the effect, was so that in the interface I could make this abstract. So I’ve hidden from you what actual operations I’m doing. As far as you’re concerned, I’m doing some abstract set of operations, all labeled with Async. That’s basically a design decision. You can choose between exposing things and therefore making your computations more extensible, like they could be run by some other scheduler, for instance, if I had made them all explicit. Or I can use abstraction like I have here to hide them, mostly because I’ve got loads of them and the design is kind of unsettled, so I didn’t really want to let everyone know what was going on and then break that code when I changed it later. Anyway, so I’ve done that.
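The shape being described, one Async effect whose payload is a GADT of operations, might be sketched like this. The operation names and the effect-declaration form are illustrative, not the real scheduler’s definitions:

```ocaml
(* One GADT of operations, indexed by each operation's return type.
   Only a few representative constructors are shown; the real library
   has many more. *)
type 'a op =
  | Fork  : (unit -[async]-> unit) -> unit op
  | Yield : unit op
  | Wait  : 'a future -> 'a op

(* A single effect wrapping every operation; in the interface the op
   type can then be left abstract, hiding the operation set. *)
effect async = Op : 'a op -> 'a
```

Keeping `op` abstract in the interface is what lets the operation set change later without breaking client code.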

The more interesting thing really is how it feels like to use it. This is really annoying that I can’t type space.

Perhaps just copy a space character and just paste it in.

Yeah, I will actually do that. I’ll have to now. It was working fine earlier, I have no idea. Anyway.

Here’s just a simple concurrent program using the monadic version of the library. It’s a nonsense program; it’s hard to come up with small examples that aren’t nonsense. What is it going to do? I have this say function. It takes two strings and a number, and it’s going to say both the strings that number of times, yielding between them. So you can see that it loops recursively here, where I decrease the count each time until zero. It returns unit and it prints things, and we have to use bind here with the yield. Then we make this thread which is going to fork off something that’s going to say, “Hello, world,” five times, and simultaneously it is going to say goodbye twice with a yield in the middle, and then it’s going to wait on the other one. It’s going to wait until the “hello world” is finished before finally printing done.

So that’s what the code looks like with the old library, and am I going to run it? I am going to run it. It does what it should: hello, goodbye, world. And then it stops saying goodbye and just says hello world until done. Here is the code using the effect-based version, and basically all the binds and returns have gone. That’s pretty much the thing. It’s, I think, quite satisfying to be able to just write things in direct style rather than in monadic style. Obviously, you can use do-notation to try and tidy up monadic style, but really it’s just nice to use semicolon for things.

So here this time, rather than, as we were, having to perform yield with a bind on it and create a closure, we just call the yield function. Here, rather than binding on fork, we just call fork and let-bind the resulting future, which we then use down here to wait on it. Which is all very good, and you can see in the types that we’ve now got this… I’m thinking my type printing is not great. I need to improve it. It should just say Async, but it’s expanded it out because I haven’t paid much attention to fixing that. Anyway, you can see that say is going to take two strings and an int, do some asynchronous stuff and return unit. And our thread function takes a unit, does some asynchronous stuff, and returns a unit. It also hopefully does the right thing when executed.
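A rough reconstruction of the two versions of say (the exact strings and loop shape in the demo may differ):

```ocaml
(* Monadic style: all sequencing goes through bind and return. *)
let rec say first second n =
  if n = 0 then return ()
  else begin
    print_endline first;
    bind (yield ()) (fun () ->
      print_endline second;
      say first second (n - 1))
  end

(* Direct style with effects: the binds and returns disappear, and
   ordinary semicolons sequence the effectful calls. *)
let rec say first second n =
  if n = 0 then ()
  else begin
    print_endline first;
    yield ();
    print_endline second;
    say first second (n - 1)
  end
```

The bodies are otherwise identical; only the plumbing changes.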

Now, to some specific advantages of switching our monadic thing over to these algebraic effects for our concurrency. The first is that it interacts well with higher-order functions. Let’s say I had a list of strings, in this case the numbers one to five, and I just wanted to fork off a thread for each of those strings. With the effects version, all I have to do is list.map over the strings and run fiber.fork inside it to fork off the thread. Whereas if it were monadic, I’d have to use mapM, or in OCaml I’d more likely have to have something called fiber.list.map and use that instead, and create these extra functions that we don’t really want to have. But here, I can just use list.map; it’s fine. Similarly here, where once I’ve made this list of futures of the same lot of numbers, I can wait on them all by just iterating a call to wait over all of them.
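A sketch of the higher-order-functions point, with Fiber.fork, Future.wait, and the list of strings assumed from the demo:

```ocaml
(* With effects, the ordinary List.map and List.iter just work. *)
let strings = ["1"; "2"; "3"; "4"; "5"]

let futures =
  List.map (fun s -> Fiber.fork (fun () -> print_endline s)) strings

let () = List.iter (fun f -> Future.wait f) futures

(* Monadically you would instead need a special traversal, e.g.
     val map : ('a -> 'b Fiber.t) -> 'a list -> 'b list Fiber.t
   and a similar variant of every other higher-order function. *)
```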

There we go, that runs. It just says a lot of numbers. I think the examples are getting sillier as I go along; I think I was running out of ideas. Here’s another one. I think this one’s vaguely plausible. This next function is a search. We’ve got something like an associative list, but each entry is a future because it hasn’t finished yet, and all it’s going to do is try to find you the item in the list with the right key, but it’s only going to wait on the futures that it needs to as it iterates down the list. That’s what this function does. What this is demonstrating is that we can easily mix two different effects together. This function, because it’s quite possible that you’re looking for a key that isn’t in your list, throws one of our effect exceptions when it can’t find one.

It also is going to be using future.wait to wait on things. If we type check it, we can see that it indeed performs both effects. So it takes this list of key and future pairs, and then it performs Async operations and not_found, and returns a bool. And so again, if we were dealing with monads, we’d have had to create our own fiber-plus-error monad. Question, yeah?
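The search might look roughly like this, mixing the Async effect (via Future.wait) with an exception effect (via throw). The name and exact signature are reconstructions, not the demo’s code:

```ocaml
(* Walk an association list of futures, waiting on only as many
   entries as needed; throw an effect exception if the key is absent.
   Performs both the Async effect and the not_found effect. *)
let rec find key = function
  | [] -> throw Not_found
  | fut :: rest ->
      let (k, v) = Future.wait fut in
      if k = key then v else find key rest
```

Both effects simply accumulate in the inferred type; no monad transformer stack is needed to combine them.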

How come not found doesn’t have a type associated for the effect?

Oh, yeah, that’s because it’s an exception one. See how I used throw instead of perform? So it can’t come back, so there’s no return type, and then I didn’t give it any arguments, so there’s no argument type, so it ends up being just like that basically, that’s all.

So if you use throw, it is impossible for it to be resumed, so there’s no need to talk about what type it resumes at. We distinguish them from the more expensive performs because exception effects can run much faster than non-exception ones, since you don’t have to keep the stack lying around, so they kind of split into separate things.

As I was saying, if you were doing this monadically, you’d have to create some fiber-plus-error monad, or you’d have to make fiber into a monad transformer, make your error into a monad transformer, and then define the instances that do the lifting across the two of them. That’s quite a lot of work that I don’t want to do. I just want to write this, or something useful that looks like this.

We’ve now got this type system that tracks all our algebraic effects, and we can use algebraic effects to implement new effects in our language of choice, which, because of where we are, is OCaml. Could we go a bit further than that? What about the effects that OCaml supports natively? OCaml already has all these other effects, like state and I/O. Could we just track those in the type system as well? It might be nice to know when functions are pure and when they aren’t. Pure functions are easier to reason about: their behavior is constrained in more ways, so you can think less about them, and so on.

Could we use the effect system to track those? The way we’re going to look at that question is to think about whether we could implement the effects that OCaml already has as algebraic effects. I think we saw this function before. This is our factorial function, but this time it’s just using OCaml’s normal references. It creates a reference cell with a one in it, iterates between one and n, getting the value out of the reference cell, timesing it by the current index, and setting it back into the reference. So we could instead replace these uses of the references OCaml provides [inaudible 00:48:27] with some algebraic effects: we could replace the allocation by performing some new effect, which would give us a reference r, which we would then pass to a get effect and again to a set effect.

What we need in order to run OCaml’s state effects is some handler that will work for those operations. Here we have a kind of generalized version of the state handler we saw earlier, except now, rather than just having a single piece of integer state, we’ve got this map from our keys, the references, to the value that’s stored in them. So you can see that if the computation we’re handling returns, then we just return something that takes the store, ignores it, and gives back the value. If it performs this new effect, we want to create a new reference, so we add the key, since we’re using the integers as keys: we add the new reference location into our store, set to the value that was passed, and we increment this counter on the right, which is our counter of the next free reference. Then when you get, you’ve got this reference location, which is just an integer that you can look up in your store.

We use that to look up the value stored there and carry on. In the set case, again, we get given a reference location, which is just this integer into our map, and we use it to set the value of that reference. Then we start the computation with an empty store, and zero as the next free reference number. This will indeed run our function. So yeah, you can see int is being used as our reference type: new gives you an int, set takes an int and a b and gives you a unit, and get takes an int and gives you back a b. So we run our factorial function with this handler, and it does indeed work correctly. So we could implement OCaml’s built-in references by just using our algebraic effects. The only subtlety here is that we have to be very careful: we want to make sure that the references that we pass here have always come from new. We don’t want some other reference. So if we had two of these handlers, we’d have to make sure their references didn’t cross over.
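The talk predates OCaml 5’s released (untyped) effect handlers, but the mechanism being described can be run today with the Effect module. Here is a minimal, runnable sketch of the simpler single-cell integer-state handler mentioned earlier, written in store-passing style; run_state and the Get/Set operations are my reconstruction, not the talk’s code:

```ocaml
open Effect
open Effect.Deep

(* Two operations on a single integer cell, as algebraic effects. *)
type _ Effect.t += Get : int Effect.t
                 | Set : int -> unit Effect.t

(* Run f with state initialised to init: the handler turns the
   computation into a function from the current state to its result. *)
let run_state (init : int) (f : unit -> 'r) : 'r =
  match_with f ()
    { retc = (fun v _s -> v);
      exnc = raise;
      effc = (fun (type a) (eff : a Effect.t) ->
        match eff with
        | Get -> Some (fun (k : (a, _) continuation) s ->
            continue k s s)          (* answer Get with the state s  *)
        | Set s' -> Some (fun (k : (a, _) continuation) _s ->
            continue k () s')        (* continue with the new state  *)
        | _ -> None) }
    init

(* Factorial via mutable state, as in the talk's example. *)
let fact n =
  run_state 1 (fun () ->
    for i = 1 to n do
      perform (Set (perform Get * i))
    done;
    perform Get)
```

From the outside, fact behaves like an ordinary int-to-int function, which is exactly the observation the regions material later builds on.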

In OCaml, we could obviously just say, “Well, it’s just one giant handler around the entire program that deals with all of them: all of the references come from this handler and all the gets go to that handler.” So we can be sure that this find operation here is not trying to look up something that we haven’t put [inaudible 00:51:32].

Would you actually want to implement references or mutable references in OCaml like that?

No, not even slightly. It’s going to be horribly inefficient. I think when you compare it to just setting some memory, that’s difficult to compete with in terms of performance. The point is that we could. And we’ll use that in a second for a useful thing.

Yes, in fact, we can do the same thing with I/O. Here’s an I/O function that’s going to open this talk, read bytes 9 to 18 and then some other bytes from 27 to something else, which are in fact the title of the talk, taken from the top. So it’s using seek and open and input to just read values from the file. So we could picture all of those operations as just being algebraic effects. It’s not clear how you write a handler that does them, but you can certainly pretend that your interpreter has some handler in it that knows how to do this. You can just treat all of these calls as performing effects that are being dealt with by the runtime, which has some handler around your whole program.

When I say you could think of it this way, the point is you can’t tell that it’s not what’s happening. You can’t observe that this isn’t how OCaml is implementing your side effects. So what we’ll do is we’ll just type check it as if this is happening, and implement it the fast way. How would that look? Well, I’m not going to type it quite like that exactly. I’m going to abstract it a bit and hide precisely which effects it is OCaml knows how to deal with. Rather than saying that there are all these effects like new and get and open and things, I’m just going to call them all I/O and not tell you what they are, so that OCaml can freely add new ones and remove them without us all getting upset and it breaking all of our programs. We’re just going to have this single I/O effect that represents all of those algebraic effects that we’re pretending OCaml supports natively. So now our print_endline function from the example before is not a function from string to unit, it’s a function from string to unit that performs the I/O effect.

Obviously it’s not going to be very backwards compatible if I was trying to make everyone write the word I/O all over their existing code, but with a bit of syntactic sugar, we can keep things all backwards compatible. I probably should mention, all of this so far and throughout the entire talk is backwards compatible with current OCaml.

So now our impure functions stay as string arrow unit, because we can’t change what that arrow means, and what that arrow means right now is: function that performs OCaml’s built-in side effects. We cannot change that, so we leave it the same. We have a new arrow, which is: function that does not perform any of OCaml’s built-in side effects, and that’s this little double-headed arrow that we have here. In fact, I essentially have a little family of syntactic sugars now: the arrowhead is either single or double, which is whether the I/O effect is in there, and the tail is either straight or a tilde, which is whether you’ve got an effect variable. Those four arrows are enough to deal with… well, for a start, they’re enough to type things.

You can write the types of the entire OCaml standard library with just those four arrows. You don’t need any others.
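The four cases, written here as explicit effect rows rather than the prototype’s actual arrow glyphs (the concrete spellings below are my approximation):

```ocaml
(* single head, straight tail: performs OCaml's built-in effects *)
val print_endline : string -[io]-> unit

(* double head, straight tail: pure, no effects at all *)
val length : 'a list -[]-> int

(* double head, tilde tail: carries an effect variable, no built-in I/O *)
val map : ('a -[e]-> 'b) -[]-> 'a list -[e]-> 'b list

(* single head, tilde tail: I/O plus an effect variable *)
val init : int -[]-> (int -[e]-> 'a) -[io, e]-> 'a array
```

map’s type says it is pure itself and simply propagates whatever effects the supplied function performs; init additionally does I/O of its own, because allocating an array is a side effect.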

What does tilde double headed mean?

That’s got a variable in and no I/O. And the tilde single-headed arrow is going to be: has I/O, plus some other stuff, plus a variable. I’ll show you in a second.

You need to recompile the full library with [inaudible 00:55:03] to have the new types, or…?

Oh yeah, I had to…in fact, in two slides, I’ll show you what I’ve done.

So just quickly, some things. Here are some operations on arrays, and we can see that they continue to look exactly as they did before: they’re impure functions in the type system. The exception here is set. Set obviously takes an array, doesn’t do any effects yet, then takes an a, and then does the effect to actually set the thing. In fact, I think a lot of function types look like that. Most things take some number of parameters before they do anything, so you’ve got a lot of double-headed arrows, double, double, double, and then a single-headed one at the end when it does something.

And then these are some list functions, and they all come out pure, so they’ve all got double-headed arrows now. This also brings up one other point: we’re going to adopt Haskell’s version of purity here. You’re allowed to raise exceptions and you’re allowed to loop forever, whereas really both of those things could be treated as effects. You’re not allowed to handle exceptions: if you handle an exception then you go into the I/O effect, but you can raise them. The reason is that it’s nice to be able to assert false in places where you know things can’t happen without changing your entire program to being impure. I don’t think OCaml’s type system is sophisticated enough to really do the kind of proofs you’d need to make all your code come out pure when you want it to, so instead we just say you can raise an exception, we won’t track that, but you can’t catch it.

The idea here, between these exceptions and the exception effects, between throw and raise, is that you use throw for error handling, for simple errors, and raise for genuinely “this should never happen; if this happens your program is entirely broken.” Things already kind of fall into those two broad categories, so now we have specifically two constructs for them. Right, so this is coming to… You were asking about the library.

To show that having these arrows is vaguely usable, I converted the OCaml standard library. This was actually with a slightly earlier version of this work. The OCaml standard library was, at the time, 101 files and 23,000 lines, and I had to make 4,000 lines of modifications. I had all the details for this in an earlier talk. Basically, half of them were the changes I was actually trying to make, like changing the type of list.map to the type I think list.map should have, which is something that’s pure and propagates the effect. About half the changes were some stupid technicality to do with printf, which I wouldn’t need to do if the inference were slightly better. I don’t know if people are really that familiar with how printf works in OCaml. It’s this gigantic, mutually recursive set of GADTs and a load of mutually recursive functions using them. What it does is very, very clever typing that builds up this arrow type that’s going to be the thing you pass off to the format string.

I wanted it to have a variable in every spot so that you could have effects in there. That turns out to require a lot of typing, because code like that has got type annotations all over it and they all needed updating. But half of the modifications were that, and half of them were things I was actually trying to do. Then there were a few other things, some of which were quite interesting. One was that I had to remove a load of polymorphic comparisons, because polymorphic compare… This is now very OCaml specific, so I’m going to apologize to the non-OCaml people in the room. Polymorphic compare has to be treated as a side-effectful operation, because it goes through and might read some mutable state, so you have to mark it as a side effect, which means switching to a monomorphic comparison.

If you do int equals int, that’s now a side effect, and you have to say int.equal on the two ints. That’s actually really good, because polymorphic compare is evil and everyone hates it. So the fact that you get a marker telling you this horrible thing is happening is definitely a plus.

The changes that I’m talking about here are like this. This is the list module, and you can see now that I had to change the arrow here, because length is pure and I want to tell people that length is pure. Basically everything in list is pure, unsurprisingly. Here we have iter. It’s all pure arrows, and also the effects of the function you pass in come out: they are performed when you run the thing, which is what you’d expect iter to do. The same for all the higher-order ones, so they’re all…

Can you make it bigger again? Sorry.

Oh, sorry. I keep forgetting that this is a small font.

So yes, as I was saying, length is now pure, the double-headed arrow. All of these ones are pure, and the other one I was showing was iter, which you can now see. They’re all pure arrows, and it also says that any effect that the function does will come out the other side, which is what you’d expect. As I said, you can write all of the types of the standard library with just the four arrows that I was describing. I’ve got another one here just to show the other two arrows; we only used two of them in this one. The other two turn up in array. So for all the array functions, I’ll go past the ones that are defined as external. You init an array: you take an integer that’s the size of the array, and a function that’s going to define the elements of it, which might do some side effects, and then you perform those side effects and also do I/O, because you’re creating an [inaudible 01:00:57], so you end up with a single-headed one, a single-headed polymorphic one, because creating an array is clearly a side effect. That’s what type checking is… Is that a question…?

Yes, so couldn’t the function argument to init be a tilde double-head?

So the question was, couldn’t the function argument to init be a tilde double-head? If you make that a tilde double-head, everything will still type check, but what will happen now is you’ll be asking for two I/Os. When written this way, you’re saying that the function might do I/O and some stuff other than I/O, and the result will also be I/O and the stuff other than I/O. If you say that this function does some stuff, and the result does I/O plus the stuff, then if the stuff contains I/O, you get it twice. Do you see what I mean? So where you end up, in practice, is very simple, which is basically: the tilde ones are double-headed, and the one at the end is single-headed. That’s the subtlety of what you’re describing there; it means that you have to do it that way.

So back to the slides. I should speed up now because I’ve had an extra break. Right, okay, so now we are successfully marking the purity of our functions in our types, which is nice. But if we go back to our local state example, this was the example that I used to motivate it, this factorial function. It is correctly marked as being an impure function because it uses state. Except, is it correctly marked as being an impure function? Because this factorial function is pure. It’s using state, but it’s entirely local. It’s making some state and playing around with it, and in the end it throws it away and gives you a value. It’s somewhat disappointing that we’ve now tainted it by saying it’s impure. Frankly, what it does in private with state is very much its own business. We’re not interested in passing judgment.

Why is that the case? If we go back and think about what I was saying before, about how we can pretend that OCaml’s effects are all actually secretly algebraic effects, we have this handler here that handles the local state. Why can’t we just insert one of these handlers around the body of factorial, and thus handle the state, and thus it would be pure? If I ran the code and it was doing these algebraic effects and I put it under this handler, it would come out pure. So, can we do that? And actually, yes, we can do that. The only thing we have to be careful of is that thing I was talking about earlier: we want to make sure that all the references we use with this get here, with this find, have to come from this new. The only way we know that they’re definitely in the store, that there is something in the store at that location, is because we allocated it. So if you had two of these things, and references were being swapped between them, you’re going to get all kinds of horrible runtime errors at best, some kind of exception at worst.

We don’t want that. So what we need to do… This is kind of the interface of references, right? We just have the type of mutable references, alpha ref, a way to create them, a way to read them, and a way to set them. What we want to do, at the type level, to stop that kind of cross-pollination between different handlers, is mark the references with which handler they come from, to make sure that they’re all used in the right place. So we just give them a signature that looks like this.

What I’ve done is I’ve just taken the reference type and given it another type parameter. This time, much like the bang e earlier and the tick a, it’s just a variable representing a type of kind region. These regions are just representing these imaginary handlers that we potentially have. It’s fundamentally representing a portion of your program that the reference is local to. The type of the reference now tells you: I am local to this part of the program.
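A sketch of the before-and-after signatures, in the same schematic notation as before (`@r` is the region parameter and `state @r` the region-tagged state effect; the spellings are approximations, not the prototype’s exact syntax):

```ocaml
(* Plain references: *)
module type Ref = sig
  type 'a ref
  val ref  : 'a -> 'a ref
  val (!)  : 'a ref -> 'a
  val (:=) : 'a ref -> 'a -> unit
end

(* Regioned references: each reference is tagged with the region @r
   of the (possibly imaginary) handler it belongs to, and every
   operation performs state tagged with that same region. *)
module type Ref_regioned = sig
  type ('a, @r) ref
  val ref  : 'a -[state @r]-> ('a, @r) ref
  val (!)  : ('a, @r) ref -[state @r]-> 'a
  val (:=) : ('a, @r) ref -> 'a -[state @r]-> unit
end
```

Because the region appears in both the reference type and the effect, a reference created under one handler cannot be used under another: the effects wouldn’t match.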

Is this like Rust lifetimes?

The question was, “Is this like Rust lifetimes?” They’re certainly quite closely related, yes. This is very much a way of tracking locality in a similar way to how Rust does. Rust is tracking the lifetime of the value, right? It wouldn’t let these references escape into the outside world. What we’re going to be tracking is the lifetime of where the value is used, so it’s slightly different. You could escape this reference out into the outside world; you would just be left with something you couldn’t actually run, because you would have to perform an effect that you weren’t able to handle. It’s a subtly different distinction, but they’re certainly very related, and there is some work I’m doing that’s pushing in that direction, to use this as a kind of borrowing.

As I was saying, we’ve added this marker of where the reference is used, and now, instead of the I/O effect, this thing is doing the state effect, which is also marked with where it can safely be run. As a final point, to get back the old references, we have this region here, this thing we’re allowed to put in for the @r, called global, and that’s the region that’s the whole program. Before, we had this imaginary handler that was wrapping around the entire program. Global is that handler. Old references are just these regioned references specialized to run in the global region. So when you use that typing and you type check factorial, we now get that factorial goes from int to int and it does state for… there’s implicit polymorphism in OCaml here… so this is saying, for any region u, we could run the state.

Since it’s for any region, we could create a local region and just run it in that. For that we have some more syntax. This is where I am going to have to start…. Maybe I’ll just do without spaces. They’re mostly not needed.

I’m certainly not going to indent anything because… Fantastic. I would have indented this so it didn’t look stupid. There, and now it’s pure. So this new construct, private do done, is just creating the equivalent of this local handler and running the state inside of there. If we try to lift this reference out, then our function won’t be pure.

Now this reference is escaping, so this is no longer a pure computation, because you can see it. You try and run that, and you get this error message here. It slightly drips off the edge, but you can see that it says, “This expression performs the effect global state, though we were expecting an expression that performed local number one state.” Local number one is the little local region we’ve made for this private block, so it’s local to this scope. So it’s saying, “You can’t use that reference. That reference doesn’t belong to the private handler you just said you were going to use for all of your state.” This gives you the ability to locally use state without it affecting the purity of your functions, which is very satisfying. Just for the OCaml people in the room, who know that a reference is actually just a record with a mutable contents field: the syntax we use is like this. The definition of rref is a record with a mutable contents field, and the mutation is in the region r. Then you put the region r here.
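Putting the two cases together, roughly (prototype syntax as I understand it from the demo, so the exact keywords may differ):

```ocaml
(* State handled locally with private do ... done: the reference
   lives in a fresh region, so from the outside fact is pure,
   int -[]-> int. *)
let fact n =
  private do
    let r = ref 1 in
    for i = 1 to n do r := !r * i done;
    !r
  done

(* The reference comes from outside, so its state effect is tagged
   with some other region (e.g. global), and the private block
   rejects it with the "expected local state" error shown above. *)
let leak r n =
  private do
    for i = 1 to n do r := !r * i done;
    !r
  done
```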

So then when you try and access such a field, you automatically get the appropriate effect for trying to access this field. It just kind of works naturally, which is nice. That’s all the bits I wanted to show people. There are various interesting bits of future work. I’m going to skip that one.

Regions, as mentioned by the gentleman over there, are really just a general mechanism for tracking locality, through the combination of regions and region-annotated effects. You can use them to solve other problems where you care about locality. In particular, I quite like the file access one. You could tag all your files with the region in which the file is open. Then your read and write perform some file-access effect that’s tagged with the region that it’s allowed to be run in. You just get a with_file function down here at the bottom, which, because OCaml doesn’t have higher-rank polymorphism, I have to split. But basically, if you have something that, for any region, could take a region-tagged file and do some region-tagged file access and produce you some value, then you can run it locally. This with_file is going to give you a file, and you can only use it in the body of that function. Any attempt to use that file externally is going to perform some effect with some existentially quantified region that can’t possibly be handled. You can use this kind of locality to solve those kinds of problems and get rid of those kinds of errors.
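The file-access idea might be sketched like this, in the same schematic notation. For clarity this writes the higher-rank version directly, even though, as noted, OCaml lacks higher-rank polymorphism and the real interface has to be split:

```ocaml
module type File = sig
  type @r file               (* a file handle local to region @r *)

  val read  : @r file -> int -[file @r]-> string
  val write : @r file -> string -[file @r]-> unit

  (* Open a file, run the body with a handle in a fresh region, and
     handle that region's file-access effect locally.  A handle that
     escapes can never be used: using it would perform file @r for a
     region that nobody can handle any more. *)
  val with_file :
    string -> (forall @r. @r file -[file @r, e]-> 'a) -[io, e]-> 'a
end
```

This is the same pattern as private state: the region ties the resource to the scope of the handler that manages it.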

Another question?

Would you be able to…take the fact that the two files are distinct from each other or?

Yes, it will show you that the two files are distinct from each other, although if you use two files in the same scope, what’s going to happen is they’re going to unify: basically the type system is going to stop caring about the innermost of the two, so you’ll lose some information. It keeps as much as it needs to ensure locality, which might not be enough if you’re genuinely trying to keep track of exactly which file is which file and so on. I haven’t given that one much thought, to be honest.

There’s also more fun things you can do. You can do freezing. I’m quite tempted, and probably will, to split the state effect up into read and write, because then, in addition to the global region, we can add an immutable region which only knows how to handle read. So then, for this thing with the @r, you can say, “Oh, if I put immutable in there, I get one that, even though the type in general could be mutable, this one is not mutable.” You can also have freezing: again, you use a forall with a particular r to basically say, “You can use this state locally in this function, and then when the function’s over, I’m going to give you back the value, except now it’s immutable.” You can see how that’s freezing. So you can mutate the value at first, and then, as long as all the mutations are local, you get back the value where reading is pure, because you know there are no more writes happening.

You’ve got this period where you can mutate the thing, and then afterwards you can’t mutate it. You can express that like this. It uses higher-kinded polymorphism, which is why I’ve drifted into functors, which might make it a bit too heavy for practical use. I don’t know; I haven’t tried it yet. Just enough time for this: multihandlers are also something I’d like to look at in the future. They’re quite fun. Here, you run not one computation but two computations side by side, and your handler is allowed to respond to effects from both of them at the same time. This one here is pipe. What we’re going to do is run F and run G. When F says send and G says receive, I’m just going to take the thing from F and pass it to G. And then when they’re both done, I’m just going to return the pair of values. Then there are the cases where one says send but the other one’s already done, or one is done and the other says receive, and you can decide how to deal with those. But yeah, that ability to run two things side by side I think is quite interesting. Question?
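Since multihandlers are explicitly future work, there is no real syntax for them yet; but the pipe example described might hypothetically look something like this sketch:

```ocaml
(* Hypothetical multihandler syntax: handle two computations at once.
   When f performs Send and g performs Receive, pass the value
   across; when both return, pair up their results.  The omitted
   cases would decide what happens when one side finishes first. *)
let rec pipe f g =
  match f (), g () with
  | effect (Send x) k, effect Receive k' ->
      pipe (fun () -> continue k ()) (fun () -> continue k' x)
  | x, y -> (x, y)
```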

Yeah, I was actually going to ask about multihandlers and I should probably give the caveat that I’m not particularly familiar with the specifics of OCaml’s type system, so the answer to this question might be, that it’s simply impossible. But I’ve wondered if you’ve considered, rather than using subtyping, representing the set of effects in a multihandler as a sigma type? Either through bi-directional typing, which I would imagine is certainly impossible in this context, or there’s a recent POPL paper on [inaudible 01:13:30] sigma types in a way that it is type preserving. Something along those lines.

It’s worth saying that I’m actually only kind of using subtyping here. I’m mostly getting away with using row polymorphism, which is a [inaudible 01:13:46], and the inference is very much still in the realm of the principal, so we’re not in the… I’d be reluctant to move to bi-directional type checking or anything like that. One of OCaml’s major strengths is that its inference is extremely strong, so I would…

Right, you do lose inference for some things when you go in that direction…

Yeah. We should talk about it after, because I’m going to need more than 20 seconds to decide about that, because I’m not entirely sure why I would want sigma types there, but it doesn’t mean that I wouldn’t. I just haven’t had someone explain it to me.

And multihandlers… you can also implement state in a nicer way with multihandlers. This is just a silly example. Let’s just end it… We’re ending on that one. Obviously we were asking questions as we went, but does anyone have any questions they want to ask at the end now?