Hi everyone. Today's topic is first-order representation of continuations. Previously we discussed continuations as values — we called those first-class continuations — and now we are going backwards, in the sense that we are representing continuations in a first-order manner. What does that mean? The relevant part of the textbook is Chapter 17, First-Order Representation of Continuations.

Before diving into the details of this first-order representation, let's begin with a recursive function named sum. Why recursive functions now? We will see. This function is named sum. It takes one integer and produces another integer. The comment says that sum(n) means n plus n minus one plus dot dot dot plus zero — so from n down to zero, decrementing by one. If n is less than or equal to zero, we just return zero. Otherwise we recursively call the same function sum with n minus one, get the result, and add n to it. Okay, very simple.

Then, when you call this function with the argument four, what is the result going to be? Four plus three plus two plus one plus zero. Yeah. Right. But what if we call this function with a large number, like one million? What's going to happen? Let's see. This is just an example run of the same function definition: we call the function sum with the argument four and, as I said before, four plus three plus two plus one plus zero is ten. So simple. But when you call this function with a large number like one million, what happens? A stack overflow error. If you have a large memory and you don't get the stack overflow error, then make the number larger. You are going to get a stack overflow error at some point, because you have a finite memory in your computer.

So why does that happen? Because when we call sum(4), we call sum(3) recursively; when we evaluate sum(3), we call sum(2), and so on — we push function-call frames onto the call stack, on top of the main function. Okay. In the beginning we have just the main function on the call stack. We call sum(4) and push that onto the call stack. Then we need to call sum(3), but we have to remember that we are going to get the result from sum(3) and add four to it. Remember that, and push sum(3) onto the call stack, and so on. The stack grows until we get to sum(0), which makes no more recursive calls, and then we come back — pop a call-stack frame — and come back again — pop the next frame — and so on. The stack grows and shrinks as we call functions and return from them. That's how function calls, and recursive function calls in particular, work. That's why we get a stack overflow error, and we do not like that. It's such a simple function; we'd like to avoid this situation. How can we do that?

Let's see. Here's another version of the sum function. How does it work? What does it do? This sum function takes one more argument in addition to the input n: an additional argument called acc, for accumulator. When the number n is less than or equal to zero, we just return acc, the second argument. Otherwise we call sum recursively with n minus one and — here's the highlight — add n to the accumulator.
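Here is a minimal Scala sketch of the two versions just described. The exact names and signatures are mine (the two definitions are wrapped in an object so they can share the name sum), so the textbook's code may look slightly different:

```scala
object Sums {
  // Naive recursive version: sum(n) = n + (n - 1) + ... + 0.
  // Every call must wait for the recursive call's result before adding n,
  // so a large argument such as 1000000 eventually raises a StackOverflowError.
  def sum(n: Int): Int =
    if (n <= 0) 0
    else sum(n - 1) + n

  // Accumulator version: the running total travels along in acc,
  // so nothing is left to do after the recursive call.
  def sum(n: Int, acc: Int): Int =
    if (n <= 0) acc
    else sum(n - 1, acc + n)
}

// Sums.sum(4)    == 4 + 3 + 2 + 1 + 0 == 10
// Sums.sum(4, 0) == 10 as well, without growing the call stack
```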
Can you see this? When we call this function sum, we either return the second argument as the result, or we call the function recursively with n minus one and add n to the second argument. That means we do not need to come back to this point after making the recursive call — after getting the result of the recursive call. Can you see that? One more time: in the previous version, when we made the recursive call, we had to add n to its result, which we did not have yet, right? That's why we had to push a call-stack frame, wait for the result, pop the frame, and then add n to the result. That was the previous version. But with this version, we only add n to the second argument, so when we get to the final step we just return the second argument, because every number so far has been accumulated there. We do not need to come back here. Wow, that's amazing, right? Can you see that?

This is called the tail-recursive version of the previous function. Tail recursive means that the last thing the function body does is call itself, and that call does not need to come back to its caller. Actually it could come back, but there would be nothing left to do with the result — so why would you want to come back? Right. So you do not need to come back to the caller, and the Scala compiler converts the tail call into a loop. Because it does not need to come back, it can avoid stack overflow. Isn't that cool?

So, in the previous version we had to grow and then shrink the call stack: as long as we call functions recursively, we push call-stack frames, so the stack grows and grows until we reach the end of the computation and then pop the frames. But with this tail-call version — the tail-recursive version — when we call sum(4, 0), what do we do? We call the recursive function as sum(3, ...), adding four to the accumulator; the next recursive call adds three to it, and the next call adds two, and so on. When we get to zero, we do not need to come back; we just return the accumulated number, ten. So the call stack does not grow. That's how we can avoid the stack overflow error. That's cool, right? This is called tail-call optimization: instead of growing the stack with recursive function calls, we use only a constant amount of stack frames. When we use recursive functions a lot — usually in functional languages rather than imperative languages — compilers have tail-call optimization so that they can use the stack space efficiently. Okay, so far so good? Yes.

So now, what we'd like to do is make our continuation implementation more efficient, or more practical. The motivation of today's lecture: we have learned how to implement interpreters in this course, right? However, our implementation is too high level. It is in Scala — too high level. I agree that everything is a function, in math, right? We explained the semantics with functions and operational semantics. They are high level, intuitive, easy to understand, but they are not really what computers do. Computers do not have functions or mathematics. They know only zero and one; everything is in binary.
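As a small aside that the lecture does not show explicitly: Scala's standard @tailrec annotation asks the compiler to verify that the marked call really is a tail call, so you get a compile error instead of silently stack-growing recursion. A sketch, using the accumulator version from above:

```scala
import scala.annotation.tailrec

// Rejected by the compiler if the recursive call were not in tail position.
// Because it is, the recursion is compiled into a loop and the stack stays constant:
//   sumAcc(4, 0) -> sumAcc(3, 4) -> sumAcc(2, 7) -> sumAcc(1, 9) -> sumAcc(0, 10) -> 10
@tailrec
def sumAcc(n: Int, acc: Int): Int =
  if (n <= 0) acc
  else sumAcc(n - 1, acc + n)
```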
So all those awesome, beautiful first-class continuations may not be representable as they are in the computer and in memory, right? We saw the recursive version of sum, which was very mathematical and intuitive, but it was not practical, because just one million calls gave us a stack overflow. In order to avoid that disaster we had to rewrite the sum function to take an extra argument — or our compiler should know how to optimize the stack usage of recursive calls efficiently. So let's try to build such an implementation.

Today we are going to revise our interpreters to use only low-level concepts — numbers and gotos — instead of high-level concepts like functions and mathematics. Let's see. This is a journey to the machine world: instead of that high-level mathematical world of functions and operational semantics, we are going to pull everything down to a low level. Step one: we are going to use a simpler representation of continuations instead of functions. Remember, continuations were functions in the previous implementation; in today's implementation we are going to use a non-function representation of continuations — you will see. From the next lecture we are going to replace symbols, meaning names, with numbers, because everything is binary, and then we are going to replace function calls with gotos. Really? Yeah, in the next lecture. After that we can go on: replace data types with lists, replace pairs with memory allocations, and introduce deallocation and garbage collection and all that. We can discuss those in the lectures after that. In this lecture we are going to talk about only step one: a simpler representation of continuations.

This is the previous version — the old version — building continuations out of functions. As I said, the previous continuation type Cont was a function from a value to a value: it takes a value and produces a value. And this interp function takes three arguments: an expression to evaluate, an environment to use for looking up names, and the continuation k — the work to do after evaluating the expression. So when we interpret an addition expression, we call interp with the first sub-expression and the environment, and we pass this continuation — remember? And for the second sub-expression, we call interp with the second sub-expression, the given environment, and what? The second continuation. When we have the first sub-expression's value and the second sub-expression's value, we add them and call k. That was the previous implementation.
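For reference, here is a rough sketch of that function-based version for the addition case, with a minimal AST so it is self-contained; the names (Expr, Num, Add, Id, NumV, numAdd) follow the style of the earlier lectures, but the textbook's code may differ in detail:

```scala
// Minimal AST and value definitions, roughly as in earlier lectures.
sealed trait Expr
case class Num(n: Int)           extends Expr
case class Add(l: Expr, r: Expr) extends Expr
case class Id(name: String)      extends Expr

sealed trait Value
case class NumV(n: Int) extends Value

type Env  = Map[String, Value]
type Cont = Value => Value        // previous version: a continuation is a function

def numAdd(l: Value, r: Value): Value = (l, r) match {
  case (NumV(a), NumV(b)) => NumV(a + b)
}

def interp(e: Expr, env: Env, k: Cont): Value = e match {
  case Num(n)    => k(NumV(n))
  case Id(x)     => k(env(x))
  case Add(l, r) =>
    interp(l, env, lv =>          // first lambda: receives the left value
      interp(r, env, rv =>        // second lambda: receives the right value
        k(numAdd(lv, rv))))       // add them, then hand the sum to k
  // Sub, Fun, App are handled the same way in the full interpreter.
}
```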
And now, what are we going to do today? What do we mean by a simpler representation of continuations? It's just a type called Cont. How do we define it? You will see in a minute. In this version we are not going to make a function. So this continuation k is not a function but a value of that type, and the type has some variants, right? When we evaluate an addition, we first call the interp function with the first sub-expression and the environment, and some value representing the continuation. We call that AddSecondK, and it captures three pieces of information: the second sub-expression, the environment, and the work to do afterwards. So far so good? Yeah, it's just a simpler representation of the previous thing. Do you see that? In the previous version we had the second sub-expression, the environment, and the work to do, packaged as a function. In the new version we call interp with the first sub-expression, the environment, and the work to do — not as a lambda expression but as data, namely AddSecondK. Previously, when we captured this work to do as a lambda, it had to mention the second sub-expression, the environment, and what? The work to do afterwards. In the new version we capture those as fields: the first field is the second sub-expression, the second field is the environment, and the third is the continuation to run afterwards. So far so good? Yeah, let's see more examples.

In the previous version, because a continuation is a function, when we are done with the evaluation of an expression we call k with the value. How about in the new version? In the new version, because a continuation is not a function, we cannot call k. Then what can we do? Instead of calling k, we are going to call some function named continue with that k. Do you see the difference? In the previous version, because k is a function — this lambda — we just call it directly, right away. But in this new version, because k is not a function but data, we call the function continue, passing k and the value as arguments, right? Yes.

Then how can we connect them? How can we use them? We can write the interp function this way. Remember, in the implementation of an interpreter we had largely two cases. The first case is where we make a value immediately and then do the next thing: a number gives NumV, a function expression gives a closure CloV, and for an identifier we look up the name in the environment. When we have those values, in the previous version we called k with them, right? The second case is when we interpret an expression that does not give a value right away because it has sub-expressions, like addition, subtraction, and function application. In that case we recursively call interp, passing the work to do next as a continuation. That was the previous version, and it remains the same in this version as well; only the representation of continuations is different. In this version, when we have a value — a NumV, a CloV, or the value of an identifier from the environment — what we do is call continue with the continuation and the value. Do you see that? In the previous version we would just call k with this NumV; in this version, because k is not a function, we continue with k and the value. In the previous version, when we had an expression with sub-expressions, we called interp with a sub-expression and some lambda expression representing the continuation; in this version, because k is not a lambda expression — not a function — we make another value to represent that lambda, that work to do. Okay, so far so good?

So what does the continue function look like? This is the definition of the continue function. As we said, it takes a continuation and a value. When the continuation is MtK — do you see the pun? MtK, "empty" K [LAUGH] — it means there is nothing more to do with this value. We are done; just return it. When the continuation is AddSecondK, we know that AddSecondK means: evaluate the second sub-expression of the addition using this environment, and then do k, right? So what we do is interpret that second sub-expression under that environment, and what is the work to do next? That is DoAddK — yet another continuation — which is going to use the value of the first sub-expression and this k, the work to do after the addition. So far so good? Yes, this is a simpler version.
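Putting the pieces for addition together, here is a rough sketch of how the new interp and continue cooperate, reusing the Expr, Value, Env, and numAdd definitions from the sketch above but replacing the function-based Cont with a datatype. Only the addition-related variants are shown, and the textbook's code may differ in detail:

```scala
// Continuations are now data, not functions.
sealed trait Cont
case object MtK extends Cont                                    // nothing left to do
case class AddSecondK(r: Expr, env: Env, k: Cont) extends Cont  // still must evaluate r, then k
case class DoAddK(lv: Value, k: Cont) extends Cont              // left value known: add, then k

def interp(e: Expr, env: Env, k: Cont): Value = e match {
  // Case 1: the value is immediate, so continue with it.
  case Num(n)    => continue(k, NumV(n))
  case Id(x)     => continue(k, env(x))
  // Case 2: there are sub-expressions, so record the remaining work as data.
  case Add(l, r) => interp(l, env, AddSecondK(r, env, k))
  // Sub, Fun, App follow the same pattern with their own variants.
}

def continue(k: Cont, v: Value): Value = k match {
  case MtK                   => v                              // done: return the value
  case AddSecondK(r, env, k) => interp(r, env, DoAddK(v, k))   // now evaluate the right side
  case DoAddK(lv, k)         => continue(k, numAdd(lv, v))     // add, then keep going
  // SubSecondK, DoSubK, AppArgK, DoAppK go here once you add them.
}

// Example: interp(Add(Num(1), Num(2)), Map(), MtK) evaluates to NumV(3).
```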
Then previously, as I said before, when we had work to do, we represented it as a lambda expression: the continuation was a function; it was just this lambda expression. Then what is it going to be in this new version? It's going to be a datatype. And as we saw with some examples, this Cont is a new type for continuations. We saw MtK, right? — denoting the top-level identity continuation — and AddSecondK with the second sub-expression, an environment, and the work to do next.

So how can we construct these data? We can count every lambda used to generate a continuation — remember? — and add the corresponding variant to Cont. We counted the lambdas for addition: there were two, one and two. So we generate a continuation datum for each of those lambda expressions: AddSecondK and DoAddK. And each datum has what? Three pieces of information or two pieces of information, depending on the free variables of its lambda. Okay, so that's how we can define these data. In the previous version, if you remember, we had this first lambda expression — this is going to be AddSecondK — and the second lambda expression — this is going to be DoAddK. What do we mean by free variables? Here rv is bound. So what are free? lv and k. That's why DoAddK has two fields. How about the other one? Yeah, in AddSecondK's lambda, lv is bound; r is free, env is free, and k is free. If you do that, we can make this definition: MtK for the top level, AddSecondK and DoAddK for addition, SubSecondK and DoSubK for subtraction, AppArgK and DoAppK for function application. I strongly recommend you to do this by yourself, just like I did for addition. Okay, enjoy.

Now, a quiz. Consider the following interp using the simple representation of continuations. Yeah, we already saw these definitions. Which of the following is the correct code to evaluate e? What do you think? Yeah, let's see. Here's the interp function. As you can see, the interp function takes an expression, an environment, and a continuation. The environment is just the empty one. What is going to be the top-level continuation — the identity function, MtK with open-close parentheses, MtK by itself, or an empty list? What do you think? Yeah, the answer is MtK. Do you see that? Because we defined MtK without any arguments, it is written just like this. Thank you. [MUSIC]