Let us now examine a more flexible construct for describing parallel computation. Consider the invocation parallel(e1, e2). It computes e1 and e2 in parallel and returns their results as values v1 and v2. Using a new construct, which we call task, we can describe the same computation as follows. We create a task t1 that computes e1, and then a task t2 that computes e2. Creating a task spawns its computation and immediately returns control to us, so after both tasks have been created, both computations run in the background while we continue. When we need the value of t1, we obtain it with t1.join, and likewise for t2; at that point we have both values, v1 and v2.

In general, the task construct takes a call-by-name parameter e and returns a task t. It proceeds by computing e, so to say, in the background, which means that control returns immediately to the original computation. Once we actually depend on the value of the expression e, we obtain it using t.join. What does t.join do? If the value of e has not been computed yet, t.join blocks and waits until the value is available. If the value is already there, t.join returns it immediately; consequently, if we call t.join multiple times, subsequent calls return quickly.

Here is a minimal definition of the interface for tasks. As mentioned, task takes a call-by-name parameter, call it c, and returns a value of type Task[A]. The important thing about Task[A] is that it has a method join that returns a value of type A. The constructs task and join thus establish a correspondence between computations and the tasks that perform them: if we create a task with task(e) and later join it, the value we obtain should be the same as the result of evaluating e directly. The benefit, of course, is that the computation of task(e) proceeds in parallel.
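The interface just described can be sketched in Scala as follows. This is a minimal illustration, not the course's actual implementation: it assumes one plain JVM thread per task (a realistic implementation would schedule tasks on a thread pool), and the names lock, result, and worker are our own.

```scala
// Minimal sketch of the Task interface and the task construct.
// Assumption: one fresh JVM thread per task; a real implementation
// would use a thread pool instead.
trait Task[A] {
  def join: A  // block until the value is available, then return it
}

def task[A](c: => A): Task[A] = new Task[A] {
  private val lock = new Object
  private var result: Option[A] = None
  // spawn the computation of c in the background and return immediately
  private val worker = new Thread {
    override def run(): Unit = {
      val v = c
      lock.synchronized { result = Some(v); lock.notifyAll() }
    }
  }
  worker.start()
  def join: A = lock.synchronized {
    while (result.isEmpty) lock.wait()  // not computed yet: block and wait
    result.get                          // already computed: return quickly
  }
}
```

With this sketch, task(e).join evaluates to the same value as e, and repeated calls to join on the same task return the cached result without recomputing it.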
If we wanted to omit writing join in some cases, we could also define an implicit conversion: getJoin takes a task and automatically applies the join method to it.

Let us see how to express one of the patterns we have seen before using task. We have seen that to compute four segments in parallel, we can use the parallel construct: we first define a computation that computes two segments in parallel, then another that computes the remaining two segments in parallel, and then combine these two computations with another parallel. This returns a nested pair of pairs containing the four values we are interested in. We can then sum up these four values and raise the result to the power 1/p, or whatever we are interested in. Essentially the same computation can be expressed using task as follows. We define four tasks, t1, t2, t3, and t4, one per segment; they run in parallel in the background, and then we take their values and sum them up. Here, each occurrence of t1, t2, t3, and t4 in the sum is in fact t1.join, t2.join, and so on, so we do not need to wait for a value until we actually need it for some subsequent computation.

We have now seen two different constructs for defining parallel computation: parallel and task. A natural question is whether we can define one of them using the other. In particular, can we implement the previous parallel construct if we are given task? Recall the signature of parallel: it takes two call-by-name parameters. Can we define its body ourselves using task? Here is one solution. Since we want to compute cA and cB in parallel, we take one of them, in this case cB, and start a task that computes it in parallel.
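The four-task pattern above can be sketched as follows. Both the compact Task implementation (backed here by java.util.concurrent.FutureTask, one thread per task) and the helper sumSegment are our own assumptions for the sake of a runnable example; the course's actual library and segment boundaries may differ.

```scala
import java.util.concurrent.FutureTask

// Compact stand-in for the task construct (an assumption):
// FutureTask runs c on its own thread; join delegates to get().
class Task[A](c: => A) {
  private val fut = new FutureTask[A](() => c)
  new Thread(fut).start()
  def join: A = fut.get()
}
def task[A](c: => A): Task[A] = new Task(c)

// The implicit conversion that lets us omit .join where a value is expected.
implicit def getJoin[A](t: Task[A]): A = t.join

// Hypothetical helper: sums |a(i)|^p over the half-open range [s, t).
def sumSegment(a: Array[Int], p: Double, s: Int, t: Int): Double = {
  var i = s; var sum = 0.0
  while (i < t) { sum += math.pow(math.abs(a(i)), p); i += 1 }
  sum
}

// Four segments, each computed by its own task running in the background;
// the joins happen only when the values are needed for the sum.
def pNorm(a: Array[Int], p: Double): Double = {
  val m1 = a.length / 4; val m2 = a.length / 2; val m3 = 3 * a.length / 4
  val t1 = task { sumSegment(a, p, 0, m1) }
  val t2 = task { sumSegment(a, p, m1, m2) }
  val t3 = task { sumSegment(a, p, m2, m3) }
  val t4 = task { sumSegment(a, p, m3, a.length) }
  // with getJoin in scope, writing t1 + t2 + t3 + t4 here would also work
  math.pow(t1.join + t2.join + t3.join + t4.join, 1 / p)
}
```

Note that, unlike the nested parallel version, this formulation returns four flat task handles rather than a nested pair of pairs.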
Having started that task, we immediately continue with our own computation in the thread of this function, and compute cA directly. Since cA is a call-by-name parameter, referring to it here evaluates it, and we store the result in tA. The fact that tB is a task whereas tA is just a value reflects the fact that by the time we use tA, it has already been computed. We return tA as the first component of our pair, and for the second component we call tB.join to obtain the actual value of cB.

Now suppose that we had attempted to define parallel in a slightly different way. That definition is wrong. Can you see what is wrong with it? In particular, does it type check? And if it does type check, does it behave as expected? It turns out that even though this alternative definition compiles, it does not give us the benefits of parallelization. In the correct definition of parallel, we spawn the task for cB and at the same time compute cA, so A and B proceed in parallel. In the wrong version, parallelWrong, as soon as we have started the task, we immediately call .join on it. As a result, we wait for cB to be computed before cA even starts, just as if we had written cB there without any task and join at all. So parallelWrong does not, in fact, achieve parallel computation of its two parameters cA and cB.
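The two definitions can be sketched side by side. The compact Task class below is again our own stand-in (built on java.util.concurrent.FutureTask, one thread per task), not the course's library; what matters is the placement of .join in the two bodies.

```scala
import java.util.concurrent.FutureTask

// Compact stand-in for the task construct (an assumption).
class Task[A](c: => A) {
  private val fut = new FutureTask[A](() => c)
  new Thread(fut).start()
  def join: A = fut.get()
}
def task[A](c: => A): Task[A] = new Task(c)

// Correct: cB runs in a background task while cA runs on the current thread.
def parallel[A, B](cA: => A, cB: => B): (A, B) = {
  val tB: Task[B] = task { cB }  // spawn cB and continue immediately
  val tA: A = cA                 // evaluate cA here, overlapping with cB
  (tA, tB.join)                  // join only after cA is done
}

// Wrong: joining right away blocks until cB finishes before cA even starts,
// so the two computations never overlap.
def parallelWrong[A, B](cA: => A, cB: => B): (A, B) = {
  val tB: B = (task { cB }).join
  val tA: A = cA
  (tA, tB)
}
```

Both versions type check and return the same pair of values; the difference is purely in timing, which is why the bug in parallelWrong is easy to miss.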