Here is a very important way of constructing associative operations. We start from two associative functions, f1 and f2. F1 is a binary operation on elements of type A1, and f2 is a binary operation on the type A2. We then define a function f that operates on pairs of elements from A1 and A2, and is also a binary operation: it takes two arguments, each a pair, and returns a pair. How do we define it? There is, pretty much, only one natural definition given the signatures of f1, f2, and f. Given an argument (x1, x2) and an argument (y1, y2), we construct the pair whose first component is computed by applying f1 to x1 and y1, and whose second component is computed by applying f2 to x2 and y2.

We now claim that f is also associative. Here is the rather straightforward, if somewhat long, derivation. We start with the left-hand side of the associativity law, where f is nested in the appropriate way. We first expand the definition of the inner f into applications of f1 and f2. The outer f then applies to two arguments, each of which is written as a pair, so we can expand that definition as well. At this point we have expressed f entirely in terms of applications of f1 and f2, and we can use the fact that f1 and f2 are associative: we change the nesting of applications of f1 to match the right-hand side of the associativity law, and perform a similar transformation for f2. We then apply the definition of f backwards. We recognize that what we have is an application of f to the argument (x1, x2) and to another argument, which is in turn an application of f to the pairs (y1, y2) and (z1, z2). In the end we obtain the shape of the right-hand side of the associativity law. So we have shown that f defined in this way is associative whenever f1 and f2 are associative. Here is a simple application of this observation.
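This construction can be sketched in a few lines of Python (the helper name `make_pair_op` and the sample operations are mine, not from the lecture):

```python
def make_pair_op(f1, f2):
    """Combine two associative binary operations component-wise on pairs."""
    def f(x, y):
        x1, x2 = x
        y1, y2 = y
        return (f1(x1, y1), f2(x2, y2))
    return f

# Spot-check associativity using two associative operations: + and max.
add_max = make_pair_op(lambda a, b: a + b, max)
x, y, z = (1, 5), (2, 3), (4, 9)
assert add_max(add_max(x, y), z) == add_max(x, add_max(y, z)) == (7, 9)
```

An assertion like this only spot-checks one triple of arguments, of course; the derivation above is what establishes associativity for all inputs.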
Suppose we use 32-bit numbers to represent the numerator and denominator of a rational number. We can then define multiplication on such pairs by simply multiplying the numerators and multiplying the denominators, according to this function definition. Because we have applied multiplication to both components of the pair, and multiplication is associative even modulo 2^32, the resulting function times is also associative.

A prototypical example of applying the reduce operation is computing the sum of the elements in a collection. Suppose that we instead want to compute the average of the elements. How could we do that? One way is to first compute the sum by reducing the collection with the plus operation, and then compute the length. If we do not have a separate way of computing the length, one way to obtain it is to map the collection into a collection of constant elements, a collection consisting of a one for every element of the original collection, and then compute the sum of that collection. The resulting collection has as many ones as there were elements in the original collection, so its sum gives us the length, and we can divide the sum by the length to obtain the average. This computation invokes reduction twice. Is there a similar solution that uses a single reduction? Can you find it? The idea is to use an associative operation on pairs: the first component of the pair computes the sum of some subset of the elements in the collection, and the second one counts the number of elements. It turns out we can combine both the sum and the length (the number of elements) using the plus operation on each component, so the resulting operation is associative. To get the effect of computing both the sum and the length, we map the collection by taking each element x to the pair (x, 1); the 1 is there in order to compute the length.
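Returning to the modular multiplication of fractions defined at the start of this example, here is a minimal Python sketch (the name `times` follows the transcript; the sample values are mine):

```python
M = 1 << 32  # arithmetic wraps around modulo 2**32

def times(x, y):
    """Multiply fractions represented as (numerator, denominator) pairs."""
    (a, b), (c, d) = x, y
    return ((a * c) % M, (b * d) % M)

# Associativity survives the wrap-around, because multiplication
# modulo 2**32 is itself associative.
x, y, z = (2**31 + 3, 7), (5, 2**20 + 1), (2**16 - 1, 11)
assert times(times(x, y), z) == times(x, times(y, z))
```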
Then we reduce the resulting expanded collection with the function f that was defined before: it adds up all the elements x, and it adds up all the 1s. What we obtain in the end is a pair of the sum and the length of the collection, so we can again compute the sum divided by the length.

We have emphasized that commutativity does not imply associativity. However, there are cases where having commutativity can help us establish associativity, provided we have one additional property. Let us define the expression E(x, y, z) to stand for the left-hand side of the associativity law, f(f(x, y), z). We will say that the arguments of E can rotate if E(x, y, z) is equal to E(y, z, x): conceptually, y moves to the first position, z moves to the second position, and x goes to the last position. In other words, the condition we require is f(f(x, y), z) = f(f(y, z), x). Note that this condition has the same shape of application of f on both sides, but the variables change places. Now, we claim that if f is commutative and the arguments of E rotate in this way, then f is in fact also associative. This turns out to be a very simple observation. If we start from the left-hand side of the associativity law and apply the fact that the arguments can rotate, we obtain f(f(y, z), x). Now we just apply the commutativity of f, which we have also assumed, to swap f(y, z) in the first argument position with x in the second. And what have we obtained? The right-hand side of the associativity law, f(x, f(y, z)). So these two properties together imply associativity. Even though the rotation condition may seem difficult to check, it turns out to be easy to recognize visually in many examples.

Let us now define addition on fractions, where both addition and multiplication are again computed modulo 2^32: plus((x1, y1), (x2, y2)) = ((x1·y2 + x2·y1) mod 2^32, (y1·y2) mod 2^32). Given the modular arithmetic, there can be overflows in both the numerator and the denominator, which simply wrap around modulo 2^32.
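The single-reduction average described above can be sketched as follows (Python; `functools.reduce` is sequential, but it is the associativity of the operation that would permit a parallel reduce):

```python
from functools import reduce

def f(x, y):
    # component-wise addition on (sum, count) pairs: associative,
    # because + is associative in each component
    return (x[0] + y[0], x[1] + y[1])

def average(xs):
    s, n = reduce(f, ((x, 1) for x in xs))  # map each x to (x, 1), then reduce
    return s / n

assert average([1, 2, 3, 4]) == 2.5
```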
It may not be obvious whether such a function plus is associative. It turns out that it is, and we will use the previous idea to show it. First, we observe that plus is commutative. Indeed, if we swap the roles of the 1-indexed variables and the 2-indexed variables, the second components are the same because of the commutativity of multiplication, and the first components are also the same because of the commutativity of addition, which holds even with modular arithmetic. Now that we know that plus is commutative, we try to show that the arguments of E rotate. So let us compute E applied to the pairs (x1, y1), (x2, y2), and (x3, y3). By expanding the definition of E, which always has the same shape given f (in this case, plus), we obtain, in several steps, the expression ((x1·y2·y3 + x2·y1·y3 + x3·y1·y2) mod 2^32, (y1·y2·y3) mod 2^32). Now, if we rotate the arguments, we can apply what we have just computed by substituting variables: instead of the 1s we write 2s, instead of the 2s we write 3s, and instead of the 3s we write 1s. This is valid because we expanded the definition of E in general, so it applies equally to these other values that happen to be the rotated versions of the original ones. But now we claim that the result is in fact the same expression. The second components of the pairs are the same because of the properties of multiplication. What about the first components? Each has three summands, and the first summand in the original expression corresponds to the third one in the new expression, the second corresponds to the first, and the third corresponds to the second. So we have shown that rotating the arguments of E gives the same expression, and because plus is also commutative, we obtain that plus is associative.

Here is another example. Suppose that u and v are rational numbers in the open interval (-1, 1). We can define f by the expression f(u, v) = (u + v) / (1 + u·v).
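The whole argument can be spot-checked numerically (a Python sketch; `plus` is the fraction addition defined above, and the helper `E` is the left-hand side of the associativity law from the rotation lemma):

```python
import random

M = 1 << 32

def plus(u, v):
    """Fraction addition on (numerator, denominator) pairs, modulo 2**32."""
    (x1, y1), (x2, y2) = u, v
    return ((x1 * y2 + x2 * y1) % M, (y1 * y2) % M)

def E(x, y, z):
    return plus(plus(x, y), z)  # left-hand side of the associativity law

rng = random.Random(0)
for _ in range(1000):
    x, y, z = [(rng.randrange(M), rng.randrange(M)) for _ in range(3)]
    assert plus(x, y) == plus(y, x)            # commutativity
    assert E(x, y, z) == E(y, z, x)            # the arguments of E rotate
    assert E(x, y, z) == plus(x, plus(y, z))   # hence associativity
```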
This is a nice, symmetrical expression, and it is straightforward to verify that if u and v are in this interval, then the result is also in that interval. In fact, this is not an arbitrary expression: it corresponds to the addition of velocities in the special theory of relativity, for velocities in one dimension normalized to the speed of light. Now, it is easy to see that f defined in this way is commutative, because u and v play entirely symmetrical roles. Let us then compute the value of E with three arguments, so we have the nested application f(f(u, v), w). After combining with w and bringing everything over a common denominator, we obtain (u + v + w + u·v·w) / (1 + u·v + u·w + v·w). Again we see that we obtain an expression in which each of the three variables plays the same role; in some sense, no variable is special. Because of that, if we swap the roles of these variables, we obtain exactly the same expression. In other words, the arguments of E do rotate, and from the rotation together with commutativity we obtain that f is associative. This was arguably slightly easier than checking the associativity law directly.

Now, it is tempting to implement such an operation using floating-point numbers, and we can do that. But we have to keep in mind that floating-point operations are not associative, so our calculations on the previous slide will not hold. Even though each individual step is only slightly off, if we combine many applications of f over a long collection, these errors accumulate, and we can obtain substantially different values for reduce versus, for example, reduceLeft on the same collection. This will not happen if, for example, we use exact rational numbers to represent these values.

Let us now look at a family of operations that are defined using a closure operator. We consider an operation f on sets such that f(A, B) = (A union B)*, where * is a closure operator applied to the union.
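With exact rationals, relativistic velocity addition really is associative, while the same checks can fail with floats. A Python sketch using `fractions.Fraction` (the sample velocities are mine):

```python
from fractions import Fraction

def f(u, v):
    # one-dimensional velocity addition, normalized to the speed of light
    return (u + v) / (1 + u * v)

u, v, w = Fraction(1, 2), Fraction(-1, 3), Fraction(3, 4)
assert f(f(u, v), w) == f(u, f(v, w)) == Fraction(19, 23)  # exact equality
assert f(u, v) == f(v, u)                                  # commutative
assert -1 < f(u, v) < 1                                    # stays in (-1, 1)
```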
By a closure operator we mean an operator that creates a bigger set (A is a subset of A*), that is monotonic (if A is a subset of B, then A* is a subset of B*), and that makes the set in some sense as big as it can be, so that applying the closure operator twice is the same as applying it once; we call this last property idempotence. In practical applications we might not be computing with the sets of elements exactly, but with some discrete representation of them. Examples of such operations, with their corresponding closure operators, are the convex hull of a set of points and the Kleene star for regular expressions. Now, the claim is that f defined in this way is always associative: whenever we define f in this way and the three properties of the closure operator hold. We can immediately observe that f is commutative: f(A, B) equals f(B, A) by the definition of f, since union is commutative. Now, if we look at the left-hand side of the associativity law, we see that it expands to ((A union B)* union C)*, and we claim that this is in fact equal to (A union B union C)*. This is the main thing we need to show, because if we show it, then A, B, and C play entirely symmetrical roles, so the arguments rotate, and together with commutativity we obtain the associativity of f.

To show the equality, we show two subset inclusions. Here is the first one. Obviously, if we have the union of two sets and we add a third set, we get a bigger set, so A union B is a subset of A union B union C; that is a trivial property of naive set theory. We can then use the monotonicity of the closure operator and simply apply * to both sides of this inclusion. On the other hand, if we have the set C, we can add A and B to it and get a bigger set, and if we then apply the closure to this union, we get something even bigger; this uses the first assumption about the closure operator. What have we shown now? We have shown that (A union B)* is included in (A union B union C)*, and we have also shown that C is included in that same set.
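As a concrete closure operator to experiment with, here is a toy Python sketch of my own (the "interval hull" of a finite set of integers, analogous in spirit to the convex hull mentioned above); it satisfies all three properties:

```python
def star(a):
    """Interval hull of a finite set of integers: a closure operator."""
    return frozenset(range(min(a), max(a) + 1)) if a else frozenset()

A, B = frozenset({1, 5}), frozenset({0, 1, 2, 5, 7})
assert A <= star(A)                        # extensive: A is a subset of A*
assert A <= B and star(A) <= star(B)       # monotonic
assert star(star(A)) == star(A)            # idempotent
```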
If one set is included in some bigger set, and another set is also included in it, then the union of the two sets is included in that bigger set as well. So we obtain that (A union B)* union C is a subset of (A union B union C)*. Now we apply monotonicity, applying * to both sides of this subset inclusion: we have a * on the left-hand side, and we also get a * on the right-hand side. But two applications of * collapse into one, by idempotence, one of the properties of the closure operator. So we have in fact shown what we were aiming to show, and this is one side of the inclusion. Let us now show the other side, which is even easier. We want to show that (A union B union C)* is included in the expression where we have additionally applied * to A union B. Intuitively, that should be an even bigger set, since everything is monotonic. Indeed, A union B is a subset of (A union B)* because * makes things bigger. Adding C to both sides preserves the inclusion, and we then just apply * to both sides, obtaining exactly what we were supposed to show. We have now shown both sides of the inclusion, so the property holds: the function f defined by f(A, B) = (A union B)*, where * is a closure operator, is in fact an associative function.
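Putting it together, we can spot-check the associativity of f(A, B) = (A union B)* with a toy closure operator (a Python sketch of mine, using the interval hull of a finite set of integers as the *):

```python
import random

def star(a):
    # interval hull of a finite set of integers:
    # extensive, monotonic, and idempotent, hence a closure operator
    return frozenset(range(min(a), max(a) + 1)) if a else frozenset()

def f(a, b):
    return star(a | b)  # f(A, B) = (A union B)*

rng = random.Random(1)
for _ in range(200):
    A, B, C = [frozenset(rng.sample(range(50), 4)) for _ in range(3)]
    assert f(f(A, B), C) == f(A, f(B, C)) == star(A | B | C)
```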