Let us now look at how we can take the concepts we've learned in this course and apply them to an interesting problem: computing a minimum spanning tree concurrently. Undirected graphs can be used to represent all kinds of networks: roadways, railways, airline routes, and so on. And a spanning tree of an undirected graph is a subset of the graph's edges that forms a tree connecting all of the vertices. So here's one example of a spanning tree, and here's another example of a spanning tree.

Now, some of these links may be more expensive than others. Let's say, for example, these edges have cost 1 and this edge from A to B has cost 2. We can then evaluate the cost of each spanning tree. Over here we would have 1 + 2 + 1, so the cost would be 4. And here we would have 1 + 1 + 1, so the cost would be 3. This second tree happens to be the minimum spanning tree for this graph.

Now the question is: how can we design an algorithm, and in particular a concurrent algorithm, to compute the minimum spanning tree given an undirected graph as input? Well, there's a classical sequential algorithm due to Boruvka that looks something like this. You initialize a list with all the vertices, and you iteratively perform what is called edge contraction, which goes something like this. While the size of the list is greater than 1, that is, while you have more than one vertex in the list, we pick one vertex, let's say N1, from the list. We find the edge E with the minimum cost out of N1, let's say with GETMINEDGE, and then we find the neighbor N2, the vertex that edge E connects N1 to. And then we create a new vertex N3, which is the merge of N1 and N2. So if this algorithm was performed and we picked, for example, vertex C, well, both of its edges have the minimum cost.
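The sequential edge-contraction loop described above can be sketched in Java as follows. This is a minimal sketch: the adjacency-map representation, the "+"-joined vertex names, and helper names like `mstWeight` and `addEdge` are illustrative choices, not from the lecture or any particular library.

```java
import java.util.*;

// A sequential sketch of Boruvka-style edge contraction.
public class BoruvkaSketch {
    // adjacency: vertex -> (neighbor -> edge cost); parallel edges keep the minimum
    static int mstWeight(Map<String, Map<String, Integer>> adj) {
        Deque<String> work = new ArrayDeque<>(adj.keySet()); // the list of vertices
        int total = 0;
        while (work.size() > 1) {
            String n1 = work.poll();                         // pick one vertex N1
            if (!adj.containsKey(n1)) continue;              // defensive: already merged away
            Map<String, Integer> edges = adj.get(n1);
            if (edges.isEmpty()) continue;                   // disconnected component
            // getMinEdge: the neighbor N2 reached by the cheapest edge out of N1
            String n2 = null; int w = Integer.MAX_VALUE;
            for (Map.Entry<String, Integer> e : edges.entrySet())
                if (e.getValue() < w) { w = e.getValue(); n2 = e.getKey(); }
            total += w;                                      // this edge joins the spanning tree
            // merge N1 and N2 into a new vertex N3
            String n3 = n1 + "+" + n2;
            Map<String, Integer> merged = new HashMap<>();
            for (String src : List.of(n1, n2))
                for (Map.Entry<String, Integer> e : adj.get(src).entrySet()) {
                    String nb = e.getKey();
                    if (nb.equals(n1) || nb.equals(n2)) continue; // drop self-loops
                    merged.merge(nb, e.getValue(), Math::min);
                }
            adj.remove(n1); adj.remove(n2);
            // redirect the neighbors' edges to point at N3
            for (String nb : merged.keySet()) {
                Map<String, Integer> m = adj.get(nb);
                m.remove(n1); m.remove(n2);
                m.merge(n3, merged.get(nb), Math::min);
            }
            adj.put(n3, merged);
            work.remove(n2);                                 // remove N2, insert N3
            work.add(n3);
        }
        return total;
    }

    static void addEdge(Map<String, Map<String, Integer>> g, String u, String v, int w) {
        g.computeIfAbsent(u, k -> new HashMap<>()).put(v, w);
        g.computeIfAbsent(v, k -> new HashMap<>()).put(u, w);
    }

    public static void main(String[] args) {
        // A graph shaped like the lecture's example: one cost-2 edge, three cost-1 edges
        Map<String, Map<String, Integer>> g = new HashMap<>();
        addEdge(g, "A", "B", 2);
        addEdge(g, "A", "C", 1);
        addEdge(g, "C", "D", 1);
        addEdge(g, "B", "D", 1);
        System.out.println(mstWeight(g)); // prints 3, the minimum spanning tree cost
    }
}
```

Each iteration adds the cheapest edge leaving the picked vertex's component, which by the cut property is always safe to include in a minimum spanning tree.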
So it could pick A and do a merge over here, to get a reduced graph with A and C combined into one vertex. That's what N3 is, and then basically we remove N2 and insert N3. So that completes the merge step.

Now, how would we do this in parallel? Well, because there are multiple threads operating on the graph, we have to look out for the fact that two threads may collide on the same vertex. For instance, even if two threads started with vertices A and D, they may both end up with C as the neighbor on the minimum-cost edge, and you don't want a buggy program where the algorithm tries to merge C with A and C with D at the same time. To avoid that, there are a number of mechanisms that could be used. We could use isolation, but let me illustrate it using locks. What we would try to do is lock N1, so over here we would have a tryLock on N1. If that returns false, we just continue and retry the loop, and there has to be some fix-up: for example, you have to reinsert N1 back into the list. Similarly, over here you try to lock N2, and if that fails, then there too you need to do some fix-up and then continue.

Once you've done the locking correctly and you have an appropriate concurrent data structure (a ConcurrentLinkedQueue is a great candidate for the work list), you've got these fine-grained locks, and you can have multiple threads attempt merges in parallel. What you typically see is that if you have a very large graph, say with a million vertices, and you're running on 16 cores, initially there won't be many collisions. You may be lucky: the edges being contracted are non-interfering, and each thread can acquire its locks. But as the graph gets merged and you have fewer and fewer vertices, you're going to have more collisions on the tryLock, meaning there are going to be more cases when tryLock returns false.
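One way the tryLock-based parallel version might look in Java is sketched below. This is a sketch under assumptions not spelled out in the lecture: the graph is connected, the work list is a ConcurrentLinkedQueue as suggested, and, rather than rewriting neighbors' adjacency maps during a merge (which would race with other threads), stale edge endpoints are resolved lazily through a `parent` map; that design choice, and all the class and method names, are illustrative only.

```java
import java.util.*;
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.locks.ReentrantLock;

// Parallel Boruvka sketch: per-vertex tryLock with reinsert-and-retry fix-up.
public class ParallelBoruvka {
    // Live vertices only; edge targets may be stale names, resolved via `parent`.
    static final ConcurrentHashMap<String, Map<String, Integer>> adj = new ConcurrentHashMap<>();
    static final ConcurrentHashMap<String, String> parent = new ConcurrentHashMap<>();
    static final ConcurrentHashMap<String, ReentrantLock> locks = new ConcurrentHashMap<>();
    static final ConcurrentLinkedQueue<String> work = new ConcurrentLinkedQueue<>();
    static final AtomicInteger liveCount = new AtomicInteger();
    static final AtomicInteger total = new AtomicInteger();

    // Follow merge links to the vertex that currently represents v.
    static String find(String v) {
        String p;
        while ((p = parent.get(v)) != null) v = p;
        return v;
    }

    static void worker() {
        while (liveCount.get() > 1) {
            String n1 = work.poll();
            if (n1 == null) continue;                 // queue transiently empty; retry
            if (parent.containsKey(n1)) continue;     // stale entry: already merged away
            ReentrantLock l1 = locks.get(n1);
            if (!l1.tryLock()) { work.add(n1); continue; }  // collision: fix up, retry
            try {
                if (parent.containsKey(n1)) continue; // merged while we were waiting
                // getMinEdge: cheapest edge out of N1, resolving stale endpoints
                String n2 = null; int w = Integer.MAX_VALUE;
                for (Map.Entry<String, Integer> e : adj.get(n1).entrySet()) {
                    String t = find(e.getKey());
                    if (!t.equals(n1) && e.getValue() < w) { w = e.getValue(); n2 = t; }
                }
                if (n2 == null) continue;             // N1 already spans its whole component
                ReentrantLock l2 = locks.get(n2);
                if (!l2.tryLock()) { work.add(n1); continue; } // fix-up: put N1 back
                try {
                    if (parent.containsKey(n2)) { work.add(n1); continue; } // N2 went stale
                    total.addAndGet(w);               // edge (N1, N2) joins the tree
                    String n3 = n1 + "+" + n2;        // merge N1 and N2 into N3
                    Map<String, Integer> merged = new ConcurrentHashMap<>();
                    for (String src : List.of(n1, n2))
                        for (Map.Entry<String, Integer> e : adj.get(src).entrySet())
                            merged.merge(e.getKey(), e.getValue(), Math::min);
                    locks.put(n3, new ReentrantLock());
                    adj.put(n3, merged);
                    adj.remove(n1); adj.remove(n2);
                    parent.put(n1, n3); parent.put(n2, n3);
                    liveCount.decrementAndGet();
                    work.add(n3);                     // insert N3 into the work list
                } finally { l2.unlock(); }
            } finally { l1.unlock(); }
        }
    }

    static void addEdge(String u, String v, int w) {
        adj.computeIfAbsent(u, k -> new ConcurrentHashMap<>()).put(v, w);
        adj.computeIfAbsent(v, k -> new ConcurrentHashMap<>()).put(u, w);
        locks.computeIfAbsent(u, k -> new ReentrantLock());
        locks.computeIfAbsent(v, k -> new ReentrantLock());
    }

    public static void main(String[] args) throws InterruptedException {
        // Same small example graph as before
        addEdge("A", "B", 2); addEdge("A", "C", 1);
        addEdge("C", "D", 1); addEdge("B", "D", 1);
        liveCount.set(adj.size());
        work.addAll(adj.keySet());
        Thread[] ts = new Thread[4];
        for (int i = 0; i < ts.length; i++) { ts[i] = new Thread(ParallelBoruvka::worker); ts[i].start(); }
        for (Thread t : ts) t.join();
        System.out.println(total.get()); // prints 3 for this graph
    }
}
```

Note how both failure paths do exactly the fix-up described in the lecture: the thread reinserts N1 into the work list and retries, rather than blocking while holding a lock, which is what keeps the scheme deadlock-free.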
And so sometimes, if you want to get better performance from your algorithm, you may switch to a different, less optimistic algorithm at that point. So what we've done is shown how the principles of optimistic concurrency, combined with concurrent data structures, can enable us to implement very interesting algorithms, such as a concurrent implementation of minimum spanning tree by parallelizing Boruvka's algorithm. And with these building blocks, you're well on your way to implementing your own concurrent algorithms.