The next problem we're going to design efficient algorithms for is the Traveling Salesman Problem. Using this problem, we are going to show the main ideas of the dynamic programming technique and the branch-and-bound technique. Recall that the input in the Traveling Salesman Problem is a complete graph with weights on edges, where we usually assume that the weights are non-negative, together with a budget b. Our goal is to find a cycle that visits each vertex exactly once and has total length at most b. It will be convenient for us to assume that the vertices of our graph are numbered 1, 2, and so on up to n, and that the start of the cycle, and also its end, is at vertex 1. To give a toy example, consider the following graph with five vertices. Already for this small graph it is not so easy to find an optimal cycle. There is, for example, a cycle going from 1 to 2 to 3 to 4 to 5, and back to 1; its total length is 15. There is another cycle whose length is 11, and the optimal cycle in this case has length 9; it is shown here on the slide. Note that a brute force search algorithm just goes through all possible (n-1)! cycles. Why (n-1)!? Well, because initially we start at vertex 1. From vertex 1, there are (n-1) choices where to go next. For each of these (n-1) vertices, there are (n-2) choices where to go next ((n-2) because we cannot go back to 1), and so on. So it is (n-1) times (n-2) times (n-3) and so on, which gives us (n-1) factorial. A running time of (n-1) factorial, or roughly n factorial, is extremely bad: it grows even faster than 2 to the n. Already for n equal to 10 it is infeasible, not to say for n equal to 100, for example. So our goal in this lecture is to design an algorithm whose running time is roughly n squared times 2 to the n. It is, of course, an exponential algorithm.
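For concreteness, the brute force search just described can be sketched in Python as follows. This is only an illustration, not code from the lecture: the function name and the adjacency-matrix representation of the graph are my own choices.

```python
from itertools import permutations

def tsp_brute_force(d):
    """Try all (n-1)! cycles through vertices 0..n-1, starting and ending at 0.

    d is assumed to be an n-by-n matrix of non-negative edge weights
    (an assumption of this sketch, not stated in the lecture).
    """
    n = len(d)
    best = float("inf")
    # Fix vertex 0 as the start; permute the remaining n-1 vertices.
    for perm in permutations(range(1, n)):
        tour = (0,) + perm + (0,)
        length = sum(d[tour[k]][tour[k + 1]] for k in range(n))
        best = min(best, length)
    return best
```

Even for this straightforward approach, note that the number of iterations of the loop is (n-1)!, which is exactly why it stops being usable around n = 10.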
But it is at least much better than going through all possible candidate solutions; it is much better than n factorial. In particular, if we consider the fraction n factorial divided by 2 to the n times n squared, then already for n = 100 this fraction is about 10 to the 120. This means that already for n equal to 100, the algorithm that we're going to design is roughly 10 to the 120 times faster than a brute force search solution. As we've mentioned already, we're going to apply the dynamic programming technique to design a faster-than-brute-force algorithm for this problem. Since we are going to use dynamic programming, instead of solving one large problem, we are going to solve a collection of smaller subproblems. Usually the subproblems that we solve represent some partial solutions. And what is a good partial solution when we are looking for a cycle that visits each vertex exactly once? Well, it is some initial part of a cycle. When we have such an initial part, we need to try to extend it somehow, and what information do we need to be able to extend it? At the very least we need to know the last vertex of this initial part of the cycle, and we also need to know the set of all vertices that we have already visited. So we know the last vertex of our path and we know the vertices that were already visited. Then we can consider all possible extensions of this path and select the best one. This way we will eventually construct an optimal cycle. So let's formalize this idea. For a subset of vertices S containing some vertex i and also the vertex 1, we denote by C(S, i) the length of the shortest path that starts at the vertex 1, ends at the vertex i, and visits all vertices from the set S exactly once.
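As a quick sanity check of this speedup claim, one can compute the fraction exactly, since Python's integers are unbounded:

```python
import math

n = 100
# Exact integer value of n! / (2^n * n^2); the floor does not change its
# number of digits.
ratio = math.factorial(n) // (2**n * n**2)
print(len(str(ratio)))  # 124 decimal digits, so the ratio exceeds 10**120
```

So the "about 10 to the 120" figure is, if anything, a slight understatement of the speedup.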
Pictorially, such a path goes from the vertex 1 to the vertex i, the vertices that it visits are exactly those of the set S, and it visits each of them exactly once. In particular, we assign C({1}, 1) = 0, because this is just an empty path. And for any other set S, that is, of size at least 2, we assign C(S, 1) = +infinity, because a path visiting more than one vertex cannot end at the vertex 1. Then we need to find a recurrence relation expressing C(S, i) through solutions of smaller subproblems. For this, consider the following: we need to go from vertex 1 to vertex i and visit all the vertices from the set S, and we would like to find a shortest possible such path. Consider the second-to-last vertex on this path; call it j. So the last step in the optimal path is from j to i. What can we say about the remaining part of the path? First of all, it must itself be a shortest possible path from the vertex 1 to the vertex j. But we also know exactly the set of vertices that it visits: it visits the vertices S - {i}, that is, all the vertices except for the last one. This allows us to express the solution for the set S and the vertex i through solutions of smaller subproblems as follows: we go through all possible values of j in S with j not equal to i, and we select the minimum of C(S - {i}, j) plus the length d(j, i) of the edge from j to i. Stated otherwise, an optimal path going through all cities from the set S and ending at the vertex i is some optimal path that goes through all cities except for i and ends at some vertex j, extended with the edge from j to i. We do not know the vertex j in advance, so we just select the minimum over all possible values of j.
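The recurrence and its base cases can be written down almost literally as a memoized recursive function. This is only a sketch to make the recurrence concrete; here vertex 1 is renumbered 0 and sets are represented as frozensets (the more efficient bitmask representation appears later in the lecture):

```python
from functools import lru_cache

def make_C(d):
    """Return C(S, i): the length of the shortest path from vertex 0 to
    vertex i that visits exactly the vertices of the set S.

    Vertices are 0..n-1, with vertex 0 playing the role of the lecture's
    vertex 1; d is an n-by-n matrix of non-negative weights (my assumption).
    """
    @lru_cache(maxsize=None)
    def C(S, i):  # S is a frozenset that contains both 0 and i
        if S == frozenset({0}):
            return 0 if i == 0 else float("inf")  # the empty path
        if i == 0:
            return float("inf")  # a longer path may not end at the start vertex
        # Try every possible second-to-last vertex j.
        return min(C(S - {i}, j) + d[j][i] for j in S if j != i)
    return C
```

Calling `C(frozenset(range(n)), i)` and adding the weight of the edge from i back to 0 then gives the length of a candidate cycle ending with vertex i.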
We are almost ready to implement the corresponding dynamic programming algorithm; the only technical issue left is the order in which we are going to solve our subproblems. This order should satisfy the following simple property: when computing the value of C(S, i), all the values it depends on must already be computed. All such values are of the form C(S - {i}, j), and in particular S - {i} always has size smaller than the set S. So if we just go through all subsets in order of increasing size, then we are safe. This gives us the following algorithm. Given a graph, we first initialize C({1}, 1) to be equal to zero. This just reflects the fact that if we need to find an optimal path that starts at vertex 1, visits just the vertex 1, and ends at vertex 1, then this is the empty path: we just stay at the vertex 1, so its length is 0. Then we go through all possible sets of vertices S, and we do this in order of increasing size: we gradually increase the parameter s from 2 to n, and for all subsets S of size s we do the following. First of all, we assign the value +infinity to C(S, 1). Because we need to visit each vertex exactly once, we start at vertex 1 but should not end at vertex 1, so this assignment marks such a path as infeasible. Then for each i in the set S, we need to compute the value of C(S, i). For this we consider all possible candidates for the second-to-last vertex on an optimal path. The second-to-last vertex j should not be equal to the vertex i itself, and we take the minimum between the current value and the following one: we first visit all the vertices except for the vertex i, ending at the vertex j, and then we append the edge from j to i.
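One simple way to traverse the subsets in order of increasing size, as required above, is to generate, for each size s, all combinations of s - 1 vertices out of {1, ..., n - 1} and prepend the start vertex. A minimal sketch (the function name is my own):

```python
from itertools import combinations

def subsets_with_start(n):
    """Yield every subset of {0, ..., n-1} that contains the start vertex 0,
    in order of increasing size, from size 2 up to size n."""
    for s in range(2, n + 1):
        for rest in combinations(range(1, n), s - 1):
            yield (0,) + rest
```

By the time a subset S of size s is produced, every subset S - {i} of size s - 1 has already been produced, so the dependency property is satisfied.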
So the total length is increased by the length of this last edge. Finally, we need to return the length of the best possible cycle, and we do it by taking the minimum of the following expression over all i. An optimal cycle starts at the vertex 1, first visits all the vertices in the graph ending at some vertex i, and then returns to the vertex 1 along the edge from i to 1. The lengths of all such paths are stored in the cells C({1, ..., n}, i) of our table. Since we don't know the last vertex i of the optimal cycle, we just select the minimum over all i of C({1, ..., n}, i) plus the length of the last edge from i back to 1, which closes the cycle. So finally, we compute the result using this expression. There is one technical remark left. In the algorithm we need to somehow iterate over all subsets of our n cities. This can be done as follows: there is a natural one-to-one correspondence between integers from 0 to 2 to the n - 1 and subsets of the set of integers from 0 to n - 1. So, when implementing this algorithm, it is more convenient to number the vertices from 0 to n - 1 instead of from 1 to n. Namely, the correspondence is defined as follows. If you have an integer k from this range, that is, an integer which is at least 0 and at most 2 to the n - 1, then the corresponding set is defined by the positions where this integer has ones in its binary representation. This is probably easier to show by an example. Here we consider subsets of the three elements 0, 1, and 2, and for this we consider all integers from 0 to 7. The binary representation of 0 is 000, so it corresponds to the empty set. The binary representation of 1 is 001: we have 1 in the least significant bit, 0 in the next bit, and 0 in the most significant bit, so it corresponds to the subset containing just the element 0. Or, for example, 3 is represented in binary as 011: it has 1 in the least significant bit, and 1 in the next bit.
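Putting all the pieces together, here is one possible implementation of the whole algorithm in Python. It is a sketch under the same renumbering as before: vertices are 0 to n - 1 (vertex 0 is the lecture's vertex 1), and a subset is encoded as an integer bitmask.

```python
from itertools import combinations

def held_karp(d):
    """Dynamic programming for TSP in roughly O(n^2 * 2^n) time.

    d is an n-by-n matrix of non-negative weights (my assumption), and a
    subset S is encoded as the integer k whose j-th bit is 1 iff j is in S.
    """
    n = len(d)
    INF = float("inf")
    C = [[INF] * n for _ in range(1 << n)]
    C[1][0] = 0  # the empty path: subset {0}, ending at vertex 0
    for s in range(2, n + 1):  # subsets in order of increasing size
        for rest in combinations(range(1, n), s - 1):
            k = 1  # bit 0 is set: every subset contains the start vertex
            for v in rest:
                k |= 1 << v
            for i in rest:  # C[k][0] stays +infinity, as required
                prev = k ^ (1 << i)  # the subset S - {i}
                C[k][i] = min(C[prev][j] + d[j][i]
                              for j in (0,) + rest if j != i)
    full = (1 << n) - 1
    # Close the cycle: try every possible last vertex i and add the edge i -> 0.
    return min(C[full][i] + d[i][0] for i in range(1, n))
```

Note that the candidate j = 0 is harmless to include in the inner minimum: C(S - {i}, 0) is +infinity except in the base case S - {i} = {0}, exactly as the initialization prescribes.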
So this corresponds to the subset {0, 1}. This way, instead of using C(S, i), we can just use C(k, i), where k is an integer from 0 to 2 to the n - 1. The last operation that we need to implement is excluding the element j from the set S. Since we represent the set S as an integer, we need to find the integer corresponding to S - {j}. Excluding j from the set means flipping the j-th bit of the corresponding integer from one to zero. This, in turn, means that we just compute the bitwise XOR of our current number k and the number which has a single one, in position j. Let me write it as follows: assume that this is our number k, and this is the position j where it has a 1. What we need to get is the same number with a 0 in position j: the first number corresponds to S, and the second to S - {j}. To get it, we XOR k with the number which has a 1 only in position number j, and this number is just 2 to the j. Most modern programming languages allow you to do this in one line: you take the number k, you compute 2 to the j by shifting 1 left by j positions, and then you take the bitwise XOR of the two. This gives you exactly the set S without the element j. So now you are completely ready to implement this algorithm.
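In Python, for instance, this one-liner looks as follows; the particular numbers are just an illustration:

```python
k = 0b10110               # the subset {1, 2, 4}
j = 2                     # the element we want to exclude
without_j = k ^ (1 << j)  # flip bit j from 1 to 0, i.e. compute S - {j}
print(bin(without_j))     # 0b10010, the subset {1, 4}
```

The same expression `k ^ (1 << j)` also adds j back when bit j is 0, which is occasionally handy, but in this algorithm we only ever use it to remove an element that is known to be in the set.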