0:00

I'll prove this performance guarantee for you on the next slide.

But let me first make a couple of comments. First of all, the analysis of this algorithm stated in the theorem is tight: the 50% cannot be improved without making additional assumptions about the input. Indeed, here's an example which is itself a bipartite graph, so it's a tractable special case of maximum cut. But even on bipartite graphs, the local search heuristic might return a cut which has merely 50% of the edges of a globally optimal maximum cut.

The example is drawn here in green. There are just 4 vertices and 4 edges. As I said, it is a bipartite graph, so the best cut is to put u and w in one group and v and x in the other group. That cuts all 4 of the edges.

On the other hand, one possible output of the local search algorithm is the cut where u and v are in one of the groups, A, and w and x are in the other group, B. This cut has only 2 crossing edges, only 50% of the maximum possible, yet it is locally optimal. If you take any of the 4 vertices and switch it to the other group, you get a cut with 3 vertices on one side and the remaining vertex by itself; since every vertex in this graph has degree 2, all of those cuts are also only going to have 2 crossing edges.
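To make the example concrete, here's a small sketch (my own, not from the lecture) that checks the crossing-edge counts for the 4-cycle on u, v, w, x:

```python
# The bipartite graph drawn in green: a 4-cycle on vertices u, v, w, x.
edges = [("u", "v"), ("v", "w"), ("w", "x"), ("x", "u")]

def crossing(group_A, edges):
    """Count edges with exactly one endpoint in group A."""
    return sum((a in group_A) != (b in group_A) for a, b in edges)

print(crossing({"u", "w"}, edges))  # the optimal cut: all 4 edges cross
print(crossing({"u", "v"}, edges))  # the locally optimal cut: only 2 cross

# Local optimality: swapping any single vertex leaves at most 2 crossing edges.
for v in "uvwx":
    swapped = {"u", "v"} ^ {v}  # move v to the other side
    assert crossing(swapped, edges) <= 2
```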

The second cautionary tale that I need to tell you is that you maybe shouldn't be super impressed with the performance guarantee of 50% for the maximum cut problem. Indeed, suppose I just took a uniformly random cut, by which I mean for each of the n vertices I flip, independently, a fair coin. If the coin comes up heads, I put that vertex in A; if the coin comes up tails, I put that vertex in B. The expected number of edges that cross a random cut chosen in this way is already 50% of the edges of the graph.

Just for kicks, let me sketch a quick proof of this fact. Some of you may remember that in Part 1 I introduced a decomposition principle for analyzing the expected value of complicated random variables. You have a complicated random variable that you care about; what do you do? You express it as the sum of indicator random variables, random variables that only take on the values 0 and 1. When you apply linearity of expectation, that reduces computing the complicated expectation to a sum of simple expectations.

So it turns out that decomposition principle works beautifully to prove this fact. The complicated random variable that we care about is the number of edges that cross a random cut. The constituents, the indicator 0-1 random variables, are just whether or not a given edge crosses the random cut. That is, for an edge e of this input graph G, I'm going to define the random variable X_e to be 1 if its 2 endpoints wind up in different groups, and 0 if its 2 endpoints wind up in the same group.

So what's the expected value of one of these indicator random variables X_e? Well, as with any indicator random variable, that's just the probability that X_e = 1, the probability that this edge e is cut by the random cut. What's the chance of that? Well, let's say the endpoints of edge e are u and v. There are 4 possibilities: u and v could both be in A, u and v could both be in B, u could be in A and v in B, or u could be in B and v in A. Each of those 4 outcomes is equally likely, with probability 1/4. In the first 2 cases, the edge is not cut by the random cut, as its 2 endpoints are on the same side; in the other 2 cases, it is cut, as they're on different sides. So there's a 1/2 probability that this edge is cut by a random cut, and therefore the expected value of X_e is 1/2.

And now we're good to go just by applying linearity of expectation. So, precisely, what do we care about? We care about the expected number of edges crossing a random cut. Well, the number of edges crossing a cut is, just by definition, the sum of the X_e's over all of the edges e; X_e is just whether or not the given edge crosses the cut. So the expected value of the random variable we care about, the number of crossing edges, is, by linearity of expectation, just the sum of the expected values of these indicator random variables, the X_e's. Each of those has value 1/2, and we're summing up one for each of the edges, so the total of the sum is just the number of edges divided by 2, as claimed.
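If you'd like to see this fact empirically, here's a quick simulation sketch (my own, not part of the lecture) that estimates the expected number of crossing edges of a uniformly random cut:

```python
import random

def avg_random_cut_crossings(vertices, edges, trials=100_000):
    """Estimate E[# crossing edges] of a uniformly random cut by simulation."""
    total = 0
    for _ in range(trials):
        # flip an independent fair coin for each vertex: True means side A
        side = {v: random.random() < 0.5 for v in vertices}
        total += sum(side[a] != side[b] for a, b in edges)
    return total / trials

# Any graph works; here, the 4-cycle with 4 edges from before.
edges = [("u", "v"), ("v", "w"), ("w", "x"), ("x", "u")]
avg = avg_random_cut_crossings("uvwx", edges)
print(avg)  # close to len(edges) / 2 = 2.0
```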

Thanks for indulging my little digression. Again, the point of it is that just taking a random cut already gives a performance guarantee of 50% for the maximum cut problem.

But, in defense of local search, which also only gets a 50% performance guarantee, it took a while, and in fact you have to work pretty hard, to get a performance guarantee better than 50% with a polynomial-time algorithm for the max cut problem. The most famous such algorithm is by Goemans and Williamson. That took until 1994, and it requires a tool called semidefinite programming, something that's even more powerful than linear programming.

Let's now prove that local search is guaranteed to output a cut whose number of crossing edges is at least half the total number of edges in the graph. So pick your favorite locally optimal cut (A,B). And here by locally optimal, I just mean something that the algorithm might return; that is, a cut for which it is impossible to swap a single vertex from one side to the other and improve the value of the cut. By virtue of being locally optimal, it must be the case that for this cut (A,B) and for every single vertex v of V, the number of edges incident to this vertex that cross the cut is at least as large as the number of edges incident to this vertex that do not cross the cut; in the previous notation, c_v is at least as big as d_v. If not, swapping v would give us a better cut.

So we have n of these inequalities, one for each vertex v, so we can legitimately sum up those inequalities, combining them into one. Now, let's focus first on the right-hand side of this summed-up inequality: the sum, over all of the vertices in the graph, of the number of edges incident to the vertex that cross the cut. Now here's the main point: what is this sum on the right-hand side counting? It's counting each edge that crosses the cut (A,B) exactly twice. Consider an edge, say (u,w), which crosses the cut (A,B); it gets counted twice on the right-hand side, once when v = u and once when v = w.

We can apply exactly the same reasoning to the left-hand side. What is this sum counting? It's counting each non-crossing edge exactly twice. Consider an edge, again say (u,x), whose endpoints are both on the same side; that's going to be counted once in this sum when v = u and then again when v = x.

Now we want to compare the number of crossing edges of this locally optimal cut to the total number of edges. On the left-hand side we're missing the crossing edges, so to complete it into twice all of the edges, let's add double the number of crossing edges to both sides of this inequality. On the left-hand side, we get double the number of all of the edges, and on the right-hand side we now have 4 times the number of crossing edges. Dividing both sides of this inequality by 4, we see we've proved the theorem: the number of edges that cross (A,B) is indeed a full 50% or more of the total number of edges in the graph.
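Here's a sketch of the local search heuristic just analyzed, in Python of my own devising (the function name and representation are assumptions, not from the lecture); on the green example it demonstrates the 50% guarantee we just proved:

```python
def local_search_max_cut(vertices, edges):
    """Repeatedly swap a vertex whose swap increases the number of crossing
    edges (i.e., a vertex with d_v > c_v), until the cut is locally optimal.
    Returns the set of vertices on side A."""
    in_A = {v: True for v in vertices}  # start with everything on one side
    improved = True
    while improved:
        improved = False
        for v in vertices:
            c_v = sum(in_A[a] != in_A[b] for a, b in edges if v in (a, b))
            incident = sum(v in (a, b) for a, b in edges)
            d_v = incident - c_v  # incident edges that do NOT cross
            if d_v > c_v:         # swapping v yields a strictly better cut
                in_A[v] = not in_A[v]
                improved = True
    return {v for v in vertices if in_A[v]}

edges = [("u", "v"), ("v", "w"), ("w", "x"), ("x", "u")]
A = local_search_max_cut("uvwx", edges)
cut_value = sum((a in A) != (b in A) for a, b in edges)
assert cut_value >= len(edges) / 2  # the guarantee from the theorem
```

On this particular instance the heuristic can even reach the optimum, depending on the order in which vertices are swapped; the theorem only promises the 50% bound.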

It's interesting to briefly discuss how the conclusions we've reached for local search for max cut extend to a natural weighted version of maximum cut. So which facts about the unweighted special case extend to the weighted case, and which facts do not? Well, it still makes perfect sense to take a local search approach to the weighted version of the maximum cut problem. Now, for each vertex in a given cut, you just look at the total weight of the incident edges that cross the cut and the total weight of the incident edges that do not cross the cut, and whenever the weight of the edges not crossing the cut is strictly bigger, that's an opportunity to improve the current cut by moving the vertex to the opposite side.
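As a sketch of what that improving step looks like in the weighted case (my own code; the `(u, v, weight)` edge representation is an assumption, not the lecture's notation):

```python
def improving_vertex(in_A, weighted_edges):
    """Return some vertex whose swap would strictly increase the total cut
    weight, or None if the cut is locally optimal. Edges are (u, v, weight)
    triples; in_A maps each vertex to True if it is on side A."""
    vertices = {u for u, v, _ in weighted_edges} | {v for u, v, _ in weighted_edges}
    for x in vertices:
        crossing = sum(w for u, v, w in weighted_edges
                       if x in (u, v) and in_A[u] != in_A[v])
        non_crossing = sum(w for u, v, w in weighted_edges
                           if x in (u, v) and in_A[u] == in_A[v])
        if non_crossing > crossing:  # swapping x gains non_crossing - crossing
            return x
    return None

# Both endpoints of the weight-3 edge on the same side: either swap improves.
print(improving_vertex({"u": True, "v": True}, [("u", "v", 3.0)]))
# Endpoints already split: the cut is locally optimal.
print(improving_vertex({"u": True, "v": False}, [("u", "v", 3.0)]))  # None
```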

One cool thing is that the performance guarantee of 50% that we just established for the output of local search carries over to the weighted case. The proof remains essentially the same, and I'll leave it for you to check in the privacy of your own home. Now, it's also true that in the weighted case a random cut still gets 50% of the total weight of the graph, so perhaps this performance guarantee is nothing really to write home about.

What breaks down is the running-time analysis. Remember how this went in the unweighted case: we argued that there can be at most n choose 2 edges crossing any cut, and since every iteration of local search increases the number of crossing edges, it has to halt within a quadratic number of iterations. And that means it's easy to implement the algorithm in polynomial time. One way to think about what's special about the unweighted version of the maximum cut problem is that even though, as we've seen, the graph has an exponential number of different cuts, all of those exponentially many cuts take on only a polynomial number of different objective function values: the number of crossing edges is somewhere between 0 and n choose 2. By contrast, once the edges can have arbitrary weights, you can have an exponential number of cuts with an exponential number of distinct objective function values, of distinct values for their total weight. That means that just because you're strictly improving the total weight crossing the cut in each iteration, it does not follow that you're going to converge in a polynomial number of iterations. And in fact, it's a highly non-trivial exercise to prove that there are instances of weighted maximum cut in which local search takes an exponential number of iterations before converging.
