0:03

That's another very important reason to use reductions, and it gets us closer to our goal of being able to classify the difficulty of problems: establishing lower bounds. So let's take a look at that. What we want to do is come up with a proof that a problem requires a certain number of computational steps, and we had an example of that for sorting. We showed that in the decision tree model, any compare-based sorting algorithm has to use at least N log N compares in the worst case. And we showed that by showing that, no matter what the algorithm is, it has to distinguish among all the possible orderings of the input. That led us to a tree that has N factorial leaves on the bottom, and the height of the tree has to be at least the log of that, which is about N log N. You can go back to the sorting lecture and look at that. Now, that's a

complicated argument that certainly took a little while to understand. In general, it's extremely difficult to establish lower bounds, because it generally requires a complicated argument like that: you have to argue about all possible algorithms, and that's very often tough to do. Initially there was a lot of optimism that, as researchers worked more and more, we'd be able to establish lower bounds and classify problems. It actually turned out to be pretty difficult to get nontrivial lower bounds for all kinds of computational problems where people thought it would be easy. But the good news is that reduction allows us to take the lower bounds that we have and spread them around. So, if we can reduce sorting to a new problem without too high a cost for the reduction, that gives us an N log N lower bound on that problem. Let's see how that works. We have to make sure the cost of the reduction is not high; that's key. Actually, most of the time, as in the examples that we've looked at, we use linear-time reductions. That is, we use only a constant number of calls to the algorithm for Y, plus a linear number of extra steps; usually we're going through everything in the input and output to do some kind of conversion. That's what we've done in all the examples that we've seen so far.
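As a concrete sketch of that shape (my own illustration, with hypothetical names, not necessarily one of the lecture's examples): element distinctness linear-time reduces to sorting, using one call to a sort routine plus a linear-time scan.

```python
# Sketch: a linear-time reduction from problem X (element distinctness)
# to problem Y (sorting): a constant number of calls to a Y-solver,
# plus a linear number of extra steps for conversion.

def sort_solver(a):
    """Stand-in for any algorithm that solves Y (sorting)."""
    return sorted(a)

def distinct(a):
    """Solve X (are all elements distinct?) with one call to the Y-solver.
    Total cost = cost of that call + O(N) for the scan, so a faster
    sort would immediately give a faster distinctness algorithm."""
    s = sort_solver(a)                       # one call to Y
    return all(s[i] != s[i + 1]              # linear scan: any equal
               for i in range(len(s) - 1))   # elements are now adjacent

print(distinct([3, 1, 4, 1, 5]))  # False: 1 appears twice
print(distinct([3, 1, 4, 5]))     # True
```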

2:52

So, the idea is this: if there's a lower bound for X, and you can reduce X to Y, that establishes a lower bound for Y. Why is that? If I could solve Y more quickly, then I could use the reduction to solve X more quickly. So if I have a linear-time reduction from X to Y and there's a lower bound of N log N for X, I can't have a linear-time algorithm for Y, because if I did, the linear-time reduction would give me a linear-time algorithm for X. The same way, if I have a lower bound of N squared for X, I can't have an N log N algorithm for Y, because if I did, since X reduces to Y, that would give me an N log N algorithm for X. So, the reduction allows us to propagate the lower bound: if I could easily solve Y, then I could easily solve X; but I know I can't easily solve X, so therefore I can't easily solve Y. It's a very powerful technique, and it's really where most of our lower bounds come from. So, just as an example, let's look at a lower bound for the

convex hull problem. Again, sorting and convex hulls certainly don't seem so related, but it's actually the case that any algorithm for convex hull is going to take N log N. We start with a more general statement about sorting, using the so-called quadratic decision tree model. This is just a detail about the model of computation that makes it sensible to compare a sorting algorithm to a convex hull algorithm: it makes them both use the same operations. In a quadratic decision tree you get not just simple comparisons, but also tests like this one, a product of differences of two numbers: is it less than zero or not? Those are the basic kinds of operations that you're going to use in the geometric algorithms. So, the proposition is that, under this model, sorting linear-time reduces to convex hull. That says that if I can compute the convex hull, then I can sort; and since I can't sort faster than N log N, I can't compute the convex hull faster than N log N. The proof is not terribly difficult,

but the implication is really important. Convex hull algorithms are based on that idea of asking: am I making the right turn? That's called a CCW test in computational geometry. I have three points, and going from the first to the second to the third, is that a counterclockwise turn? You can implement that test with exactly these kinds of quadratic operations; it's just computing the slopes of two lines and comparing them, so it's kind of like a comparison. And the fact that sorting reduces to convex hull means that you can't solve convex hull

fast. So how do we do the reduction between sorting and convex hull? Again, I have a sorting instance, some numbers to sort, and what I want to do is create a convex hull instance whose solution gives me the sort. Well, all we do is take the numbers that are supposed to be sorted and convert them to points on a parabola: we just take (x1, x1 squared), (x2, x2 squared), and so on. Those are points on a parabola, and we give them to the convex hull algorithm. Now, all of those points are on the convex hull, and a convex hull algorithm is supposed to return them in counterclockwise order. So, starting from the point with the smallest x-coordinate, that gives us the points in sorted order. The convex hull algorithm does its job, and however it does it, we can take the solution to the convex hull instance and get a solution to the sorting instance.
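The whole reduction can be sketched in a few lines. This is my own illustration: `convex_hull` below is a standard monotone-chain construction, built on the cross-product CCW test just described, standing in for whatever convex hull algorithm you like (a real demonstration would use a hull routine that doesn't itself sort internally); `sort_via_hull` is a hypothetical name, and it assumes the input numbers are distinct.

```python
# Sketch of the reduction: sorting linear-time reduces to convex hull.

def ccw(a, b, c):
    """The CCW test: > 0 if a -> b -> c is a counterclockwise turn.
    Note it's exactly a 'quadratic' test: products of differences."""
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def convex_hull(points):
    """Stand-in hull algorithm (monotone chain); returns the hull
    in counterclockwise order."""
    pts = sorted(points)
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and ccw(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and ccw(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def sort_via_hull(xs):
    """Sort distinct numbers using one hull call plus linear work."""
    points = [(x, x * x) for x in xs]   # lift onto the parabola y = x^2
    hull = convex_hull(points)          # every lifted point is on the hull
    i = hull.index(min(hull))           # most negative x-coordinate
    n = len(xs)
    # reading CCW from the leftmost point walks the parabola left to right
    return [hull[(i + k) % n][0] for k in range(n)]

print(sort_via_hull([3, -1, 4, 1, -5]))  # [-5, -1, 1, 3, 4]
```

If some hull algorithm ran in linear time, `sort_via_hull` would sort in linear time, contradicting the N log N sorting lower bound.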

Sorting reduces to convex hull; therefore convex hull can't be easy, because that would make sorting easy. This kind of thinking is really profound, and it has done a lot to enhance our understanding of the difficulty of different algorithmic problems. So, that's the proof that I just explained: this parabola is definitely convex and all the points are on the hull, so we just start from the point that's got the most negative x-coordinate and we've got the numbers in order. So, establishing lower bounds

through reduction is really important. Say we have a big convex hull problem to solve, and we're wondering: is there a linear-time algorithm for this? It's quite a natural thing to wonder. So how are you going to convince yourself that there's no linear-time convex hull algorithm? One thing you can do, and believe me, a lot of people did this, is just try to find a linear-time algorithm. Keep working at it, keep working at it. You're going to use algorithms that are based on this simple kind of comparison between points, and it doesn't seem like it should take N log N; it seems like we should be able to find a linear-time algorithm. That's the hard way. The easy way is to know the reduction from sorting, which means there's no point in putting in the effort to try to improve on the Graham scan. The Graham scan gets it done in N log N, and we can't do better than N log N, so we might as well call it a day and move on to some other problem. That's an example of using reduction to prove lower bounds that help guide our algorithm design efforts.
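As a final sanity check (my own addition), the information-theoretic bound from the start of the section, that a compare-based sort needs at least lg(N!) ≈ N lg N compares, is easy to verify numerically:

```python
import math

# The decision tree for compare-based sorting has N! leaves, so its
# height -- the worst-case number of compares -- is at least
# ceil(lg(N!)), which grows like N lg N (Stirling's approximation).
for n in (10, 100, 1000):
    lg_factorial = math.ceil(math.log2(math.factorial(n)))
    print(n, lg_factorial, round(n * math.log2(n)))
```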
