The primary topics in this part of the specialization are: greedy algorithms (scheduling, minimum spanning trees, clustering, Huffman codes) and dynamic programming (knapsack, sequence alignment, optimal search trees).

From the course by Stanford University

Greedy Algorithms, Minimum Spanning Trees, and Dynamic Programming

From the lesson

Week 4

Advanced dynamic programming: the knapsack problem, sequence alignment, and optimal binary search trees.

- Tim Roughgarden, Professor, Computer Science

So now that we understand the structure of optimal solutions for this optimal binary search tree problem, that is, we understand how an optimal solution must be one of a relatively small number of candidates, let's compile that understanding into a polynomial-time dynamic programming algorithm.

Let me quickly remind you of the Optimal Substructure Lemma that we proved in the previous video. Suppose we have an optimal binary search tree for a given set of keys, 1 through n, with given probabilities, and suppose this binary search tree has the root r. Well, then it has two subtrees, T1 and T2. By the search tree property, we know exactly the population of each of those two subtrees: T1 has to contain the keys 1 through r - 1 (as usual, we're assuming the keys are in sorted order), whereas the right subtree T2 has to contain exactly the keys r + 1 through n. Moreover, T1 and T2 are, in their own right, valid search trees for these two sets of keys. And finally, and this is what we proved in the last video, they're optimal for their respective subproblems: T1 is optimal for the keys 1 through r - 1 and the corresponding weights, or probabilities, and T2 is optimal for r + 1 through n and their corresponding frequencies.

So let's now execute our dynamic programming recipe. Now that we understand the way in which an optimal solution must necessarily be composed in a simple way from solutions to smaller subproblems, let's take a step back and ask: given that, at the end of the day, we care about the optimal solution to the original problem, which subproblems are relevant? Which subproblems are we going to be forced to solve?

For example, with independent sets in line graphs, we observed that to solve a subproblem we needed to know the answers to the subproblems where we pluck either one or two vertices off of the right-hand side. So overall, what we cared about was subproblems corresponding to prefixes of the graph. In the knapsack problem, we needed to understand subproblems that involved one less item and possibly a reduced residual knapsack capacity, so that led us to care about solutions to subproblems corresponding to all prefixes of the items and all integer possibilities for the residual capacity of the knapsack. In sequence alignment, when we looked at subproblems, we were plucking a character off of one or possibly both of the strings, so we cared about subproblems corresponding to prefixes of each of the two strings.

Now, here's one of the things that's interesting about the binary search tree problem which we haven't seen before: when we look at a subproblem in the optimal substructure lemma, there are two that we might consider. We don't just pluck off from the right. We care about both the subproblem induced by the left subtree and that induced by the right subtree. In the first case, we're looking at a prefix of the items we started with, and that's like what we've seen in our many examples. But in the second case, the subproblem corresponding to T2 is actually a suffix of the items that we started with. So, put differently, the subproblems we care about are those that can be obtained by either throwing away a prefix from the items that we started with or throwing away a suffix from the items that we started with.

So in light of this observation, that the value of an optimal solution depends only immediately on subproblems that you obtain by throwing out a prefix or a suffix of the items, what I want you to think about in this quiz is: what is the entire set of relevant subproblems?

That is, for which subsets S of the original items 1 through n is it important that we compute the value of an optimal binary search tree on the items only in S?

So before I explain the correct answer, which is the third one, let me talk a little bit about a very natural but incorrect answer, namely the second one. Indeed, the second answer seems to have the best correspondence to the optimal substructure lemma.

The optimal substructure lemma states that the optimal solution must be composed of an optimal solution on some prefix and an optimal solution on some suffix, united under a common root r. So we definitely care about the solutions to all prefixes and suffixes of the items, but we care about more than just that.

So maybe the easiest way to see that is to think about the recursive application of the optimal substructure lemma. And again, the relevant subproblems at the end of the day are going to correspond to all of the different distinct subproblems that ever get solved over the entire trajectory of this recursive implementation. So just think about one example path in the recursion tree, right?

In the outermost level of the recursion you've got the whole item set; let's say there are 100 items, 1 through 100, and you're going through and trying all possibilities for the root. So at some point you're trying out root number 23 to see how it does. You have to recurse once on items 1 through 22 to optimally build a search tree for them, and similarly for items 24 through 100. Now let's drill down into this first recursive call, where you recurse on just the items 1 through 22. Here again, you're going to be trying all possibilities for the root, those 22 choices. At some point you'll be trying root number 17, and there are again going to be two recursive calls. The second recursive call is going to be on items 18 through 22, a suffix of the items that were passed to this recursive call, which were themselves a prefix of the original items. So in this case, the items 18 through 22 are a suffix of the original prefix, 1 through 22.

So, in general, as you think through this recursion over multiple levels, at every step what you've got going for you is that you're either deleting a chunk of items from the beginning, a prefix, or you're deleting a chunk of items from the end, a suffix. But you might be interleaving these two operations.

So it is not true that you're always going to have a prefix or a suffix of the original set of items. But what is true is that you will have some contiguous set of items: if i is the smallest item in the subproblem and j is the biggest, you're going to have all of the items in between. And that's because you were only ever plucking off items from the left or from the right. So that's why C is the correct answer: you need more subproblems than just prefixes and suffixes.

Alright, so that was a little tricky, identifying the relevant subproblems. But now that we've got them in our grubby little hands, the dynamic programming algorithm, as usual, is just going to fall into place; the relevant collection of subproblems unlocks the power of this entire paradigm in a very mechanical way. So let's now just fill in all the details.
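As a quick sanity check of this observation (a sketch of my own, not from the lecture, with made-up helper names), the following Python snippet deletes chunks from either end of the item set in a random interleaved order and confirms that whatever remains is always a contiguous interval; it also counts those intervals, which is why the number of subproblems is only quadratic in n:

```python
import random

def remaining_after_random_deletions(n, steps, rng):
    """Start with items 1..n; repeatedly delete a chunk from the
    front (a prefix) or the back (a suffix), interleaved at random."""
    items = list(range(1, n + 1))
    for _ in range(steps):
        if len(items) <= 1:
            break
        k = rng.randint(1, len(items) - 1)  # chunk size to delete
        if rng.random() < 0.5:
            items = items[k:]   # throw away a prefix
        else:
            items = items[:-k]  # throw away a suffix
    return items

rng = random.Random(0)
for trial in range(1000):
    rest = remaining_after_random_deletions(100, 5, rng)
    # whatever survives is a contiguous run i, i+1, ..., j
    assert rest == list(range(rest[0], rest[-1] + 1))

# so the relevant subproblems are the intervals {i..j}, 1 <= i <= j <= n
n = 100
num_intervals = sum(1 for i in range(1, n + 1) for j in range(i, n + 1))
print(num_intervals)  # n*(n+1)//2 = 5050
```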

The first step is to formalize the recurrence, that is, the way in which the optimal solution of a given subproblem depends on the values of smaller subproblems. This is just going to be a mathematical formula which encodes what we already proved in the optimal substructure lemma. And then we're going to use this formula to populate a table in a dynamic programming algorithm that systematically solves for the values of all of the subproblems.

So let's have some notation to put in our recurrence, in our formula. We're going to be indexing subproblems with two indices, i and j, and this is because we have two degrees of freedom: where the contiguous interval of items starts, i, and where the contiguous interval of items ends, j. So for a given choice of i and j, where of course i should be at most j, I'm going to denote by capital C sub ij the weighted search cost of an optimal binary search tree on just the contiguous set of items from i to j. And of course, the weights, or the probabilities, are exactly the same as in the original problem; they're just inherited here: p_i through p_j.

So now let's state the recurrence. For a given subproblem C_ij, we're going to express the value of an optimal binary search tree in terms of those of smaller subproblems. The optimal substructure lemma tells us how to do this. It says that if we knew the choice of the root r, which here is going to be somewhere between the items i and j, then the optimal solution has to be composed of optimal solutions to the two smaller subproblems, united under the root. Now, we don't know what the root is: there are j - i + 1 possibilities; it could be anything between i and j inclusive. So as usual, we're just going to do brute-force search over the relatively small set of candidates that we've identified. Brute-force search we encode by just explicitly taking a minimum: we choose some root r somewhere between i and j inclusive. And given a choice of r, we're going to inherit the weighted search cost of the optimal solution on just the prefix of items i through r - 1; in our notation, that would be C of i, r - 1.

Similarly, we pick up the weighted search cost of an optimal solution to the suffix of items r + 1 through j. And if you go back to our proof of the optimal substructure lemma, you'll see we did a calculation which gives us a formula for how the weighted search cost of a tree depends on that of its subtrees. In addition to the weighted search cost contributed by each of the two subtrees, we pick up a constant, namely the sum of all of the probabilities of the items we're working with. So here that's the sum of p sub k, where k ranges from the first item in the subproblem, i, to the last item in the subproblem, j.

One extra edge case we should deal with: if we choose the root to be the first item, then the first recursive term doesn't make sense; we'd have C of i, i - 1, which is not defined.

Similarly, if we choose the root to be j, then the last term would be C of j + 1, j, which is not defined; remember, the indices are supposed to be in order. So in those cases, we'll just interpret these capital C's as zero. And so, why is the recurrence correct?
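In symbols, then, the recurrence we've just assembled reads as follows (this is simply a transcription of the verbal description above, using the zero convention for the undefined terms):

```latex
C_{i,j} \;=\; \min_{r=i}^{j}\Bigl( C_{i,\,r-1} \;+\; C_{r+1,\,j} \Bigr) \;+\; \sum_{k=i}^{j} p_k,
\qquad \text{with } C_{i,\,i-1} = C_{j+1,\,j} = 0 .
```

Note that the sum of the probabilities does not depend on the choice of r, which is why it can sit outside the minimum.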

Well, all of the heavy lifting was done in our proof of the optimal substructure lemma. What did we prove there? We proved the optimal solution has to be one of just j - i + 1 possible things: it depends only on the choice of the root, and given the root, the rest is determined for us. The recurrence is, by definition, doing brute-force search through this small set of candidates. So therefore, it is indeed a correct formula for the optimal solution value, in terms of optimal solutions to smaller subproblems.
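To see how mechanically everything now falls into place, here is a sketch of the resulting table-filling algorithm in Python (the function name, the 1-based table layout, and the example probabilities are my own, not from the lecture); it solves subproblems in order of increasing interval length, exactly as the recurrence requires:

```python
def optimal_bst_cost(p):
    """Weighted search cost of an optimal BST on keys 1..n, where
    p[k-1] is the probability of key k.  Implements the recurrence:
    C(i, j) = min over roots r in {i..j} of C(i, r-1) + C(r+1, j),
    plus the sum of p_i..p_j, with empty subproblems costing 0."""
    n = len(p)
    # 1-based (n+2) x (n+2) table so C[i][i-1] and C[j+1][j] exist; they
    # stay 0, which is exactly the edge-case convention from the lecture.
    C = [[0.0] * (n + 2) for _ in range(n + 2)]
    for length in range(1, n + 1):          # interval size j - i + 1
        for i in range(1, n - length + 2):  # interval start
            j = i + length - 1              # interval end
            interval_weight = sum(p[k - 1] for k in range(i, j + 1))
            # brute-force search over the j - i + 1 candidate roots
            best = min(C[i][r - 1] + C[r + 1][j] for r in range(i, j + 1))
            C[i][j] = best + interval_weight
    return C[1][n]

# Example: with probabilities 0.8, 0.1, 0.1, putting the popular key
# at the root is optimal: 0.8*1 + 0.1*2 + 0.1*3 = 1.3.
print(round(optimal_bst_cost([0.8, 0.1, 0.1]), 6))  # 1.3
```

Since there are O(n^2) subproblems and each takes O(n) work for the root search (plus the probability sum), this sketch runs in O(n^3) time overall.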
