The primary topics in this part of the specialization are: greedy algorithms (scheduling, minimum spanning trees, clustering, Huffman codes) and dynamic programming (knapsack, sequence alignment, optimal search trees).

From the course by Stanford University

Greedy Algorithms, Minimum Spanning Trees, and Dynamic Programming

From the lesson

Week 4

Advanced dynamic programming: the knapsack problem, sequence alignment, and optimal binary search trees.

- Tim Roughgarden, Professor of Computer Science

So now that we understand the way in which optimal solutions must be composed of optimal solutions to smaller subproblems, we're well positioned to turn that into a recurrence, and then into a dynamic programming solution for the knapsack problem.

To compile our discussion from last time into a recurrence, let me just introduce a little notation. By capital V sub i,x, what I mean is the maximum value of a solution that satisfies two constraints. First, it should make use only of the first i items. Second, the total size of the items that you pick should be at most x. The analogue of this notation in the independent set problem was G sub i, the graph with the first i vertices. We're now using two indices instead of one because subproblems have two different ways in which they can get smaller. Recall case two of our thought experiment: when we looked at a smaller subproblem, there was both one less item and also less capacity. So when thinking about a subproblem, we need to keep track of both: how many items we're allowed to use, and what total size we're allowed to use.

So let's express the conclusion of the last video in this notation. What did we figure out last video? We figured out that the optimal solution has to have one of two forms. Either you just inherit the optimal solution from the knapsack instance with one less item, that was the first case; or you take the optimal solution to the subproblem which has one less item and less capacity, by the weight of the current item, and you extend it to an optimal solution for the current subproblem using the ith item. So V sub i,x, the optimal value for the current subproblem, is the better of the two possibilities, so we take a max. The first possibility is just the optimal solution with the same capacity and one fewer item; we reuse that solution. The second is that you commit to using item i, you gain its value v_i, and you combine that with the optimal solution using the first i minus one items and the reduced capacity of x minus the weight of the current item.

One quick edge case: if the weight of the ith item is bigger than our permitted capacity x, then of course we can't use it, and we're forced to be in case one. This, then, is our recurrence for the knapsack problem.
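As a sketch, the recurrence just described can be written directly as a memoized recursion in Python (the names `knapsack_value`, `values`, and `weights` are mine, not the lecture's):

```python
from functools import lru_cache

def knapsack_value(values, weights, capacity):
    """Max total value of a subset of items with total weight <= capacity.

    V(i, x) = max value using only the first i items with capacity x.
    """
    @lru_cache(maxsize=None)
    def V(i, x):
        if i == 0:                      # no items available: value is 0
            return 0
        # Case one: skip item i (items are 1-indexed in the recurrence).
        best = V(i - 1, x)
        # Case two: include item i, but only if it fits in capacity x.
        if weights[i - 1] <= x:
            best = max(best, V(i - 1, x - weights[i - 1]) + values[i - 1])
        return best

    return V(len(values), capacity)
```

This top-down form computes the same values as the bottom-up table discussed next; memoization ensures each subproblem is solved only once.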

So that completes step one, where we think about the structure of optimal solutions and use it to devise a recurrence. Now, in step two, we're going to nail down precisely which subproblems we have to care about.

Well, in our maximum weight independent set example on path graphs, remember the reasoning: every time we recursed, every time we needed to look up a subproblem solution, it was obtained by plucking vertices off of the right of the graph, and for that reason all we ever cared about were the various prefixes of the graph. In one dimension we've got something similar going on here: whenever we look at smaller subproblems, it's always with one fewer item; we're always, in effect, deleting the last item before we do our lookup. So again, we need to worry about all possible prefixes of the items, that is, subproblems on the first i items for all values of i.

For the knapsack problem, however, there's a second sense in which subproblems can be smaller. Remember case two of our thought experiment: when we want to know the optimal solution that's guaranteed to use the current item i, we have to reduce the capacity before looking up the corresponding optimal subproblem solution. That is, we're not just peeling off items; we're also, in effect, peeling off parts of the knapsack capacity.

Now, here is where we're going to use the assumption about our input that I told you at the beginning: that the item sizes are all integers, and that the knapsack capacity is an integer. The knapsack capacity starts at capital W, and each time we peel off some integer amount of capacity from it, so in all subproblems we're going to be working with integral knapsack capacities. Therefore, in the worst case, what are the various subproblems that might arise? Each choice of a residual capacity: 0, 1, 2, all the way up to the original knapsack capacity, capital W.

So now we're in great shape. In step two, we figured out exactly which subproblems we care about. In step one, we figured out a formula by which we can solve bigger subproblems given the solutions to smaller ones. All that remains now is to write down a table and fill it in systematically using the recurrence, beginning with the smaller subproblems and ending with the largest.

So here's the pseudocode. It's again going to be very simple. In this algorithm, the array A that we're going to be filling in has two dimensions, in contrast to the one dimension in the weighted independent set problem. That's because our subproblems have two different indices: we have to keep track both of which set of items we're allowed to use and of what capacity we need to respect. The two independent variables indexing subproblems force us to use a 2D array, which we're going to go through in a double for loop.

In the initialization step, we fill in all the entries where i equals zero: with no items available, of course, the optimal solution value is zero. Then we just go through the subproblems systematically. When we get to a subproblem, we use the recurrence that we developed to compute its entry in the table, using, of course, the entries that we've previously computed in earlier iterations. So, for a given subproblem where you're allowed to use the first i items and the residual capacity is x, what do we know from our thought experiment? The answer can only be one of two things, so we just do a brute-force search through the two possibilities. In case one, we just inherit the optimal solution with one fewer item and the same capacity. Or, if we're actually going to use item i, then we compose it with the optimal solution on the first i minus one items that leaves enough space for item i; in that case, we also gain the value v_i for including the ith item.

One quick edge case: this code doesn't quite make sense when the weight of the current item, w_i, is strictly bigger than the capacity x; that would give us a negative array index, which, of course, is a no-no. But in that case, amongst friends, we just interpret the recurrence as ignoring the second case of the max. I'm not going to write down the extra code, but what you would write down is: if w_i is strictly bigger than x, then you just define A[i, x] to be equal to A[i-1, x].

One crucial point is that, by the time we need to solve the subproblem with a given i and a given x, we have at our disposal the solutions to all of the subproblems that we need. In particular, in the previous iteration of the outer for loop, we computed the solutions to the two subproblems relevant to evaluating this recurrence; they're there waiting for us, available for constant-time lookup. So after this double for loop completes, sitting there waiting for us in A, in the entry [n, capital W], is the solution that we wanted: the maximum-value solution that can use any items it wants and the entire original knapsack capacity, capital W. That's what we want to return.
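The table-filling algorithm just described can be sketched in Python along these lines (variable names are mine, not the lecture's; `A[i][x]` plays the role of V sub i,x):

```python
def knapsack_table(values, weights, W):
    """Fill the 2D DP table bottom-up; A[i][x] = V(i, x) from the recurrence."""
    n = len(values)
    # A has n+1 rows (prefixes of 0..n items) and W+1 columns (capacities 0..W).
    A = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):           # the i = 0 row is already all zeros
        for x in range(W + 1):
            if weights[i - 1] > x:      # item i doesn't fit: forced into case one
                A[i][x] = A[i - 1][x]
            else:                       # better of case one and case two
                A[i][x] = max(A[i - 1][x],
                              A[i - 1][x - weights[i - 1]] + values[i - 1])
    return A

# A[n][W] holds the optimal value; filling the table takes Theta(n * W) time.
```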

So that's it. That's the whole dynamic programming solution for the knapsack problem. Just to make sure this makes sense, in the next quiz I want to ask you to analyze the running time of this dynamic programming algorithm.

All right, so the correct answer is the second one, and the running time analysis is as straightforward as it was for the max weight independent set problem: we just count up the number of subproblems and look at how much work we do for each. So how many subproblems are there? They're indexed by both i and x: there are n plus one choices for i and capital W plus one choices for x, which gives us theta of n times W different subproblems. And all we do for each is a single comparison among previously computed solutions, so we do constant work per subproblem, giving an overall running time of theta of n times capital W.

A couple of other quick notes, which play out in exactly the same way as for the maximum weight independent set problem, so I'm not going to discuss the details here. First of all, correctness: again, it's exactly the same template as in the previous dynamic programming algorithm and in our divide and conquer algorithms. You just go by induction on the problem size, and you use the case analysis, our thought experiment, to formally justify the inductive step.

Second, this algorithm suffers from a drawback similar to the max weight independent set algorithm for path graphs that we looked at: namely, it computes the value of an optimal solution, not the optimal solution itself. It returns a number, not an actual subset of items. But, just as with the independent set problem, you can reconstruct an optimal solution from the filled-in array just by tracing backward. The intuition is that you begin with the biggest subproblem and look at which of the two cases was used to fill in its entry. If case one was used, you know you should exclude the last item; if case two was used, you know you should include it. Whichever case was used also tells you which subproblem to trace back to, to continue the process. I encourage you to spell out the details of this reconstruction algorithm in the privacy of your own home. That'll give you some nice practice with this reconstruction aspect of the dynamic programming paradigm.
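One way the trace-back left as an exercise might look, as a sketch in Python (the function name and variable names are mine; it fills the same table A described above, then walks backward from entry [n, W]):

```python
def knapsack_with_items(values, weights, W):
    """Return (optimal value, set of chosen 0-based item indices) by filling
    the DP table and then tracing backward through it."""
    n = len(values)
    A = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for x in range(W + 1):
            A[i][x] = A[i - 1][x]       # case one: skip item i
            if weights[i - 1] <= x:     # case two: include item i, if it fits
                A[i][x] = max(A[i][x],
                              A[i - 1][x - weights[i - 1]] + values[i - 1])

    # Trace backward, starting from the biggest subproblem (i = n, x = W).
    chosen, x = set(), W
    for i in range(n, 0, -1):
        if (weights[i - 1] <= x and
                A[i][x] == A[i - 1][x - weights[i - 1]] + values[i - 1]):
            chosen.add(i - 1)           # case two was used: include item i
            x -= weights[i - 1]
        # otherwise case one was used: exclude item i, keep x unchanged
    return A[n][W], chosen
```

The trace-back does constant work per item, so reconstruction adds only O(n) time on top of the table fill.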

Â Coursera provides universal access to the worldâ€™s best education,
partnering with top universities and organizations to offer courses online.