In this video, we'll develop our second dynamic programming solution for the

knapsack problem. Why do we need a second algorithm? Well,

it's a crucial building block in our heuristic that achieves arbitrarily close

approximation to the knapsack problem in polynomial time.

Recall that in the knapsack problem, the input is 2n+1 numbers.

There are n items, each one has a positive value, Vi, and a positive size, Wi.

Additionally, we're given a knapsack capacity,

capital W. Back in the dynamic programming section

of this course, we already devised an algorithm for this problem.

That algorithm assumed that the item sizes were integral and also that the

knapsack capacity was an integer. The running time of that dynamic

programming algorithm was O(n), the number of items, times capital W, the

knapsack capacity. In particular, we get a polynomial time

algorithm for the special case of knapsack when the knapsack capacity is

not too big, when it's polynomial in n.

It turns out that this is not the right special case on which to build an

algorithm with arbitrarily good approximation in polynomial time.

Rather, for that kind of heuristic, what we really want is a polynomial time

solution for the special case where item values are small; that is precisely the

solution that we're going to provide in this video.

We will be assuming that the item values are all integers.

The item sizes and the knapsack capacity can be arbitrary, and the dynamic

programming solution that we develop will have running time O(n^2 times Vmax).

Vmax here denotes the maximum value of any item.

The running times of the two algorithms are more in parallel than the notation

might suggest. Superficially, it looks like there's an extra

factor of n in the second dynamic programming algorithm,

but that's the wrong way to think about it. For the first algorithm, think of

w as an upper bound on the largest total size that a feasible solution might

possess. In the second algorithm, think of n times

Vmax as an upper bound on the maximum total value that any feasible solution

might possess. In the first algorithm, we're focused on

size, in the second algorithm, we're focused on

value, and in either case, the running time

is the maximum quantity that we have to worry about times the number of items, n.

Now, since all of you are survivors of my dynamic programming boot camp, I don't

feel the need to develop this solution in great detail, let me just jump to the

relevant subproblems. So the subproblems here will be a twist

on the ones that we used in our first dynamic programming algorithm for

knapsack. The part that's going to be exactly the

same is we're going to have an index i, i specifies which prefix of items we're

permitted to use in a given subproblem. Now, in general, in the knapsack problem,

you're simultaneously trying to get the value high while keeping the weight

small. So the way we organized things in our

first dynamic programming algorithm is the second index into a subproblem

specified the residual capacity x that we were allowed to use,

and then, while respecting that capacity x, we asked for the largest value that we

could obtain using just the first i items.

Here, we're going to turn that on its head.

We're going to have a second parameter x, which is the value that we're striving

for. So we want to get value x or more and we're going to try to minimize the

total size that we need to use, subject to getting value at least x.

So the first index, i, ranges as usual from 0 to n,

this corresponds to all the prefixes of items you might be interested in.

The second parameter, x, ranges over all of the targets for total value that you

might be interested in. Now, remember, we're assuming in this

video, that every item value is an integer,

so that means any subset of items will have an integral total value, so we only

need to worry about integer values of x. In addition, we never need to worry about

trying to achieve a total value bigger than n, the number of items,

times v max, the largest value of any item.

In fact, you could replace this upper bound n times Vmax by the sum of the vi's

if you prefer. So, for a given choice of i and x,

I'm going to use capital S sub i,x to denote the optimal value of the

corresponding subproblem. So, that is, the minimum weight, the

minimum total size that is sufficient to achieve a total value of x or more while

being restricted to using just the first i items.

If there is in fact no way to get the target value of x using just the first i

items, that is if the sum of the values of these

items is already less than x, we just define this to be plus infinity.
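For concreteness, here is that subproblem definition stated as brute force in Python (a sketch with our own variable names: v and w are the 0-indexed lists of item values and sizes; the dynamic program below computes the same quantities efficiently):

```python
from itertools import combinations

def subproblem_value(i, x, v, w):
    """Brute-force statement of S_{i,x}: the minimum total size of a
    subset of the first i items whose total value is at least x,
    or infinity if no such subset exists. Illustrative only."""
    best = float("inf")
    for r in range(i + 1):
        for subset in combinations(range(i), r):
            if sum(v[j] for j in subset) >= x:
                best = min(best, sum(w[j] for j in subset))
    return best

# Tiny example: two items, values [3, 2], sizes [4, 1].
v, w = [3, 2], [4, 1]
print(subproblem_value(2, 2, v, w))  # -> 1 (the second item alone suffices)
print(subproblem_value(2, 5, v, w))  # -> 5 (need both items)
print(subproblem_value(1, 4, v, w))  # -> inf (the first item's value is only 3)
```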

So let's write down the natural recurrence.

Let's assume here that i is at least 1. So, the structure here is going to be

exactly the same as it was in our first dynamic programming algorithm for the

knapsack problem. Specifically, let's focus on a subproblem

with a choice of i and a choice of x. That final item i is either in this

optimal solution for the subproblem or it's not.

That gives us two cases and the recurrence is just going to do

brute-force search over the two cases. Because we're trying to minimize the

total size required to achieve a given value target, here the brute-force

search is going to take the form of a minimum.

What are the two candidates for an optimal solution? Well, the boring one is

when you don't even bother to use item i, then of course, you just inherit an

optimal solution, a minimum sized solution using just the first i-1 items

achieving the same value target x. On the other hand, suppose that in an

optimal solution to the subproblem, you do actually use item i.

Well, this contributes the size of item i to the weight of this optimal

solution. So that's going to be w sub i.

What can we say about the rest of the items in this optimal solution? Well,

because the solution here achieves a value target of x,

the items in this solution other than i, must be achieving by themselves a value

target of x minus v sub i. And, of course, by the usual cut and

paste argument amongst all subsets of the first i-1 items with that property, they

had better have the minimum total weight.

So, one quick edge case in this second case: if vi is actually bigger than x and

we have a negative index here, we just interpret this quantity as 0,

that's because item i alone meets the value target of x.
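The recurrence, including that edge case, can be sketched as a memoized recursive function in Python (our own naming; items are 0-indexed here, so "item i" of the lecture corresponds to v[i-1] and w[i-1]):

```python
from functools import lru_cache

def min_size(n, x, v, w):
    # Top-down version of the recurrence:
    #   S(i, x) = min( S(i-1, x),                        # item i excluded
    #                  w[i] + S(i-1, max(0, x - v[i])) )  # item i included
    # The max(0, ...) handles the edge case where v[i] exceeds x.
    @lru_cache(maxsize=None)
    def S(i, x):
        if i == 0:
            return 0 if x == 0 else float("inf")
        exclude = S(i - 1, x)
        include = w[i - 1] + S(i - 1, max(0, x - v[i - 1]))
        return min(exclude, include)
    return S(n, x)

v, w = [3, 2], [4, 1]
print(min_size(2, 5, v, w))  # -> 5: both items are needed to reach value 5
```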

Let's now wrap things up with the pseudocode of the new dynamic programming

algorithm. So A will be our usual table.

It has two dimensions, because subproblems are indexed by two

parameters, i, the prefix that ranges from 0 to n, and x, the value target that

ranges from 0 to, to the maximum imaginable value,

let's say n times Vmax. So the base case is when i equals 0, that

is, you're not allowed to use any items. In this case, we have to fill it in with

plus infinity. Well, except if x itself is 0, then the

answer is 0. Now, we just populate the table using the

recurrence in a double for loop. The structure here is exactly the same as

in our first knapsack dynamic programming algorithm.

In the first dynamic programming solution that we developed for knapsack, we could

return the correct answer in constant time given the filled-in table.

That's because of one of the subproblems in that dynamic programming algorithm,

literally was the original problem. When i=n and x is equal to the full

knapsack capacity capital W, the responsibility of that subproblem was to

compute the max value solution subject to the capacity capital W using all of the

items, that's literally the original problem.

By contrast, in this new dynamic programming algorithm, none of the subproblems

correspond directly to the original problem that we wanted to solve,

none of them tell you the maximum value of a feasible solution.

To see why, let's inspect the largest batch of subproblems.

These are the subproblems where i is equal to n and you can use whatever subset of items you want.

The second index of one of these problems is a target value x.

That might be a number, like say 17,231. So after you've run this algorithm and

you've filled in the whole table, what do you know in this subproblem? You will

have computed the smallest total size of a subset of items that has total

value at least 17,231. So that's going to be some number, maybe

it's 11,298. But, what if your knapsack capacity's

only 10,000? Then, this is not a feasible solution, so it doesn't do you any good.

Okay, well that's not quite true, it does do you some good.

If you know that every single solution that gets value 17,231 or more has size

bigger than your knapsack capacity, well then, you know that the optimal value

has to be less than 17,231. There is no feasible way to get a total

value that high. Now, you realize that if you knew this

information for every single target value x, and you do once you've filled in the

entire table, that's sufficient to figure out the optimal solution,

figure out the biggest value of a feasible solution.

All you need to do is scan through this final batch of subproblems.

You begin with the largest conceivable target value x, and then you get less and

less ambitious as you keep finding infeasible solutions.

So you scan from high target values to low target values and you're looking for

the first, that is, the largest, target value x such that there exists a subset

of items meeting that target value whose total size is at most your knapsack

capacity. That is going to be the optimal solution,

that first target value x that can be met given your knapsack

capacity. Analyzing the running time is

straightforward, so how many subproblems are there, or equivalently, how many

iterations of a double for loop do we have? Well, it's n outer iterations, then

n times Vmax inner iterations, so that's n^2 Vmax overall.

Our brute-force searches over only two candidates, so we only do constant work

per iteration of the double for loop. In the final line, we do a scan through

the largest batch of subproblems, so that's O of n times Vmax.

So the running time is dominated by the constant work per subproblem, summed over all subproblems, and so,

our overall running time is O of n squared times Vmax, as promised.
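Putting the pieces together, the table fill and the final scan can be sketched in Python as follows (a sketch with our own names: v and w are the item values and sizes, W is the knapsack capacity):

```python
def knapsack_max_value(v, w, W):
    """Fill in the table A[i][x] (the minimum total size needed to achieve
    value >= x using the first i items), then scan the last row from the
    most ambitious value target downward."""
    n = len(v)
    X = n * max(v)  # upper bound on any achievable total value
    INF = float("inf")

    # Base case i = 0: value target 0 is free, any positive target is impossible.
    A = [[0] + [INF] * X]
    for i in range(1, n + 1):
        row = [0] * (X + 1)  # row[0] = 0: a target of 0 needs no items
        for x in range(1, X + 1):
            exclude = A[i - 1][x]
            include = w[i - 1] + A[i - 1][max(0, x - v[i - 1])]
            row[x] = min(exclude, include)
        A.append(row)

    # Final scan: the largest value target x whose minimum size fits in W.
    for x in range(X, -1, -1):
        if A[n][x] <= W:
            return x

v, w, W = [3, 2, 4], [4, 1, 5], 5
print(knapsack_max_value(v, w, W))  # -> 5 (the items with values 3 and 2 fit)
```

The double loop does constant work per iteration over roughly n times (n times Vmax) subproblems, and the final scan adds O(n times Vmax), matching the promised O(n^2 times Vmax) bound.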