0:33

Before designing an algorithm, let's just try to understand whether we can use

the subproblems used in the previous version.

So assume that there is some optimal solution for

a knapsack of capacity W, and assume that we somehow

know that it uses the last item, of weight w_{n-1}.

And we try to extract this item out of the solution and

to solve the remaining problem, of capacity W - w_{n-1}.

So in this case, it is not excluded that the optimal solution for

the resulting subproblem of capacity W - w_{n-1} actually contains the last item.

And since it already contains it,

we cannot add this item to that solution to get a solution for

our original problem, because this would create two copies of the last item, right?

However, we consider the last item, namely item n - 1, here on purpose, because

this will eventually lead us to defining the right notion of a subproblem.

Namely, assume that the (n-1)-st item is used in an optimal solution.

Then if we take it out of the bag of capacity W,

then what we get must be an optimal solution for

the bag of total capacity W - w_{n-1} that

uses only items from 0 to n - 2, right?

Because we are not allowed to use the last item anymore.

3:24

And also, when u = 0, this actually means that we have a bag of total capacity 0.

So value(0, i) = 0 for any i.

And the recurrence relation that we get here is actually even simpler than in

the previous case.

So we either use item i - 1 in our solution or not.

So what we need to compute in order to find value(u,

i) is actually the maximum of the following two values:

value(u - w_{i-1}, i - 1) + v_{i-1},

or just value(u, i - 1).

So in the first case,

actually, we are going to use the (i-1)-st item in our solution.

And in the second case, we are not going to use this item.

And also, among these two terms, the first one is used only in case

the weight of the (i-1)-st item does not exceed u, okay?

So this recurrence relation, as usual,

can be easily transformed into a recursive algorithm that you see now on the slide.

As usual, we use a table T which is, in this case, just a dictionary.

And we only start computing the value (u, i) if it is not yet in the table.

And we do this just following our recurrence relation.

So namely, in the eighth line,

we actually compute the second term of our recurrence relation.

So we initialize T(u, i) to be equal to just value(u, i - 1).

And then, in the case when the weight of the (i-1)-st item does not exceed u,

we check whether the current value can be improved

by taking an optimal solution for u - w_{i-1} and

adding the (i-1)-st item to this solution, okay?

So this gives us a recursive algorithm.
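The slide with the code is not reproduced in this transcript, so here is a minimal Python sketch of the memoized recursion just described; all names (knapsack_value, weights, values, T) are my own, not taken from the slide:

```python
def knapsack_value(W, weights, values):
    """Optimal value for a 0/1 knapsack of capacity W, memoized recursion."""
    n = len(weights)
    T = {}  # the dictionary T from the lecture: (u, i) -> optimal value

    def value(u, i):
        if i == 0 or u == 0:  # no items considered, or zero capacity
            return 0
        if (u, i) not in T:
            # second term of the recurrence: do not use item i - 1
            T[(u, i)] = value(u, i - 1)
            # first term, allowed only if item i - 1 fits into capacity u
            if weights[i - 1] <= u:
                T[(u, i)] = max(T[(u, i)],
                                value(u - weights[i - 1], i - 1) + values[i - 1])
        return T[(u, i)]

    return value(W, n)
```

For example, with capacity 10 and items of weights [6, 3, 4, 2] and values [30, 14, 16, 9], the optimum takes items 0 and 2 for a total value of 46.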

And as usual, it can be transformed into an iterative algorithm.

In this case, instead of using a dictionary for

storing all the intermediate solutions, we actually use a two-dimensional table.

So we use a two-dimensional table because each of our

subproblems is actually specified by two parameters, u and i.

So for this reason, we need a table which is indexed so that

rows are indexed by u, for example, and columns are indexed by i, okay?

Then we just fill in this table by gradually

increasing the parameters u and i.
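As a sketch of this table-filling version (again, the names are assumptions, not from the slide): rows are indexed by u from 0 to W, columns by i from 0 to n, and we fill the table by the same recurrence.

```python
def knapsack_iterative(W, weights, values):
    """Iterative 0/1 knapsack: value[u][i] is the optimum for capacity u
    using only items 0 .. i-1."""
    n = len(weights)
    value = [[0] * (n + 1) for _ in range(W + 1)]  # row 0 and column 0 stay 0
    for u in range(1, W + 1):
        for i in range(1, n + 1):
            value[u][i] = value[u][i - 1]          # item i - 1 not used
            if weights[i - 1] <= u:                # item i - 1 fits
                value[u][i] = max(value[u][i],
                                  value[u - weights[i - 1]][i - 1] + values[i - 1])
    return value[W][n]
```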

This gives us an algorithm whose running time is equal to big O(nW).

The reason is simple.

So in this case, we just have two nested for loops.

Or, to put it otherwise, we have nW subproblems, and for

each subproblem, it takes us a constant amount of work to compute its value.

The space requirement is the same, it is big O(nW),

in particular because we need to store a table of that size, right?

At the same time, it is not so difficult to improve

the space used by this algorithm by noting that

we do not need to store the whole table all the time.

Namely, when computing the values in the current column,

it is enough to know all the values from the previous column.

So instead of storing the whole table,

we can actually keep just the current column and the previous column, okay?
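A sketch of this space optimization, under the same naming assumptions as before: we iterate over items (the column index i) and keep only the previous column of the table.

```python
def knapsack_low_space(W, weights, values):
    """0/1 knapsack using O(W) space: only two columns of the table."""
    n = len(weights)
    prev = [0] * (W + 1)            # column i - 1 of the table
    for i in range(1, n + 1):
        cur = [0] * (W + 1)         # column i, computed from column i - 1 only
        for u in range(W + 1):
            cur[u] = prev[u]                        # item i - 1 not used
            if weights[i - 1] <= u:                 # item i - 1 fits
                cur[u] = max(cur[u], prev[u - weights[i - 1]] + values[i - 1])
        prev = cur
    return prev[W]
```

Note that with only two columns we can still compute the optimal value, but the reconstruction described next needs the full table.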

Then let's also discuss how to reconstruct the solution; namely,

what we've computed so far is only the optimal value.

Now, what we are going to do is to explain how to find

the optimal subset of items, right?

As usual, this can be done by reconstructing the solution

from right to left, that is, by unwinding the solution from right to left.

So we start just from the last cell of our table,

from the cell where u = W and i = n.

And then we follow the following simple rule.

So each time, if value(u, i) = value(u,

i - 1), then this basically means that our

optimal solution does not use item i - 1.

So in this case, we just update i to i - 1 and proceed.

In the opposite case, when value(u,

i) = value(u - w_{i-1}, i - 1) + v_{i-1},

we actually know that the (i-1)-st item is used, okay?

So we store this information somehow, and

we update u to u - w_{i-1} and i to i - 1, okay?

So this eventually leads us from the bottom right

corner of our table to the top left corner of our table, okay?
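A sketch of this unwinding rule in Python (again with assumed names): we first fill the full table, then walk back from the cell (W, n).

```python
def knapsack_with_items(W, weights, values):
    """Return (optimal value, sorted list of item indices used)."""
    n = len(weights)
    value = [[0] * (n + 1) for _ in range(W + 1)]
    for u in range(1, W + 1):
        for i in range(1, n + 1):
            value[u][i] = value[u][i - 1]
            if weights[i - 1] <= u:
                value[u][i] = max(value[u][i],
                                  value[u - weights[i - 1]][i - 1] + values[i - 1])
    # unwind from the bottom right cell (u = W, i = n) back to the top left
    u, i, taken = W, n, []
    while i > 0:
        if value[u][i] == value[u][i - 1]:
            i -= 1                      # item i - 1 is not used
        else:
            taken.append(i - 1)         # item i - 1 is used
            u -= weights[i - 1]
            i -= 1
    return value[W][n], sorted(taken)
```

For instance, with capacity 10, weights [6, 3, 4, 2], and values [30, 14, 16, 9], this reconstructs the optimal subset {0, 2} of value 46.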

Now, as a final remark,

let me also explain how it is possible to arrive at this

dynamic programming solution by optimizing the corresponding brute force solution.

So what is a brute force solution in this case?

Since we have only a single copy of each item, we can do the following.

We process items one by one, starting from item number 0 and

ending with item number n- 1.

And for each item, we decide whether we take it in our solution or not.

So what you see here is a recursive implementation of this idea.

As you see, we have two parameters here, items and last.

So items, just as in our previous brute force solution,

denotes the set of currently used items.

And last is actually the index of the last item that we have considered, okay?

So in the seventh line in this code, we actually

just decide not to use the item with index last.

So actually, we just increase last by 1 and make a recursive call, okay?

In the eighth line, if the item with index last + 1

actually can be added to our current solution, namely,

if adding this item does not make the total weight exceed W,

then we try to add it to our current set of items.

We increase last by 1, we make the recursive call.

And if it gives a better solution, then we actually store this result.
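The slide's implementation is not shown in this transcript, so here is a sketch of what a brute force recursion with the two parameters items and last might look like; the function and variable names are assumptions:

```python
def knapsack_brute_force(W, weights, values):
    """Brute force 0/1 knapsack: decide take/skip for each item in turn."""
    n = len(weights)

    def best(items, last):
        if last == n:                       # all items decided
            return sum(values[j] for j in items)
        # do not use the item with index `last`
        result = best(items, last + 1)
        # try to use it, if the total weight stays within W
        if sum(weights[j] for j in items) + weights[last] <= W:
            result = max(result, best(items | {last}, last + 1))
        return result

    return best(frozenset(), 0)
```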

So then, by running this algorithm,

you will find a solution for small datasets, but for

large datasets, it will not work, of course, because it is too slow.

It actually goes through all 2^n possible subsets.

However, it can be optimized as usual.

So it can be optimized just by noticing that the only things we need to

know about the current subset of items are actually

the index of the last item that we made a decision about, and

also the total weight of the current set of items.

So just by replacing the collection of items by its total weight,

we actually get back our previous solution based

on the dynamic programming technique.