0:09

Usually, when designing a dynamic programming algorithm, you start by analyzing the structure of an optimal solution for your computational problem.

You do this to come up with the right definition of a sub-problem that will allow you to express the solution for a sub-problem through solutions for smaller sub-sub-problems.

So, when you write down this recurrence relation, you can actually transform it into an iterative algorithm or a recursive algorithm.

The corresponding iterative algorithm just solves all sub-problems, going from smaller ones to larger ones.

And for this reason it is also sometimes called a bottom-up algorithm.

On the other hand, the recursive algorithm, to solve a sub-problem, makes recursive calls to smaller sub-sub-problems.

And for this reason it is sometimes called the top-down approach.

Well, if you implement a recursive algorithm straightforwardly, it might turn out to be very slow because it will recompute

Â 1:15

some values many, many, many times.

Like with the Fibonacci numbers, for example.

However, there is a simple trick, and it is called memoization, that allows you to avoid re-computing the same thing many times.

Namely, you can do the following: when solving a sub-problem, right after solving it you store its solution in a table, for example.

Â 1:41

And when you make a recursive call to solve some sub-problem, before trying to solve it, you check in the table whether its solution is already stored.

And if its solution is already in the table, which means that it was already computed, then you just return it immediately.

So this recursive call turns out to be just a table lookup.

So this is how a recursive algorithm with memoization works.
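To make this concrete, here is a minimal Python sketch of memoization (the language choice is an assumption; the lecture shows no code), using Fibonacci numbers, the classic example of a recursion that otherwise recomputes the same values over and over:

```python
# Memoization sketch: store each computed value in a table,
# and check the table before recomputing anything.
memo = {}

def fib(n):
    if n in memo:          # already computed? just a table lookup
        return memo[n]
    if n <= 1:
        result = n
    else:
        result = fib(n - 1) + fib(n - 2)   # recursive calls to smaller sub-problems
    memo[n] = result       # store the solution right after computing it
    return result
```

Without the `memo` table this recursion takes exponential time; with it, each value is computed only once.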

Let's see what a recursive algorithm with memoization for the Knapsack problem looks like.

For simplicity, let's assume that we're talking about the Knapsack with repetitions.

In this case, the sub-problem for a knapsack of size w is just the optimal value of a knapsack of total weight w.

We compute it as follows, by a recursive procedure.

First of all, we check whether its solution is already in a hash table.

We use the hash table to store pairs: for a weight w, we store the optimal value for w if it is already computed.

If it is already in the table, we return it immediately; otherwise, we compute it, making recursive calls to compute the values for the sub-problems on w minus w_i, okay?

And when the value is computed, we just store it in our hash table.

So this way, we use memoization, storing results in the hash table, to avoid recomputing the same thing once again.
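In Python, the memoized recursion just described might be sketched like this (the function and variable names are illustrative assumptions, not from the slides):

```python
def knapsack(W, weights, values, table=None):
    """Optimal value of a knapsack of total weight W, with repetitions allowed."""
    if table is None:
        table = {}                 # hash table: weight -> optimal value
    if W in table:                 # solution already stored? return it immediately
        return table[W]
    best = 0
    for w_i, v_i in zip(weights, values):
        if w_i <= W:
            # recursive call for the sub-problem on W - w_i
            best = max(best, knapsack(W - w_i, weights, values, table) + v_i)
    table[W] = best                # store the solution once it is computed
    return best
```

For example, `knapsack(10, [6, 3, 4, 2], [30, 14, 16, 9])` returns 48: one item of weight 6 plus two items of weight 2.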

So once again, an iterative algorithm solves all sub-problems, going from smaller ones to larger ones, right?

And eventually solves the initial problem.
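The iterative, bottom-up version for Knapsack with repetitions might be sketched like this (again a hedged Python sketch; the names are assumptions, not from the slides):

```python
def knapsack_bottom_up(W, weights, values):
    value = [0] * (W + 1)          # value[w] = optimal value for capacity w
    for w in range(1, W + 1):      # solve smaller sub-problems first
        for w_i, v_i in zip(weights, values):
            if w_i <= w:
                value[w] = max(value[w], value[w - w_i] + v_i)
    return value[W]                # the initial problem is solved last
```

Note that it unconditionally fills every entry of the table from 0 up to W.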

On the other hand, the recursive algorithm goes as follows.

It starts from the initial problem and makes recursive calls to smaller sub-sub-problems, right?

So in some sense, an iterative algorithm and a recursive algorithm are doing the same job, especially if we need to solve the whole range of sub-problems.

However, a recursive algorithm might turn out to be slightly slower because

Â 4:12

each recursive call needs to put its return address on the stack, for example.

So, the recursive algorithm has some overhead.

There are, however, cases when you do not need to solve all the sub-problems, and the Knapsack problem is a nice illustration of this situation.

So, imagine that we are given an input to the Knapsack problem where all the weights of the n items, together with the total weight of the knapsack, are divisible by 100, for example.

Â 4:41

This means that we are actually not interested in sub-problems where the weight of the knapsack is not divisible by 100. Why is that?

Well, just because for any subset of items, since all the item weights are divisible by 100, their total weight is also divisible by 100.

So in this case, an iterative algorithm will still solve the whole range of sub-problems.

A recursive algorithm, however, will make only those recursive calls that are actually needed to compute the final solution.

So, it will make recursive calls only to sub-problems whose weights are divisible by 100.
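To see the difference concretely, this hedged Python sketch counts how many distinct sub-problems the memoized recursion actually solves on such an input (the instance, with weights 300, 500, 700 and W = 10000, is an assumption for illustration):

```python
def solved_subproblems(W, weights):
    """Count the distinct sub-problems that the memoized recursion touches."""
    table = {}

    def max_items(w):              # toy objective: maximize the number of items
        if w in table:
            return table[w]
        best = 0
        for w_i in weights:
            if w_i <= w:
                best = max(best, max_items(w - w_i) + 1)
        table[w] = best
        return best

    max_items(W)
    return len(table)

# With all weights (and W) divisible by 100, the recursion touches at most
# W / 100 + 1 = 101 sub-problems, while the iterative table has W + 1 = 10001 entries.
count = solved_subproblems(10_000, [300, 500, 700])
```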

The final remark of this lesson is about the running time.

So if you remember, the running time of the algorithm that we designed earlier in this lesson was big-O of n multiplied by W.

And this running time looks polynomial; however, it is not.

And this is why: consider, for example, the following input.

Â 5:48

I mean, assume that the total weight of the knapsack is as shown on this slide.

This is a very huge number, roughly ten to the twentieth power; I mean, 20 digits in its decimal representation.

At the same time, the input size is really tiny, just 20 digits, right?

So this is not gigabytes of data, just 20 digits, but on this input our algorithm will already need to perform roughly ten to the twentieth operations.

This is really huge; for example, we can't do this on our laptops.

And this is because, to represent the value of W, we only need about log W digits.

Â 6:40

Okay, and if you express the running time in terms of n and log W, then you get the following expression: n multiplied by 2 to the log W, which means that our algorithm is in fact an exponential time algorithm.
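A quick back-of-the-envelope check of this claim, with W taken to be 10^20 and n = 20 as illustrative assumptions:

```python
n = 20
W = 10**20

input_digits = len(str(W))    # writing W down takes only ~21 decimal digits
input_bits = W.bit_length()   # about 67 bits, so log W is tiny
operations = n * W            # but the O(nW) algorithm needs ~2 * 10**21 steps
```

So the running time is exponential in the number of bits of W, even though it looks like a polynomial in the value of W.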

Â 7:09

Okay, and in fact, we believe that it is very difficult to construct an algorithm that solves this problem in truly polynomial time.

In particular, we will learn later in this presentation that this problem is considered to be so difficult that for solving the Knapsack problem in polynomial time, for example, one gets $1 million.
