0:00

So now we have talked about dynamic programming, and we showed how we can use it to solve the RNA secondary structure problem efficiently. And we said that it gives us an advantage over recursive algorithms, in terms of saving us from computing solutions to subproblems that we had already computed. With dynamic programming, we store these values. A recursive algorithm, when it makes a new call to a function on an instance of a problem, doesn't know whether the function has already been called on that instance or not. So it will keep redoing that computation.

I want to illustrate the difference between the two on a computation that involves counting the subsets of a set. Remember that we talked about this notation here, N choose K. Right? This notation denotes the number of ways of choosing a subset of size K from a set of size N.

We can think about this recursively, in the sense that suppose I have a set of N elements, and I am building a subset of size K from it. Now think about the element N, okay? When I'm building a subset, there are two cases for N.

Â 1:37

Either N is not in the subset, or N is in the subset, okay? If N is in the subset, then to continue building it I need to add K minus 1 more elements, taken from the set of elements 1 to N minus 1. Right? So I add K minus 1 elements from the set 1 to N minus 1. If I don't want to put N in the subset, okay, then I need to choose all K elements from the set 1 to N minus 1.

So this is how we would recursively count the subsets of size K from a set of N elements. For each element, either I put it in the subset or I don't. If I put it in, I repeat the process of choosing K minus 1 elements from the remaining N minus 1 elements. If I don't put it in, I still have to choose K elements, because so far I have chosen zero, but from the remaining set of elements. So in the first case, what is the number of ways of choosing K minus 1 elements? That set has N minus 1 elements, and I'm choosing K minus 1, so it is N minus 1 choose K minus 1. In the second case, I have N minus 1 elements, but I'm choosing K.

Â 2:54

So the total number of subsets is the number of subsets I can form this way, plus the number of subsets I can form that way. And this is why we have this very famous relationship: N choose K equals N minus 1 choose K, plus N minus 1 choose K minus 1. The first term corresponds to the cases where the last element we are looking at, N, is not part of the subset. The second term corresponds to the cases where the last element is part of the subset.

So we have this relationship. Now suppose I want to write a recursive algorithm to compute it. Of course, we have the base cases: choosing zero elements from a set of N, for any N greater than or equal to zero, has exactly one option, so N choose 0 is 1; and 0 choose K is 0 for any K greater than zero. Okay? So we have these two base cases. Now I can turn this into a recursive function, compute N choose K, that takes N and K as input.
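The recursive function just described can be sketched directly from the recurrence and the two base cases. A minimal sketch in Python (the function name `choose` is my own choice, not from the lecture):

```python
def choose(n, k):
    """Number of ways to choose a subset of size k from a set of size n,
    computed directly from C(n, k) = C(n-1, k) + C(n-1, k-1)."""
    if k == 0:   # base case: one way to choose zero elements, for any n >= 0
        return 1
    if n == 0:   # base case: no way to choose k > 0 elements from an empty set
        return 0
    # element n is either left out (choose k from 1..n-1)
    # or put in (choose k-1 from 1..n-1)
    return choose(n - 1, k) + choose(n - 1, k - 1)

print(choose(10, 5))  # 252
```

Note that this version recomputes the same pairs (n, k) many times, which is exactly the behavior examined next.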

Â 6:32

So when we call compute on 10 and 5, it calls the function on 9 and 5, plus the function on 9 and 4. Then when it comes to compute 9,5, it is going to compute 8,5 and 8,4. For 9,4, it is 8,4 and 8,3. Then it continues: 7,5, 7,4, 7,4, 7,3, 7,4, 7,3, 7,3, 7,2.

Each one of these pairs in parentheses is really a call to this function. So the function is being called over and over: when we call it on 10,5, it means two calls to the function with these parameters, 9,5 and 9,4. When it computes 9,5, it calls the function twice, with 8 comma 5 and 8 comma 4. And the next level again: 6,5, 6,4, 6,4, 6,3, 6,4, 6,3, 6,3, 6,2, 6,4, 6,3, 6,3, 6,2, 6,3, 6,2, 6,2, 6,1. And it continues, because it hasn't reached any of the base cases. Now, what is happening here? Let's look at 6,4: it appears four times, 6,4, 6,4, 6,4, 6,4. And we have 6,3, 6,3, 6,3, 6,3, 6,3, 6,3 -- six times.

So this recursive algorithm is going to call itself six times on the same parameters: 6,3, 6,3, 6,3, 6,3, 6,3, 6,3. Okay? Because when the recursive algorithm reaches this call and says, okay, now I need the value of 6,3, it's going to call the function. It doesn't know that somewhere else it has already computed it. And this example illustrates the problem, in this case, with recursive algorithms.

We have these overlapping subproblems. This subproblem, 6 choose 3, is overlapping: it's here, and here, and here, and here, and here, and here. What this means is that these six paths in the tree are overlapping at this subproblem. The recursive algorithm is not going to keep track of that. It doesn't know that they overlap. It's going to keep computing them.

Suppose that computing 6 choose 3 from scratch takes the function a million steps. Then it's going to do a million, plus a million, plus a million, plus a million, plus a million, plus a million steps.
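You can verify the six repeated calls by instrumenting the recursive function with a counter. This is my own sketch for checking the claim, not part of the lecture:

```python
from collections import Counter

calls = Counter()  # how many times each (n, k) pair is requested

def choose(n, k):
    calls[(n, k)] += 1
    if k == 0:   # base case: n choose 0 = 1
        return 1
    if n == 0:   # base case: 0 choose k = 0 for k > 0
        return 0
    return choose(n - 1, k) + choose(n - 1, k - 1)

choose(10, 5)
print(calls[(6, 3)])  # 6 -- the same subproblem is solved six times
print(calls[(6, 4)])  # 4
```

The count 6 matches the number of distinct paths from (10,5) down to (6,3) in the recursion tree.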

A dynamic programming algorithm knows this; it keeps track of it. It has one entry in the matrix, because it is building the matrix: if this dimension is K, it has 3 here; if this dimension is N, it has 6 here; and it stores this value. It might take a million steps to compute it. Actually it will take less, by the way, because there is still some overlapping inside that computation that is being exploited. But even if it takes a million steps, the next time the value is needed, the computer is going to take one step to look up that value, one step to look up that value, one step to look up that value, and so on.

Okay?

Â So this is the major difference between dynamic programming and recursion.

Notice that to have a dynamic programming algorithm, I had to have a recursive formula; for the RNA problem, I had OPT of i,j equals the max of OPT of i,j minus 1, and so on. When I have a recursive formula, the natural thing to think is: let me implement it recursively, let me have a recursive algorithm. But when the problem has this nature, where the subproblems are overlapping, a recursive implementation is going to be costly, because the subproblems are going to be computed over and over and over.

Every time it sees a subproblem, the function is going to compute it from scratch, whereas dynamic programming overcomes this problem by storing the values in a matrix. And by its nature of going from smaller subproblems to larger subproblems, by the time it comes to 9,5 it has already computed these values here. So when it comes to compute 9,5, all it needs to do is look up this value, look up this value, and add the two values. When it comes to compute 8,5, it gets this, it gets this, and adds them. Whereas when the recursive algorithm comes to compute 7,4, it's going to take all the time to compute 6,4, then all the time to compute 6,3, and then it will add the two numbers. Dynamic programming, when it wants to compute this, just looks up this value, looks up this value, and adds the two.
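The bottom-up table being described, where each entry for (n, k) is filled from the two already-computed entries for n minus 1, can be sketched as follows (a minimal version; the table layout is one natural choice, not prescribed by the lecture):

```python
def choose_dp(N, K):
    # table[n][k] will hold n choose k; built from smaller n to larger n
    table = [[0] * (K + 1) for _ in range(N + 1)]
    for n in range(N + 1):
        table[n][0] = 1          # base case: n choose 0 = 1
    for n in range(1, N + 1):
        for k in range(1, K + 1):
            # one lookup + one lookup + one addition -- no recursive calls
            table[n][k] = table[n - 1][k] + table[n - 1][k - 1]
    return table[N][K]

print(choose_dp(10, 5))  # 252
```

Each of the N times K entries is computed exactly once, so the running time is proportional to N times K, instead of the exponential blow-up of the plain recursion.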

Â 12:06

Okay?

So there is a lot in common between a recursive algorithm and a dynamic programming algorithm. At their core, there is some sort of recursive formula that corresponds to the optimization criterion, or the optimality criterion, the score that we are trying to optimize. The recursive algorithm implements that formula naturally, calling the function every time it needs a value. Dynamic programming, instead, stores the values of the subproblems, and every time it sees a subproblem again, it just looks up the value. Of course, if there is no overlapping, if I draw this tree and I'm not seeing the same subproblem twice, we are not going to gain anything from a dynamic programming implementation, because the power of dynamic programming comes when we have overlapping subproblems.

Okay?

So this is a case that illustrates why dynamic programming is a good algorithmic technique: we store the values and we retrieve them when we need them, whereas a recursive algorithm is going to keep calling the function and redoing the computation from scratch, even though we have done it many times. Okay? So keep this in mind when you are thinking about a recursive versus a dynamic programming algorithm. And I highly, highly encourage you to implement the dynamic programming algorithm for RNA secondary structure, and to implement a recursive version of it, and use a large sequence, a sequence with, let's say, 500 or 200 symbols in it, and run the two algorithms and see what you get.

Okay?

Again, you will notice that the recursive algorithm is going to keep computing the solution to the same subproblem over and over, whereas dynamic programming will never do that. It will compute the solution to a subproblem once, and then later it will just look it up in the matrix.
