The primary topics in this part of the specialization are: greedy algorithms (scheduling, minimum spanning trees, clustering, Huffman codes) and dynamic programming (knapsack, sequence alignment, optimal search trees).

From the course by Stanford University

Greedy Algorithms, Minimum Spanning Trees, and Dynamic Programming

From the lesson

Week 1

Two motivating applications; selected review; introduction to greedy algorithms; a scheduling application; Prim's MST algorithm.

- Tim Roughgarden, Professor

Computer Science

In this video we'll cover a second problem to whet your appetite for things to come, namely the problem of sequence alignment. This is a fundamental problem in computational genomics; if you take a class on the subject, it's very likely to occupy the very first couple of lectures.

In this problem you're given two strings over an alphabet, and no prizes for guessing which alphabet we're most likely to care about. Typically, these strings represent portions of one or more genomes. Just as a toy running example, you can imagine that the two strings we're given are A, G, G, G, C, T and A, G, G, C, A. Note that the two input strings do not necessarily need to be of the same length. Informally speaking, the goal of the sequence alignment problem is to figure out how similar the two input strings are. Obviously, I haven't told you what I mean by two strings being similar; that's something we'll develop over the next couple of slides.

Why might you want to solve this problem? Well, there are actually a lot of reasons; let me just give you two of many examples.

One is to conjecture the function of regions of a genome that you don't understand, let's say the human genome, from similar regions that exist in genomes that you do understand, or at least understand better, say the mouse genome. If you see a string that has a known function in the well-understood genome and you see something similar in the poorly understood genome, you might conjecture that it has the same or a similar function.

A totally different reason you might want to compare the genomes of two different species is to understand their evolutionary relationship, for example, whether one evolved directly from the other, and when. So maybe you have three species A, B, and C, and you're wondering whether B evolved from A and then C evolved from B, or whether B and C evolved independently from a common ancestor A. You might then take genome similarity as a measure of proximity in the evolutionary tree.

So, having motivated the informal version of the problem, let's work toward making it more formal. In particular, I owe you a discussion of what I mean by two strings being similar. To develop intuition for this, let's revisit the two strings that we introduced on the previous slide, A, G, G, G, C, T and A, G, G, C, A. Now, if we just eyeball these two strings, clearly they're not the same string, but we somehow feel like they're more similar than they are different. So where does that intuition come from? Well, one way to make it more precise is to notice that these two strings can be nicely aligned, in the following sense. Let's write down the longer string, A, G, G, G, C, T, and I'm going to write the shorter string under it, inserting a gap, a space, to make the two strings have the same length.

I'm going to put the space where there seems to be, quote unquote, a missing G. In what sense is this a nice alignment? Well, it's clearly not perfect: we don't get a character-by-character match of the two strings. But there are only two minor flaws: we had to insert a gap, and we have to suffer one mismatch in the final column. So this intuition motivates defining similarity between two strings with respect to their highest-quality alignment, their nicest alignment.

So we're getting closer to a formal problem statement, but it's still somewhat underdetermined.

Specifically, we need to make precise why we might prefer one alignment over another. For example, is it better to have three gaps and no mismatches, or is it better to have one gap and one mismatch? In this video, we're effectively going to punt on this question: we're going to assume this problem has already been solved experimentally, and that it's known and provided as part of the input which is more costly, gaps or the various types of mismatches.

So here, then, is the formal problem statement. In addition to the two strings over the alphabet A, C, G, T, we are provided, as part of the input, a non-negative number indicating the cost we incur in an alignment for each gap that we insert. Similarly, for each possible mismatch of two characters, for example mismatching an A and a T, we're given as part of the input a corresponding penalty. Given this input, the responsibility of a sequence alignment algorithm is to output the alignment that minimizes the sum of the penalties.

Another way to think of this output, the minimum-penalty alignment, is that we're trying to find, in effect, the minimum-cost explanation for how one of these strings could have turned into the other. We can think of a gap as undoing a deletion that occurred some time in the past, and we can think of a mismatch as representing a mutation.
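To make the objective concrete, here is a small sketch that scores a single candidate alignment. The penalty values are hypothetical placeholders (one unit per gap, two per mismatch); in the actual problem they are supplied as part of the input, possibly with a different penalty for each pair of characters.

```python
# Sketch: scoring one candidate alignment under hypothetical penalties.
# In the real problem, the gap cost and per-pair mismatch costs are inputs.

def alignment_penalty(top, bottom, gap=1, mismatch=2):
    """Total penalty of two equal-length aligned strings; '-' marks a gap."""
    assert len(top) == len(bottom)
    total = 0
    for a, b in zip(top, bottom):
        if a == '-' or b == '-':
            total += gap        # a gap was inserted in one of the strings
        elif a != b:
            total += mismatch   # the characters disagree: a mismatch
    return total

# The alignment from the lecture: AGGGCT over AGG-CA has one gap and
# one mismatch (T against A), for a total penalty of 1 + 2 = 3.
print(alignment_penalty("AGGGCT", "AGG-CA"))  # → 3
```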

This minimum possible total penalty, that is, the value of this optimal alignment, is famous and fundamental enough to have its own name, namely the Needleman-Wunsch score. This quantity is named after the two authors who proposed an efficient algorithm for computing the optimal alignment, in a paper that appeared way back in 1970 in the Journal of Molecular Biology. And now, at last, we have a formal definition of what it means for two strings to be similar: it means they have a small NW score, a score close to 0. So for example, if you have a database with a whole bunch of genome fragments, then according to this, you're going to define the most similar fragments to be those with the smallest NW score.

To bring the discussion back squarely into the land of algorithms, let me point out that this definition of genome similarity is intrinsically algorithmic. This definition would be totally useless unless there existed an efficient algorithm that, given two strings and the penalties, computes the best alignment between those two strings. If you couldn't compute the score, you would never use it as a measure of similarity. So this observation puts us under a lot of pressure to devise an efficient algorithm for finding the best alignment.

So how are we going to do that? Well, we can always fall back on brute-force search, where we iterate over all of the conceivable alignments of the two strings, compute the total penalty of each of those alignments, and remember the best one. Clearly, correctness is not going to be an issue for brute-force search; it's correct essentially by definition.
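As a sketch of what brute-force search could look like, with hypothetical unit penalties (gap = 1, mismatch = 2), here is a recursion that implicitly tries every alignment: at each step, either the two front characters are paired up, or one of them is aligned against a gap. Its running time grows exponentially with the string lengths.

```python
def best_alignment(x, y, gap=1, mismatch=2):
    """Brute force: try every alignment recursively. Exponential time."""
    if not x:
        return gap * len(y)   # align all remaining characters of y with gaps
    if not y:
        return gap * len(x)
    pair = 0 if x[0] == y[0] else mismatch
    return min(
        pair + best_alignment(x[1:], y[1:], gap, mismatch),  # pair x[0], y[0]
        gap + best_alignment(x[1:], y, gap, mismatch),       # x[0] vs a gap
        gap + best_alignment(x, y[1:], gap, mismatch),       # y[0] vs a gap
    )

# Fine for the lecture's six-character example, hopeless at length 500.
print(best_alignment("AGGGCT", "AGGCA"))  # → 3
```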

The issue is how long does it take? So let's ask a simpler question: let's just think about how many different alignments there are, how many possibilities we'd have to try. Let's imagine I gave you two strings of length 500, which is not an unreasonable length. Which of the following English phrases best describes the number of possibilities, the number of alignments of two strings with 500 characters each? I realize this is sort of a cheeky question, but I hope you can gather that what I was looking for was part D.

So, how big is each of these quantities, anyway?

Well, in a typical version of this class, you might have about 50,000 students enrolled or so; that's somewhere between 10^4 and 10^5. The number of people on Earth is roughly 7,000,000,000, so that's somewhere between 10^9 and 10^10. The most common estimate I see for the number of atoms in the known universe is 10^80. And believe it or not, the number of possible alignments of two strings of length 500 is even bigger than that. I'll leave it for you to convince yourself that the number of possibilities is at least 2 raised to the 500; the real number is actually noticeably bigger than that. And because 10 is at most 2^4, we can lower bound this number by 10^125, quite a bit bigger than the number of atoms in the universe.

The point, of course, is just that it's utterly absurd to envision implementing brute-force search even at a scale of a few hundred characters. And forgetting about these astronomical comparisons, even if you had string lengths much smaller, say a dozen or two, you'd never want to run brute-force search; it's just not going to work. And notice this is not the kind of problem that goes away if you wait a little while for Moore's law to help you. This is a fundamental limitation. It says you are never going to compute alignments of the strings that you care about unless you have a fast, clever algorithm.
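Incidentally, the magnitude claims above are easy to sanity-check with Python's arbitrary-precision integers:

```python
# 2^500 is a lower bound on the number of alignments of two length-500
# strings, and it already dwarfs the ~10^80 atoms in the known universe.
assert 2**500 > 10**125   # since 2^4 = 16 > 10, 2^500 = (2^4)^125 > 10^125
assert 10**125 > 10**80
print(len(str(2**500)))   # decimal digits in 2^500 → 151
```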

I'm happy to report that you will indeed learn such a fast and clever algorithm later on in this course. Even better, it's just going to be a straightforward instantiation of a much more general algorithm design paradigm: that of dynamic programming.
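The course develops that algorithm in detail later; purely as a preview sketch (not the course's own presentation, and again with hypothetical penalties gap = 1, mismatch = 2), the dynamic-programming idea is to fill in a table of optimal penalties for every pair of prefixes:

```python
def nw_score(x, y, gap=1, mismatch=2):
    """Minimum total alignment penalty in O(len(x) * len(y)) time."""
    m, n = len(x), len(y)
    # A[i][j] = optimal penalty for aligning the prefixes x[:i] and y[:j].
    A = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        A[i][0] = i * gap                  # x[:i] aligned against all gaps
    for j in range(1, n + 1):
        A[0][j] = j * gap                  # y[:j] aligned against all gaps
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            pair = 0 if x[i - 1] == y[j - 1] else mismatch
            A[i][j] = min(A[i - 1][j - 1] + pair,  # pair the last characters
                          A[i - 1][j] + gap,       # x[i-1] against a gap
                          A[i][j - 1] + gap)       # y[j-1] against a gap
    return A[m][n]

# The lecture's example: AGGGCT vs AGGCA scores 3 (one gap, one mismatch).
print(nw_score("AGGGCT", "AGGCA"))  # → 3
```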
