0:17

So what kind of data structures are we going to need, first of all? Our goal is to find the shortest path from s to every other vertex. The first observation is that there's going to be a shortest-paths tree solution. Even if two paths have the same length, there's still going to be such a solution. And there are a number of ways to convince yourself that there's going to be a tree. If you've got two paths to the same vertex, you can delete the last edge on one of them, and keep going until all that's left is a tree, for example. So what we want to do is compute a tree. Now, we've done that in several algorithms before, and the reasonable way to represent the shortest-paths tree is to use two vertex-indexed arrays.

1:11

The first, distTo[v], gives the length of the shortest path from s to that vertex. So in this case, we have the shortest-paths tree, and we'll keep the length of the shortest path from the source 0 to each vertex. So for 0-2, the length of that shortest path is 0.26. For 0-2-7 it's 0.60, and like that, because you go from 0 to 2, 0.26, and from 2 to 7, 0.34, and you get 0.60, and so forth.

And then the other thing, as we've done before, is to use a parent-link representation, where edgeTo[v] is the last edge that takes us to v. By following back through that array, we can get the path, as we've done several times before. If we want the path from 0 to 6, we go to edgeTo[6], and it says, well, the last thing we do is go from 3 to 6. Then we go to 3, and it says the way we got to 3 is from 7. We go to 7, and it says the way we got to 7 is from 2. We go to 2, and it says the way we got to 2 is from 0. And if we put all those things on a stack, then we can return them as an Iterable to the client, and that gives the edges on the shortest path. So those are the data structures that we're going to use for shortest paths.

[COUGH] This is actually the code that does what I just said. The client query "give me the distance" just takes v, indexes into the distTo array, and returns the distance. And if the client asks for the path, then we make a stack.

2:54

Then we use the variable e, which is a directed edge, and we start at edgeTo[v]. As long as it's not null, we push it onto the path, and then go to the edgeTo entry for the vertex that we came from. That gives the edge that took us to that vertex, and we keep going until we run out of vertices, which happens at the source, and then we return the path. And that's the Iterable that gives the client the path.

3:27

So that's the implementation of the two query methods.
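As a sketch, the two query methods just described might look like the following in code. The DirectedEdge class here is a minimal stand-in for this course's edge type, and the arrays are assumed to have been filled in by whichever shortest-paths algorithm we run:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Minimal weighted directed edge, a stand-in for the course's edge type.
class DirectedEdge {
    private final int v, w;
    private final double weight;
    DirectedEdge(int v, int w, double weight) { this.v = v; this.w = w; this.weight = weight; }
    int from()      { return v; }
    int to()        { return w; }
    double weight() { return weight; }
}

class ShortestPaths {
    double[] distTo;        // distTo[v] = length of shortest known path s->v
    DirectedEdge[] edgeTo;  // edgeTo[v] = last edge on shortest known path s->v

    ShortestPaths(int V, int s) {
        distTo = new double[V];
        edgeTo = new DirectedEdge[V];
        for (int v = 0; v < V; v++) distTo[v] = Double.POSITIVE_INFINITY;
        distTo[s] = 0.0;    // the path from s to itself has length 0
    }

    // Client query: length of the shortest path from s to v.
    double distTo(int v) { return distTo[v]; }

    // Client query: the edges on the shortest path from s to v, recovered
    // by following edgeTo[] back toward the source, pushing onto a stack.
    Iterable<DirectedEdge> pathTo(int v) {
        Deque<DirectedEdge> path = new ArrayDeque<>();
        for (DirectedEdge e = edgeTo[v]; e != null; e = edgeTo[e.from()])
            path.push(e);
        return path;
    }
}
```

Because each edge is pushed onto a stack as we walk backward, iterating over the result yields the edges in order from the source.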

So now, what we're going to want to talk about for the rest of the lecture is the algorithms that build these data structures. All of the algorithms that we look at are based on a concept called relaxation, edge relaxation. So recall our data structures, all right? We're going to talk about relaxing an edge from v to w, and we have an example here from v to w.

4:01

At the point that we're going to relax this edge, we'll [COUGH] have our data structures in process. We haven't seen all edges, we haven't seen all paths; we're in the intermediate stage of some algorithm. But we'll make sure that distTo[v], for every vertex, is the length of the shortest known path to that vertex, and that's going to be the same for w. The edges in edgeTo are the ones that give the paths we know from s to each vertex. So distTo[v] and distTo[w] contain the lengths of the shortest known paths, and edgeTo[w] is the last edge on the shortest known path from s to w.

4:54

And same way with edgeTo[v] of course.

Now, to relax along the edge from v to w essentially means: let's update the data structures to take that edge into account. And what happens in this case is that the edge gives a better way to get to w. So that's what relaxing is. That edge gives us a new shortest path, so we want to include it in the data structures. Since it gives a shorter path, we have to update both distTo[w] and edgeTo[w]. That is, we have a new way to get to w, so we throw away the old edge that came to w, and we have a new, shorter distance. Instead of 7.2 the old way, we have 4.4 the new way. So edge relaxation is a very natural operation: when we consider a new edge, does it give a new shortest path to that vertex or not? If it doesn't, we ignore it. If it does, we update the data structures to include that edge and forget about the old edge that took us to that vertex. That's edge relaxation.

And this is the easy implementation of edge relaxation in code. To relax an edge, we pull out its from and to vertices into v and w, according to our standard idiom. Then we just check whether the distance to w, the shortest path before, is bigger than the distance to v plus the weight of the edge that would take us from v to w. [COUGH] If it's bigger, that means we found a new path. If it's less than or equal, we ignore it. And if we found a new path, we have to update distTo[w] to be the new distance, distTo[v] plus the weight of the edge from v to w. Then we have to update edgeTo[w], throwing away the old version, and record that our new edge from v to w is on the best path to w as far as we know. So that's easy code for edge relaxation.
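That relaxation step can be sketched as a standalone method like this (the DirectedEdge class is again a minimal stand-in for the course's edge type):

```java
// Minimal weighted directed edge, a stand-in for the course's edge type.
class DirectedEdge {
    private final int v, w;
    private final double weight;
    DirectedEdge(int v, int w, double weight) { this.v = v; this.w = w; this.weight = weight; }
    int from()      { return v; }
    int to()        { return w; }
    double weight() { return weight; }
}

class Relaxation {
    // Relax edge e = v->w: if going through v gives a shorter path to w,
    // update distTo[w] and edgeTo[w]; otherwise ignore the edge.
    static void relax(double[] distTo, DirectedEdge[] edgeTo, DirectedEdge e) {
        int v = e.from(), w = e.to();
        if (distTo[w] > distTo[v] + e.weight()) {
            distTo[w] = distTo[v] + e.weight();  // new, shorter distance
            edgeTo[w] = e;                       // forget the old last edge
        }
    }
}
```

With the lecture's numbers, if distTo[v] is 3.1 and the edge weight is 1.3, relaxing replaces an old distTo[w] of 7.2 with 4.4.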

7:00

Now, [COUGH] we're going to use edge relaxation in a really fundamental way to compute shortest paths. But there's one other important idea, called the optimality conditions, and this is a way to know that we have shortest paths, that we have computed all the shortest paths. The shortest-paths optimality conditions are embodied in this proposition. We have an edge-weighted digraph and we have the distTo array; let's just talk about the distances, since the paths go with the distances. The key point is that the distTo array represents shortest-path distances from the given source s if and only if these two conditions hold. The first is that for every vertex v, distTo[v] is the length of some path from s to v, and our algorithms will always ensure that. The second is that for every edge v->w, the distTo[w] that we [COUGH] have stored is less than or equal to distTo[v] plus the weight of the edge from v to w. Those are the shortest-paths optimality conditions. Sometimes the two sides will be equal, for example when v->w is the last edge on the shortest path. And sometimes distTo[v] plus the weight will be greater, but it will never be smaller: there will never be a way to get to w that we haven't found. That's the shortest-paths optimality condition.
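The second condition is easy to check mechanically. As a sketch (assuming condition 1 already holds, i.e., each finite distTo[v] really is the length of some path), hypothetical checking code could scan all edges and see whether any would still relax:

```java
import java.util.List;

// Minimal weighted directed edge, a stand-in for the course's edge type.
class DirectedEdge {
    private final int v, w;
    private final double weight;
    DirectedEdge(int v, int w, double weight) { this.v = v; this.w = w; this.weight = weight; }
    int from()      { return v; }
    int to()        { return w; }
    double weight() { return weight; }
}

class OptimalityCheck {
    // Returns true if distTo[] satisfies the edge optimality condition:
    // distTo[s] == 0 and, for every edge v->w, distTo[w] <= distTo[v] + weight.
    // (Condition 1, that each distTo[v] is the length of some path, is assumed.)
    static boolean satisfiesConditions(List<DirectedEdge> edges, double[] distTo, int s) {
        if (distTo[s] != 0.0) return false;
        for (DirectedEdge e : edges)
            if (distTo[e.to()] > distTo[e.from()] + e.weight() + 1e-12)
                return false;  // this edge would still relax: not shortest paths
        return true;
    }
}
```

The small tolerance guards against floating-point rounding when comparing sums of edge weights.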

And again, just a quick proof. Although the best way to understand proofs is to read them slowly, not listen to them spoken quickly, I'll quickly outline it.

9:10

So first we want to prove that if we have shortest paths, then these conditions hold. To do that, we assume the contrary: suppose that distTo[w] is bigger than distTo[v] + e.weight for some edge e from v to w. Then that edge gives a path to w that's shorter than distTo[w], because distTo[v] is the length of a path to v and adding the edge's weight gives something smaller than distTo[w], and that's a contradiction to the idea that we have shortest paths. So there can't be any such edge, and the conditions are necessary. And then, sufficiency: suppose the conditions hold, and consider the shortest path from s to w.

9:58

Then [COUGH] we're assuming these conditions all hold, so for every edge on the path, the condition has to hold. Starting at the end, the last edge goes from v(k-1) to v(k), so distTo[v(k)] is less than or equal to distTo[v(k-1)] plus the weight of the kth edge.

10:29

And we just continue down that way, until we get to the first edge out of the source: distTo of the source plus the weight of the first edge is greater than or equal to the distance to the first vertex after the source. All those conditions hold. Then what we can do is just add up all those inequalities and simplify, and that shows that distTo[w] is less than or equal to the length of the shortest path. And it's got to be equal to the weight of the shortest path, because it's the weight of some path, and no path is shorter. So if those conditions hold, we have shortest paths.
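Written out, the telescoping argument just described looks like this, with the shortest path s = v_0 -> v_1 -> ... -> v_k = w and e_i denoting the edge from v_(i-1) to v_i:

```latex
\begin{aligned}
\mathrm{distTo}[v_k]     &\le \mathrm{distTo}[v_{k-1}] + \mathrm{weight}(e_k) \\
\mathrm{distTo}[v_{k-1}] &\le \mathrm{distTo}[v_{k-2}] + \mathrm{weight}(e_{k-1}) \\
                         &\;\;\vdots \\
\mathrm{distTo}[v_1]     &\le \mathrm{distTo}[v_0] + \mathrm{weight}(e_1)
\end{aligned}
```

Adding these inequalities and cancelling the common distTo terms, with distTo[v_0] = distTo[s] = 0, gives distTo[w] <= weight(e_1) + ... + weight(e_k), the length of the shortest path; since distTo[w] is also the length of some path, the two must be equal.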

So the point of this proof, which is slightly complicated but not too bad, and actually quite natural, is that all we have to know,

11:29

to check that we have computed shortest paths, is that these optimality conditions hold. To prove that an algorithm computes shortest paths, we just have to prove that it ends up with the optimality conditions in force. And that's what we'll be doing with the optimality conditions: really, it's just checking that there's no edge that we missed.

[COUGH] Okay, so with that idea, there is in fact a very easy-to-state generic algorithm that is going to compute shortest paths, and it's very simple. We start with the distance to the source being 0 and the distance to every other vertex being infinity. And all we do is repeat, until the optimality conditions are satisfied: relax any edge. We just go ahead and relax edges until the optimality conditions are satisfied. So that's a very general algorithm. We don't say how to decide which edge to relax, or how to know when the optimality conditions are satisfied. But still, it's quite an amazingly simple generic algorithm.
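One naive way to instantiate the generic algorithm, just as a sketch: keep relaxing every edge in passes until a full pass changes nothing, at which point no edge violates the optimality conditions. (This particular edge-selection rule is essentially what the Bellman-Ford algorithm does, and it assumes there are no negative cycles; the DirectedEdge class is again a minimal stand-in.)

```java
import java.util.Arrays;
import java.util.List;

// Minimal weighted directed edge, a stand-in for the course's edge type.
class DirectedEdge {
    private final int v, w;
    private final double weight;
    DirectedEdge(int v, int w, double weight) { this.v = v; this.w = w; this.weight = weight; }
    int from()      { return v; }
    int to()        { return w; }
    double weight() { return weight; }
}

class GenericSP {
    // Generic algorithm: distTo[s] = 0, all others infinity; then relax
    // edges until no relaxation succeeds (optimality conditions hold).
    static double[] shortestPathDistances(int V, List<DirectedEdge> edges, int s) {
        double[] distTo = new double[V];
        DirectedEdge[] edgeTo = new DirectedEdge[V];
        Arrays.fill(distTo, Double.POSITIVE_INFINITY);
        distTo[s] = 0.0;
        boolean changed = true;
        while (changed) {            // repeat until no edge relaxes
            changed = false;
            for (DirectedEdge e : edges) {
                int v = e.from(), w = e.to();
                if (distTo[w] > distTo[v] + e.weight()) {
                    distTo[w] = distTo[v] + e.weight();
                    edgeTo[w] = e;
                    changed = true;
                }
            }
        }
        return distTo;
    }
}
```

The algorithms that follow differ only in how cleverly they choose the next edge to relax.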

12:44

So how can we show that it computes the SPT? Well, it's pretty simple. Throughout the algorithm, because of the way we relax edges, we're maintaining that distTo[v] is the length of a simple path from s to v, and that edgeTo[v] is the last edge on that path. That's what relaxation does, for any vertex that we touch.

13:19

And we've assumed that there's a way to get to every vertex, and there's only a finite number of paths, so each distTo value can decrease at most a finite number of times. So the algorithm is going to terminate; it's as simple as that.

13:39

Again, this is a little bit of a mind-blowing concept, but we'll leave that for more careful study, and for now just realize that all we have to do is relax along edges. What we're going to do now is look at different ways of figuring out how to choose which edge to relax. The first algorithm that we'll look at is a classic known as Dijkstra's algorithm, one of the most famous of all algorithms.

14:16

Then we'll look at an algorithm that works even with negative weights, as long as there aren't any directed cycles. And then we'll look at an even older algorithm than Dijkstra's, the Bellman-Ford algorithm, which can solve the shortest-paths problem in graphs with negative weights, as long as there are no negative cycles.