In the last video, we talked about an edge list implementation of a graph. Here we're going to do an entirely different implementation of a graph using an adjacency matrix. Let's look at how this works. Using the same simple graph we saw in the last video, we're going to maintain the same kind of data structures. We're going to maintain a vertex list, and this vertex list is going to contain all of the vertices in our data structure: u, v, w, and z, just the same. We'll maintain this as a hash table to get that O(1) access time. We're also still going to maintain our edge list. But that's not going to be it: in addition to maintaining the edge list, we're also going to be maintaining an adjacency matrix. Let's fill the adjacency matrix first. The adjacency matrix stores a false value if there does not exist an edge between two vertices. So between u and u, there does not exist an edge, because there are no self edges; we know that this is a false value, or zero. Between u and v, we know that edge does exist, so we can say that's a true value, or one. Between u and w, we know that edge exists as well. Between u and z, we know that edge does not exist, so it's zero. We continue this: v and v has no edge, v and w does contain an edge, v and z does not contain an edge, and finally w and z does contain the edge d. We only need to fill in the upper triangle of this matrix: because the graph here is undirected, we could replicate all of this down into the lower triangle, but we won't worry about doing that, since all we need to do is represent each pair once, here in the upper triangle, for an undirected graph. Now, while we could represent everything with zeros and ones, I think we can do better than that in our adjacency matrix. Zero happens to be the exact same value as null, but instead of just storing a one, we can link directly to the actual edge in our edge list structure.
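The structures described above can be sketched in a few lines of Python. This is a minimal illustration, not the course's actual code: the class and variable names (`Edge`, `vertices`, `matrix`) are my own, and strings stand in for full vertex objects. The key idea is that each matrix cell holds either `None` (no edge) or a reference to the edge itself, and only the upper triangle is filled for an undirected graph.

```python
# Sketch of the three structures: a vertex list (a dict, for O(1) lookup
# by name), an edge list, and an adjacency matrix whose cells hold either
# None (no edge) or a direct reference to the edge object.

class Edge:
    def __init__(self, name, u, v):
        self.name, self.u, self.v = name, u, v

# The example graph from the video: vertices u, v, w, z and edges a, b, c, d.
vertices = {name: i for i, name in enumerate(["u", "v", "w", "z"])}
edges = [Edge("a", "u", "v"), Edge("c", "u", "w"),
         Edge("b", "v", "w"), Edge("d", "w", "z")]

n = len(vertices)
matrix = [[None] * n for _ in range(n)]
for e in edges:
    # Store each undirected edge once, in the upper triangle (i < j).
    i, j = sorted((vertices[e.u], vertices[e.v]))
    matrix[i][j] = e          # the cell points at the edge, not just True
```

After this runs, `matrix[0][1]` is the edge a between u and v, while `matrix[0][3]` is `None` because u and z are not adjacent.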
So let's see how this works. For the pair u, v, we know u and v are connected by the edge a. So instead of having a one here, we're going to have a pointer that points directly into my edge list, to the entry for a, which is the edge between u and v. Likewise, the cell for u, w points to the edge c that connects u and w. For v and w, it's the edge b: I have a pointer there to the entry that contains the connection between the vertices v and w. And finally, for w and z, it's going to be edge d connecting w and z. So here we have an adjacency matrix that doesn't just record whether or not there's an edge between two vertices; we have a data structure that actually links us into the edge list itself. What we can see is that some of these operations are going to become much, much faster than before, because we can find out what all the incident edges are far faster than we could by going through the entire edge list. Let's talk about the running times. Inserting a vertex still requires us to insert the vertex into our vertex list, and that's an O(1) insert. But it also requires us to add another row and column to the matrix. Because of that, we need to write out a bunch of data corresponding to the number of vertices in the graph. So this becomes an O(n) operation, because we have to add a row and a column relating the new vertex to every other existing vertex in the table. Adding a new vertex now depends on the total number of vertices already in our graph. The same argument in reverse can be made for removeVertex, where we need to remove an element from the graph and shrink the table if necessary. Now, checking whether two vertices v1 and v2 are adjacent is simply a lookup into this table, which is extremely quick. If we know we have vertex v and vertex w, indexing into this table is an extremely fast operation; in fact, it's just looking at data in an array.
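The two operations just discussed can be sketched as follows. This is an illustrative sketch, not the lecture's implementation: `are_adjacent` is a single O(1) array lookup, while `insert_vertex` must grow the matrix by one row and one column and is therefore O(n). The function names and the use of strings as stand-in edge values are my own choices.

```python
def are_adjacent(matrix, vertices, a, b):
    """O(1): one hash lookup per vertex, then one array access."""
    i, j = sorted((vertices[a], vertices[b]))   # upper-triangle storage
    return matrix[i][j] is not None

def insert_vertex(matrix, vertices, name):
    """O(n): the hash-table insert is O(1), but every existing row must
    grow by one cell, and one whole new row must be written out."""
    vertices[name] = len(vertices)
    for row in matrix:
        row.append(None)                        # new column
    matrix.append([None] * len(vertices))       # new row

# Tiny example: u and v connected by edge "a", then add a new vertex w.
vertices = {"u": 0, "v": 1}
matrix = [[None, "a"], [None, None]]            # "a" stands in for an edge
insert_vertex(matrix, vertices, "w")
print(are_adjacent(matrix, vertices, "u", "v"))  # True
print(are_adjacent(matrix, vertices, "u", "w"))  # False
```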
So, to find out whether two nodes are adjacent, we can now do that in constant time thanks to the data structure we maintain: areAdjacent runs in O(1) time. Finally, the very last thing to mention is incidentEdges, which returns the set of all edges incident to a particular vertex. Here, we can simply look at an entire row together with an entire column in our data structure. Let's take a look. The entire row for v, together with the entire column for v, records every edge at v: every time there's a true value, we know about an incident edge. So here, we know that v and u are adjacent, as well as v and w. By looking at those two true values, we're able to get all of the incident edges, and doing that requires looking through the set of all vertices twice. So this is 2n checks, which is O(n). What we have with an adjacency matrix implementation is an implementation where we've optimized checking whether or not two vertices are adjacent: we can do that check in O(1) time. This is an amazing result if the algorithm we care about needs to know whether or not two nodes are connected. The cost of this is additional runtime when we insert or remove a vertex, so we see our first trade-off between the different implementations we have. Different applications are going to want different implementations of the graph. In the next video, we'll talk about a third and final implementation of a graph that has another set of trade-offs, and then we'll explore an analysis of all three of these implementations at once. So, I'll see you then.