In the last couple of lectures, we've been talking about algorithm analysis and big-O notation, and that probably still feels pretty abstract to you. So in this lecture, we'll go through some concrete examples.

The big idea with our algorithm analysis is this: let's say we actually came up with a math equation or function for how long our particular algorithm would take to execute, like the example I gave you. The first thing we do is throw away all the plain numbers; all we keep is the terms that have an n in them. Once we've done that, we can just pick the highest-order n term, the term that will dominate the rest. Since n squared grows much faster than n log n, which in turn grows faster than n, we can throw everything away except for the n squared in the example. So that example was order n squared.

Okay, so some examples. We'll start at the lowest-cost, or fastest, complexity, and that's constant time. We write it as order one. Now, that doesn't mean no time, and it doesn't mean instantaneous; it just means that the time it takes has no dependence whatsoever on n. One good example of this is accessing a specific element in an array. It doesn't matter if our array has one thing in it or 20 thousand things in it, accessing a particular element takes constant time, because what we do is take the index, multiply it by how large each element is, and add that to the base address of the array. So it doesn't matter how big the array is: we do a multiplication, we do an addition, and there we are with the address of the element we're trying to access. That's a constant-time operation, where the time it takes is not dependent on n.

The next one up (and there are actually some more obscure complexities in between the ones we're talking about, but I'm covering the most common ones) is order log n, and that's called logarithmic.
This log, by the way, is log base two; it's not log base ten or natural log or anything like that. A good example of a logarithmic algorithm is binary search. Let's walk through a binary search example so you can see why it's order log n.

Here's our binary search example. We're going to look for the number three in the list of numbers that's right next to it. There are 11 items in that list; we'll come back to that in a few minutes. Now, it's important to note that the list has to be sorted, in either ascending or descending order. We can do a binary search with either sort order, but most people sort lists in ascending order, so that's what we did here.

So we check the middle of the list, and it's 21. Three is less than 21, and because the list is sorted, we know that if three appears in the list, it's going to be to the left of 21. So we split the list in half, and we now look for three in the list 1, 3, 5, 12, 17. We check the middle of that list, and that's five. Three is less than five, so we look for three in the list 1, 3. Now we check the middle of that list, and you may well say, well, there is no middle; there are only two values. So we have to decide, consistently, as we run our binary search algorithm, whether to round down or round up. I'm going to round down, because when we implement this, that's how it will work. So now we compare three to one, and three is greater than one. So we look for three in the one-element list containing 3, and we found it. Of course, if the number we were searching for had been four, we wouldn't have found it, but we would've been done anyway, because we got down to a list of one and that one element was three, not four. But in this case we found it, and that's great.
Because we split the list in half every single time, this is an order log n algorithm (log base two, again, not log base ten or natural log or anything like that). For this list of 11 items, log base two of 11 is slightly higher than three, so we round up and say four, and it turns out that we did four comparisons here: we checked against 21, and 5, and 1, and 3. So as you can see, the number of comparisons we need to do when we do a binary search is order log n.

Now, it's important to note that because each comparison takes a certain amount of time, and because depending on how the rounding works we might end up doing one additional comparison, the actual equation for this might be something like 3 log n plus 3. But the important thing to know is that we don't care about constants: if we can put an upper bound on top of our function like 20 log n, then we don't care that we actually had 3 log n plus 3; it's order log n. That's the great thing about algorithm analysis: any actual numbers we end up with, we can just throw away when we convert to big-O notation. So this is an order log n algorithm, even though if we were precisely trying to calculate how many clock cycles it would take, it might be 3 log n plus 3, for example. We don't care, because we throw away the numbers.

We have three more examples to at least briefly discuss. The next one is order n, which is linear in n, and the best example of this is doing a linear search. Let's say you were looking for a particular card in a deck of cards, and you didn't have the cards in any order. To look for a particular card, you have to look at the first one, and if it's not the card, look at the second one, and if it's not the card, look at the third one, and so on. In the worst case (remember, we're doing worst-case analysis here), we might find the card we're looking for at the very end of the deck, or we might not find it at all.
In both of those cases, we had to look at n cards to do our linear search, where n is defined as the number of cards in the deck we're searching. So linear search is an order n operation.

The next one is something called log-linear: order n log n. We see this most commonly in various sorting algorithms. There's something called merge sort, and while there are many sorting algorithms that are n log n, merge sort is one such algorithm that actually takes n times log n time. The interesting thing about it is that we already saw log n with binary search, and we saw that it was log n because we split something in half each time. The way merge sort works is that we split something in half each time, and that's where the log n comes from. But then we have to actually merge those sorted sub-lists back together, and each round of merging takes n time. So that's why we get n log n.

Finally, quadratic means order n squared. We see order n squared in lots of basic sorting algorithms like bubble sort, insertion sort, and selection sort. We're actually not covering sorting algorithms in this class, but you will probably come across sorting algorithms somewhere in your game programming career.

These last three are the highest complexities we'll see with the practical things we'll do in our programming. There are other algorithms that are even worse, but we won't worry about those. Thank goodness. The number of loop iterations tends to be the key thing in our algorithm analysis, and we'll see that a lot as we continue with our algorithm analysis work.

To recap, in this lecture we saw some algorithm analysis examples that actually used big-O notation. From this point forward, we'll do our algorithm analysis on C source code, because C is just a language that we use to express our algorithms. Those step-by-step plans for solving a problem are the code we write.
So we'll do our analysis based on the actual source code that implements the algorithms that we want to analyze.