In the previous lecture, we learned what algorithm analysis is and why we care. In this lecture, we'll learn about something called Big O notation.

So what problem are we trying to solve? We want to understand the complexity of an algorithm: how long it takes, or how much memory it takes, to execute. What do we do about that? Well, we do algorithm analysis. But at this point, all we know is that we could say "wow, it's really fast" or "gee, it's really slow." That's not precise enough for a science like computer science, so we need to figure out a way to specify what we mean by the complexity of a particular algorithm.

What we do is find a function that serves as an upper bound on the performance of a particular algorithm. This is usually expressed in terms of something like the number of things in a list, which we call n, so we try to find a function of n that is always above how long our particular algorithm takes to run. As I mentioned in the previous lecture and as you see above, sometimes we evaluate memory, but usually we're talking about time.

So now we know we need a function that serves as the upper bound, and there ought to be a notation we can use to specify that function. Of course there is, and that notation is called Big O notation.

Here are a number of functions that could represent the upper bound on the time it takes a particular algorithm to run. The horizontal axis is n, where n is the number of data points we're trying to process with our algorithm, and the vertical axis is time. This could certainly be space if we were doing algorithm analysis for space requirements, but let's focus on time until I say otherwise. As you can see, we're using Big O notation, so order 1 means constant time, a constant-time algorithm.
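To make these classes concrete before we look at the plot in detail, here is a small sketch, in Python, of one algorithm from each of three of the classes named above. This code is not from the lecture; the function names are mine, chosen for illustration.

```python
def constant_lookup(items, i):
    """Order 1: indexing a list takes the same time no matter how long the list is."""
    return items[i]

def binary_search(sorted_items, target):
    """Order log n: each comparison cuts the remaining search range in half,
    so a list of n items needs at most about log base two of n comparisons."""
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid
        elif sorted_items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1  # not found

def linear_max(items):
    """Order n: finding the largest value means looking at every element once."""
    best = items[0]
    for x in items[1:]:
        if x > best:
            best = x
    return best
```

For example, `binary_search([1, 3, 5, 7, 9], 7)` returns index 3 after only a couple of comparisons, while `linear_max` on the same list must visit all five elements.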
So there's a line there set at three for time. I didn't put any markings on the vertical axis, but I'll tell you that it's a constant of three: three clock cycles, three seconds, three whatever, it doesn't matter. Some things take the same amount of time no matter how large the dataset is.

We also have order log n, and that is log base two of n, not log base ten of n, or the natural log, or anything like that. It's log base two of n, and that's the green line. Then we have the line marked order n, which is a linear function. Then we have order n log n, the teal-colored line toward the left, and that is called log-linear. Finally we have a quadratic function, the black curve that climbs pretty steeply, marked order n squared.

What we're going to figure out, with some approximation, because we're going to ignore lots of details and only pay attention to the contribution of n to the complexity of our algorithm, is whether the particular algorithm we're looking at always falls below that order n line, for example. If it does, then it's an order n algorithm, unless it also falls below the order log n line, in which case it's an order log n algorithm. So we find the tightest bound among 1, log n, n, n log n, and n squared to describe how complex our algorithm is. Don't worry, we'll do some examples.

To recap, in this lecture, we learned about Big O notation.
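As a quick numerical supplement to the plot described above, here is a short Python sketch (not from the lecture) that evaluates each bound function at a few values of n. It shows why the curves separate the way they do: at every n shown, 1 stays below log base two of n, which stays below n, then n log n, then n squared.

```python
import math

# Print each candidate bound at growing n so the ordering on the
# plot is visible as numbers rather than curves.
print(f"{'n':>6} {'log2(n)':>9} {'n*log2(n)':>11} {'n^2':>9}")
for n in (8, 64, 1024):
    print(f"{n:>6} {math.log2(n):>9.1f} {n * math.log2(n):>11.1f} {n ** 2:>9}")
```

At n = 1024, for instance, log base two of n is only 10, n log n is about 10,240, and n squared is already 1,048,576, which is why an order n squared algorithm becomes painful so much sooner than the others.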