In this video, we'll learn how to use big O bounds in practice to know whether our solution is fast enough or not.

The general steps in solving a problem look as follows. First, you invent some solution. Then you check that it's correct, because there is no sense in estimating the performance of an algorithm that is just wrong. If it's correct, you obtain a big O bound, as we did in the previous video. The good thing is that, in fact, we only need the general structure of the code. So we don't need to implement anything at this stage; we can just keep the most important parts in our heads, or write them down on paper. When we have got the bound, we need to somehow decide whether our solution passes the time limit or not. If yes, then we can just implement the solution. But if no, we need to think of some other solution, or maybe obtain a better big O bound on the original one.

For now, let's concentrate on the first step: how to know whether a solution is fast enough, given its big O bound. The big O bound is a function of some parameters, like big O of n cubed, or big O of n times m. And these parameters are limited by the problem statement. So you can just take those limits and plug them into the big O bound, and this will give us some number. For big O of n cubed, with n no greater than 100, you will get 1 million. And for big O of n times m, with n no greater than 10 to the 4th and m no greater than 10 to the 6th, you will get 10 to the 10th.

Of course, the actual number of operations is this number multiplied by some constant hidden inside the big O notation. But we assume that the constant doesn't vary greatly from solution to solution, so for a coarse approximation, the number we've got is enough. We then just compare this number with the number of operations that can be run in a second. For a fast language like C++ or Java, this is about several hundred million simple operations per second.
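The plug-the-limits-into-the-bound step can be sketched as a tiny helper. This is an illustration only; the function name and the speed figure are assumptions based on the rough numbers mentioned above, not part of any real judging system.

```python
# Rough feasibility check: evaluate a big O bound at the parameter limits
# from the problem statement and compare with the machine's speed.
# FAST_OPS_PER_SEC is an assumed ballpark figure ("several hundred million
# simple operations per second" for C++/Java), not an exact constant.

FAST_OPS_PER_SEC = 300_000_000

def estimated_ops(bound, **limits):
    """Evaluate a big O bound (given as a function) at the parameter limits."""
    return bound(**limits)

# O(n^3) with n <= 100 gives 10^6 operations -- well under a second.
ops1 = estimated_ops(lambda n: n ** 3, n=100)
print(ops1, ops1 < FAST_OPS_PER_SEC)            # 1000000 True

# O(n*m) with n <= 10^4 and m <= 10^6 gives 10^10 -- hopeless.
ops2 = estimated_ops(lambda n, m: n * m, n=10**4, m=10**6)
print(ops2, ops2 < FAST_OPS_PER_SEC)            # 10000000000 False
```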
For Python, the threshold is lower: about 10 million simple operations per second. Of course, we should also consider the constant hidden inside the big O, but you can just assume that on average it is something like several dozen.

So after we get our number, we can check whether our solution is fast enough or not. If it's 1 million, like in the first example, then it will most likely pass. Even if you multiply it by a large constant like 100, it is still less than the number of operations per second in a fast language. What if this number is 10 to the 10th? Then there's no way it will pass, because even if the constant were only 1, which is nearly impossible in practice, you would still be above the number of operations per second. So we've got a way to coarsely estimate whether our solution will pass or not, and in practice this simple estimate nearly always works.

But what to do if the estimate is in between? That is, our number is not small enough to surely pass and not large enough to surely fail, like 10 to the 8th for the fast languages. First, a bit more about big O notation in general. Asymptotically, big O of n is better than big O of n squared. But that, in fact, doesn't consider constants at all. What if the first solution takes 1 million times n operations, the second takes 10 times n squared operations, and n is no greater than 100? In the worst case, the first is a hundred million operations, and the second is only 100,000 operations. So the second is much better.

Also, when you make estimates in practice, it's important to remember that you're interested in the number of operations, not in the formal complexity. So it makes sense to count such factors even if they don't depend on the input. For example, suppose you have two loops: the outer iterates over n items, and the inner iterates over all letters of the alphabet. Formally, this is O(n), as the alphabet always has 26 letters.
But for the number of operations, you should count not a constant times n operations, but a constant times 26 times n operations. That way, your prediction will be more accurate.

When considering constants, it's useful to know that not all operations are equally fast. Some of them are fast, like arithmetic operations with integers or logical operations. But some are a bit slower, like taking remainders, making recursive calls, appending to lists, or using math functions like square root. And particularly slow are input-output operations. In fact, some ways to do input-output may be not just slow, but super slow. So later in the course, we'll cover how to efficiently perform input-output operations in different languages.

Now let's consider the situation where we've estimated the general number of operations from the big O bound, and are still in doubt whether our solution will pass or not. Here it may be useful to think about the constant hidden behind the big O: is the constant big or small? If you have a few light operations, then you might assume that even a larger bound is fast enough, like 50 million. But if there are many operations inside the constant, or if they are heavy, you could receive a time limit exceeded verdict even with smaller values, like 10 million. Here, consider only the operations that actually contribute to the running time. For example, taking a square root is much slower than summing integers. But if your program does one square root and many additions, then almost all the time is spent on additions, and having a single square root doesn't really change anything.

In the next video, we'll talk more about how to identify frequent operations and how to make solutions faster.
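As a closing illustration, the point about one heavy operation among many light ones can be checked directly with a quick measurement. This is a sketch, not a rigorous benchmark; the loop size and repetition count are arbitrary choices.

```python
# One square root among many additions: the additions dominate the
# running time, so the single sqrt is invisible in the measurement.
import math
import timeit

def many_additions(n):
    total = 0
    for i in range(n):
        total += i
    return total

def many_additions_one_sqrt(n):
    total = 0
    for i in range(n):
        total += i
    return math.sqrt(total)      # a single heavy operation at the end

t1 = timeit.timeit(lambda: many_additions(10**5), number=10)
t2 = timeit.timeit(lambda: many_additions_one_sqrt(10**5), number=10)
print(f"additions only: {t1:.3f}s, with one sqrt: {t2:.3f}s")
```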