In this video, we will talk about simplifying code. Most often, when we try to solve a task, we want to come up with at least something. Until we have a solution, we try to find any solution at all, and the first solution we come up with is not always the simplest one. That's why it is always worth thinking about whether your solution can be simplified. But at some point, you stop thinking about the solution and start thinking about its code and writing it. And certainly, the very first code you write is not always the easiest and the cleanest one. If your code works and passes all tests, nothing else matters and you should move on to other tasks. But if not, thinking about simplification is a good idea.

Consider a simple example. Suppose that in some task your goal is to do something with some integers, for example, sum them up or print them, and you've thought about the task in this way: I am interested in several numbers, so a loop is needed here. I don't need numbers that are not less than 10, so I can stop when I see 10. I should skip numbers that are less than five. This illustrates a consistent approach to solving the task: you try to understand what is required in the task and arrive at a solution. This logic is acceptable. However, once you understand the whole solution, you can make both it and its code simpler. In our example, you can write a usual for loop. Both codes do the same thing, but the second one is easier to analyze and debug. So we see that the same idea can be implemented in different ways. It is important not to tie yourself to already written code, but to change the structure of the code when it is reasonable. In our example, we didn't change the idea of the code. Now, imagine that you are in another situation: one of your functions doesn't work and it is hard to debug, but at the same time, you know that you can fully rewrite the function and it will become clearer. So why not do it?
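The two loop structures described above can be sketched like this (a minimal illustration, assuming the "do something" is collecting the numbers; the function names are hypothetical):

```python
def step_by_step():
    # First version: follows the original reasoning literally.
    # Loop over all numbers, stop when we see 10, skip anything below 5.
    result = []
    i = 0
    while True:
        if i >= 10:       # stop when we see 10
            break
        if i < 5:         # skip numbers that are less than 5
            i += 1
            continue
        result.append(i)  # "do something" with the number
        i += 1
    return result

def usual_for_loop():
    # Second version: the same idea as a usual for loop.
    result = []
    for i in range(5, 10):
        result.append(i)
    return result

print(step_by_step())    # [5, 6, 7, 8, 9]
print(usual_for_loop())  # [5, 6, 7, 8, 9]
```

Both versions produce the same numbers, but in the second one there is simply less state to track while debugging: no manual increment, no skip branch, no early break.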
Competitive programmers sometimes rewrite their whole solution because they realize that they are totally stuck in it. Thus, we exploit a simple truth: deleting all code removes all bugs in it. You may find it strange, but the shortest code is not necessarily the clearest code.

Consider a simple task. There are n floors in a house and n apartments on each floor; two of them are corner apartments, and the other n minus two are usual apartments. There are three windows in each corner apartment and two windows in each of the other apartments. Your goal is to compute the total number of windows in the house. So, in this task, it is required to find only one formula. You can come up with the following formula: n, the number of floors, times n minus two, the number of usual apartments on each floor, times two, the number of windows in each, plus two, the number of corner apartments on each floor, times three, the number of windows in each. This formula obviously gives the correct answer, but we can also remove all parentheses and simplify it. Then we get two times the square of n plus two times n, and the formula looks more compact. However, when we see the first formula in the code, it is easier to understand where it came from and why it looks like this. But every time you see the second formula in the code, you need to remember where it came from and convince yourself that it is correct. So here, we have gained in length but lost in understanding, which is far more important.

Now let's look at the idea of simplification from another angle. No task has a unique solution: every problem can be solved in different ways. Solutions have different advantages; some are faster, some are simpler. In fact, it can be difficult to find a solution that fits in the time limit but is not incredibly complex. You'll always need to look for a balance. But for now, let's consider a situation in which only one of two solutions fits in the time limit.
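The two forms of the windows formula can be written side by side and checked against each other (a small sketch; the function names are hypothetical):

```python
def windows_readable(n):
    # n floors; on each floor: (n - 2) usual apartments with 2 windows
    # each, plus 2 corner apartments with 3 windows each.
    return n * ((n - 2) * 2 + 2 * 3)

def windows_compact(n):
    # The same formula with the parentheses expanded: 2*n^2 + 2*n.
    return 2 * n * n + 2 * n

# Both give the same answer, but the first one explains itself.
for n in range(2, 100):
    assert windows_readable(n) == windows_compact(n)

print(windows_readable(10))  # 220
```

Six months later, `windows_readable` still tells you where every factor came from; `windows_compact` forces you to re-derive it.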
Imagine that in some task you need to sort an array of size n, for example, an array of integers, and you know two algorithms that can sort an array. The first algorithm, call it fastSort, performs big O of n log n operations but is hard to implement. The second algorithm, call it easySort, performs big O of n squared operations but is very simple. An example of the first algorithm is quicksort; an example of the second is bubble sort. Suppose that only fastSort fits in the time limit: for example, if n equals one million, fastSort finishes in a reasonable time, but easySort runs for far too long. So you used fastSort, but your code doesn't work. Like before, assume that you have found a test on which your code gives the wrong answer, and you don't know whether you should look for bugs in the sorting algorithm or in other places. To understand this, you can replace fastSort with easySort. Since these algorithms do the same thing, nothing else needs to change. If your code still doesn't work after this, it means that you should look for bugs in other parts of the code. If your code works, it means that your implementation of the hard algorithm was incorrect.

Even if such a replacement didn't help you find the bug, you can debug the simplified code instead; it will probably be more comfortable, but don't forget to replace easySort back with fastSort after debugging. A possible pitfall here is that you could accidentally write the code of the easy algorithm incorrectly. This can be very confusing, because you may start looking for bugs in the hard algorithm that aren't there. So you should be very careful when simplifying in this case. By the way, the same idea is used for stress testing; we will cover it later in the course. You might also notice that brute-force algorithms can almost always play the role of the simple algorithm. But remember that sometimes brute forces are very hard to implement, and in such cases, you shouldn't use them.
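The swap trick above relies on the two algorithms being interchangeable: the rest of the solution only cares that the array ends up sorted. A minimal sketch of this setup (assuming a quicksort stands in for fastSort and bubble sort for easySort; `solve` is a hypothetical solution that takes the sort as a parameter):

```python
import random

def easy_sort(a):
    # easySort: bubble sort, O(n^2), but hard to get wrong.
    a = list(a)
    n = len(a)
    for i in range(n):
        for j in range(n - 1 - i):
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
    return a

def fast_sort(a):
    # fastSort: a simple out-of-place quicksort, O(n log n) on average.
    # In a real solution this would be the hard-to-implement algorithm.
    if len(a) <= 1:
        return list(a)
    pivot = a[len(a) // 2]
    left = [x for x in a if x < pivot]
    mid = [x for x in a if x == pivot]
    right = [x for x in a if x > pivot]
    return fast_sort(left) + mid + fast_sort(right)

def solve(a, sort=fast_sort):
    # The rest of the solution only assumes that `sort` sorts.
    # To localize a bug, call solve(a, sort=easy_sort):
    # if the answer becomes correct, the bug is in fast_sort;
    # if it is still wrong, look in the other parts of the code.
    return sort(a)

# Sanity check that the two algorithms really do the same thing.
for _ in range(100):
    a = [random.randint(0, 50) for _ in range(random.randint(0, 20))]
    assert solve(a, fast_sort) == solve(a, easy_sort)
```

Note that this comparison on random inputs is exactly the seed of the stress-testing idea mentioned above: a trusted simple implementation checking a fast one.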