From the previous video we know how the double type works. But let's compare it with a 64-bit integer of the same size. Of course, not everything can be done with integers, but where they suffice, they turn out to be much better than doubles. First, a double has fewer bits for the actual digits, 53 versus 64, since the exponent has to be stored separately. Doubles can also be somewhat slower than integers, by a factor of one and a half or two. The good thing is that doubles aren't susceptible to overflow, because the fractional part always keeps the leading digits. There can be overflow in the exponent, but it happens only at magnitudes of about 10 to the power of 300, so in practice this is really not a problem. But the worse thing is that doubles always carry errors. We've already seen how values like the square root of two, or two thirds, are stored with errors. But even decimal fractions have errors. For example, if you add 0.1 and 0.2, you won't get exactly 0.3. Of course, it's much nicer with integers, where all values are exact. So you should use floating point numbers only when they are really necessary. And when integers suffice, use integers, because with them you don't need to deal with errors. The first example where doubles are not needed is rational numbers. As we've seen, we can store the numerator and denominator as integers, though there is a danger of overflow. Second, if you need to work with decimal fractions where the number of digits after the point is fixed, you can just think of them as integers, multiplied by the corresponding power of 10. For example, if you work with prices like $2.49, you just think of it as 249 cents. Finally, there are situations where square roots are involved, but you don't always need to compute them explicitly, especially if you only need to compare them.
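As a quick illustration of the two points above, here is a small Python sketch; the second price is a made-up value, not from the lecture:

```python
# Decimal fractions are not exact in binary floating point:
result = 0.1 + 0.2
print(result)         # 0.30000000000000004, not exactly 0.3
print(result == 0.3)  # False

# Fixed-point prices: store cents as integers and stay exact.
price_a = 249                  # $2.49 represented as 249 cents
price_b = 151                  # a hypothetical second price, $1.51
total_cents = price_a + price_b
print(total_cents == 400)      # True: exactly $4.00, no rounding error
```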
For example, if you want to iterate over all integers which are less than the square root of n, in the loop you can replace the condition "i is less than the square root of n" by the equivalent condition "i squared is less than n". If the numbers are non-negative, the two conditions are equivalent, but the second uses only integers. Another example with roots: if you need to compare lengths and the coordinates are integers, you can just compare their squares. A length is the square root of X squared plus Y squared, and its square is X squared plus Y squared, an integer. Since lengths are non-negative, comparing the square roots gives the same result as comparing the values themselves. However, sometimes you can't get away with integers and are forced to use doubles. The most common case is when you need to output some value which is not an integer. The problem statement should then specify with what error you may output the value, because we know that doubles always carry errors. Usually it sounds like: the absolute or the relative error should not be greater than 10 to the minus six. It means that if you output a value whose absolute or relative difference from the actual value is not bigger than 10 to the minus six, then this value will be accepted as a correct answer. The pitfall here is that your number may have enough precision, but on output it could be automatically rounded and so end up with a larger error than allowed. So it's safer to always output some fixed number of digits after the point. On the slide, you can see how to do this in C++, Java, and Python. However, even when doubles are absolutely necessary, you should still try to stay in integers for as long as possible, because the fewer floating point operations, the fewer errors. First example: say you need to sum three fractions.
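Before working through that example, here is a Python sketch of the integer-only tricks and the fixed-precision output mentioned above; the concrete values of n and the vectors are illustrative assumptions:

```python
n = 50

# Instead of "while i < math.sqrt(n)", use the all-integer equivalent:
values_below_root = []
i = 1
while i * i < n:            # equivalent to i < sqrt(n) for non-negative i, n
    values_below_root.append(i)
    i += 1
print(values_below_root)    # [1, 2, 3, 4, 5, 6, 7] since 7*7 = 49 < 50

# Compare vector lengths without taking square roots:
def is_shorter(p, q):
    # |p| < |q|  <=>  px^2 + py^2 < qx^2 + qy^2, all in integers
    return p[0] ** 2 + p[1] ** 2 < q[0] ** 2 + q[1] ** 2

print(is_shorter((3, 4), (4, 4)))  # True: 25 < 32

# Output a fixed number of digits after the point:
x = 2 / 3
print(f"{x:.6f}")  # 0.666667
```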
The straightforward way to sum them is just to convert each fraction to a double by dividing, and then sum them up; that is five floating point operations. But there is another way: you can add them as rational fractions in integer arithmetic, and only at the end divide the numerator by the denominator as doubles, and that is one floating point operation. A similar example: we need to do three divisions in a row. You could just divide in doubles, and so there would be three divisions, each a floating point operation; but you could also multiply all the divisors together and then divide by this product. Because the divisors are integers, only one operation will be floating point. Now imagine you need to calculate a square root of something, multiplied by some coefficients. You could just take the root in doubles, then multiply by the first coefficient, then by the second: three floating point operations. But you could also move everything inside the square root, compute the product as an integer, and only then take the root, which is a single floating point operation. On the other hand, with integers there can also be overflow, so you need to watch for that and switch to doubles where it could happen. So, if you decided to use doubles, you need to know how to do it correctly. Due to the errors, some usual things do not work. For example, the equality operator tests if the values are exactly equal, and even small errors render it practically useless. The solution is to consider A and B equal if their absolute difference is no greater than some small value epsilon. This way, even with small errors, we would consider equal values as equal. The other comparisons also need to change. For example, "A is strictly less than B" should become "A is less than B minus epsilon", because the numbers from B minus epsilon to B are now considered equal to B. "A is less than or equal to B" should become "A is less than B plus epsilon", because the numbers from B to B plus epsilon are also considered equal to B now.
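The three tricks for postponing floating point (summing fractions exactly, combining divisors, moving coefficients under the root) might look like this in Python; the function names are my own, not from the lecture:

```python
from math import sqrt

# a1/b1 + a2/b2 + a3/b3 with a single floating point operation:
def sum_three_fractions(a1, b1, a2, b2, a3, b3):
    num = a1 * b2 * b3 + a2 * b1 * b3 + a3 * b1 * b2  # exact integers
    den = b1 * b2 * b3
    return num / den                                  # the only FP op

# x / b1 / b2 / b3 with one division instead of three:
def chained_division(x, b1, b2, b3):
    return x / (b1 * b2 * b3)   # divisors multiplied exactly first

# c1 * c2 * sqrt(x) with one floating point operation:
def scaled_root(c1, c2, x):
    return sqrt(c1 * c1 * c2 * c2 * x)  # product computed in integers

print(sum_three_fractions(1, 2, 1, 3, 1, 6))  # 1.0 (1/2 + 1/3 + 1/6)
print(chained_division(12, 2, 3, 2))          # 1.0
print(scaled_root(2, 3, 4))                   # 12.0 = 2 * 3 * sqrt(4)
```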
So far we have covered "equal to", "strictly less than", and "less than or equal to"; "greater than" and the rest are defined symmetrically. Using epsilon to compare is necessary, but it also increases errors. So you might want to just use a plain "A is less than B" when it's good enough. For example, if you're sorting values and it doesn't matter how close values are ordered relative to each other, you can just use the standard less-than operator and so reduce errors somewhat. There are other operations which behave badly with errors, for example rounding: floor, which rounds the value to the nearest smaller integer, and ceil, which rounds to the nearest greater integer. The problem here is that even a small change of the argument can change the result by one. The floor of one is one, but the floor of one minus one billionth is zero. So if our value is mathematically an integer, but we store it with some small negative error, floor would give us our value minus one, and that is bad. Instead of floor of A, you can write floor of A plus epsilon: if the epsilon is big enough, it will overcome all the errors, and if it is sufficiently small, it will never happen that A is less than some integer while A plus epsilon is greater than it. So we need an epsilon constant for comparisons and rounding. But how do we choose that constant? In principle, we could take each operation in our program and carefully bound the error which arises there. In the end we would take the highest of the bounds as the value of epsilon. Then all our errors would be no greater than epsilon, and it is likely to be sufficiently small. But this is tedious and very time consuming, so it's hardly ever done in a contest. Instead, people just take some feasible value like 10 to the minus eight or 10 to the minus nine and hope that it works. What if you set some value of epsilon and it doesn't work? Using debugging, you can find the first place where the error appears. It may be that the bug is completely unrelated to the epsilon.
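The epsilon comparisons and the safe floor can be sketched in Python as follows; the EPS value of 10 to the minus nine is just the "feasible value" guess discussed above, an assumption rather than a universal constant:

```python
import math

EPS = 1e-9  # a typical contest guess, as discussed above

def eq(a, b):   # a == b, up to epsilon
    return abs(a - b) <= EPS

def lt(a, b):   # a < b strictly
    return a < b - EPS

def le(a, b):   # a <= b
    return a < b + EPS

print(eq(0.1 + 0.2, 0.3))   # True, despite the representation error

# An "integer" stored with a tiny negative error:
x = 1.0 - 1e-12
print(math.floor(x))        # 0: off by one, which is bad
print(math.floor(x + EPS))  # 1: the epsilon overcomes the error
```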
But if it is related to the epsilon, it should look like two values which are calculated correctly but compared incorrectly. Then it's one of two cases. Either unequal values are treated as equal, which means our epsilon is too big and we should make it smaller; for example, we could take the next smaller power of 10 and see if that is small enough. Or the errors got too high and equal values are treated as unequal, which means our epsilon is too small and we should try to increase it; we could take the next larger power of 10 and see if it works. So, we've learned to use doubles only when they're really needed, and we've learned to overcome errors by using an epsilon. In the next video, we will cover some more practical issues.