We are now ready to answer the question: are there fragile regions in the human genome? Here is a theorem: yes! And here is the proof. If the Random Breakage Model is correct, then N rearrangements applied to circular chromosomes produce approximately 2N synteny blocks. Since there are 280 human-mouse synteny blocks, there must have been approximately 280 / 2 = 140 2-breaks on the human-mouse evolutionary path. But we have just learned from the 2-Break Distance Theorem that there are at least 245 2-breaks on this path. So my next question is: is 245 equal to 140? It turns out that 245 is much larger than 140, even after applying rigorous statistics that allow for some deviation between these numbers. We have therefore arrived at a contradiction, which implies that one of the assumptions made in the course of proving this theorem is incorrect. But which one? There is not much choice, because we made only one assumption: that the Random Breakage Model is correct. Since this assumption led us to a contradiction, the assumption must be wrong, which means that the Random Breakage Model has failed.

Note that although we have demonstrated that the Random Breakage Model is incorrect, we have not provided the locations of the fragile regions in the human genome. This is an example of a non-constructive proof: we prove that something is wrong, but cannot provide a certificate showing where the fragile regions are located.

But if the Random Breakage Model is not correct, how would you explain the remarkable fit between the lengths of synteny blocks and the exponential distribution? Why did biologists embrace the Random Breakage Model in the first place? It was actually a logical fallacy: the Random Breakage Model (RBM) is not the only model that passes the exponential distribution test. It is correct to claim that the Random Breakage Model implies an exponential distribution.
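The counting argument above is easy to check with a small simulation (my own sketch, not from the lecture; the genome length and random seed are arbitrary choices): under the Random Breakage Model, N 2-breaks cut a circular chromosome at roughly 2N random positions, so the chromosome falls into roughly 2N synteny blocks.

```python
import random

def random_breakage_blocks(genome_length, num_rearrangements, rng):
    """Sketch of the Random Breakage Model on one circular chromosome."""
    # Each 2-break cuts the chromosome at 2 positions, so N rearrangements
    # create about 2N breakpoints, chosen uniformly at random.
    breakpoints = sorted(rng.sample(range(genome_length), 2 * num_rearrangements))
    # Block lengths are the gaps between consecutive breakpoints,
    # including the gap that wraps around the circular chromosome.
    lengths = [b - a for a, b in zip(breakpoints, breakpoints[1:])]
    lengths.append(genome_length - breakpoints[-1] + breakpoints[0])
    return lengths

rng = random.Random(0)
lengths = random_breakage_blocks(3_000_000, 140, rng)
print(len(lengths))  # 280 blocks from 140 two-breaks
```

With 140 2-breaks we get exactly 280 blocks, matching the "280 / 2 = 140" arithmetic in the proof; the gap lengths between uniformly placed breakpoints are what produce the exponential-looking length distribution.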
But it is absolutely incorrect to claim that an exponential distribution implies the Random Breakage Model. So why was the Random Breakage Model refuted? Because it does not comply with the observed breakpoint reuse. The question, then, is whether there is a model that complies with both the exponential distribution and the breakpoint reuse phenomenon. It turns out that the so-called Fragile Breakage Model (FBM) explains both phenomena.

According to the Fragile Breakage Model, our genomes are a mosaic of fragile regions, which have a high propensity for rearrangements, and solid regions, which are hardly ever broken by rearrangements. The fragile regions, that is, the regions between consecutive synteny blocks, are short, accounting in the case of the human genome for less than 5% of the genome. How does the FBM explain both the exponential distribution and the rearrangement hotspots? A small number of short fragile regions explains the hotspots: there are very few regions where rearrangements can actually happen, so breaks are bound to recur in the same places. And if the fragile regions are randomly distributed throughout the genome, then the synteny block lengths follow exactly the same exponential distribution as in the case of the Random Breakage Model. The Fragile Breakage Model therefore explains both the exponential distribution and breakpoint reuse, and should be used as a substitute for the Random Breakage Model.

But maybe the Fragile Breakage Model commits the same logical fallacy as the Random Breakage Model: maybe there is a test that both the Random Breakage Model and the Fragile Breakage Model fail. It turned out, very recently, that there is indeed such a test.
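To see how the FBM can pass the exponential distribution test while also forcing breakpoint reuse, here is a small simulation sketch (again my own illustration, with arbitrary parameters): breaks are restricted to a few hundred fragile sites, so 245 2-breaks (490 break endpoints) distributed among only 300 sites must reuse some sites by the pigeonhole principle, yet the block lengths, being gaps between randomly placed sites, still follow the same exponential-like distribution.

```python
import random

def fragile_breakage_sim(genome_length, num_fragile, num_rearrangements, rng):
    """Sketch of the Fragile Breakage Model: breaks occur only at fragile sites."""
    # Fragile sites scattered randomly along a circular chromosome.
    fragile = sorted(rng.sample(range(genome_length), num_fragile))
    # Each 2-break hits two fragile sites; count how often each site is used.
    uses = {site: 0 for site in fragile}
    for _ in range(num_rearrangements):
        for site in rng.sample(fragile, 2):
            uses[site] += 1
    reused_sites = sum(1 for count in uses.values() if count > 1)
    # Synteny block lengths are the gaps between consecutive fragile sites
    # (including the wrap-around gap on the circular chromosome).
    lengths = [b - a for a, b in zip(fragile, fragile[1:])]
    lengths.append(genome_length - fragile[-1] + fragile[0])
    return lengths, reused_sites

rng = random.Random(0)
lengths, reused = fragile_breakage_sim(3_000_000, 300, 245, rng)
print(len(lengths), reused)  # 300 blocks; reused > 0 is forced (490 breaks, 300 sites)
```

Because the fragile sites themselves are placed uniformly at random, the block lengths are statistically indistinguishable from those produced by the Random Breakage Model, while the breakpoint reuse that refuted the RBM is now built in.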
This test became possible because many genomes have been sequenced in recent years, and scientists were able to analyze breakpoints not only between two genomes (human and mouse, the first sequenced mammalian genomes) but across multiple genomes. They proposed the so-called multispecies breakpoint reuse test, which analyzes breakpoints across many genomes at once. Unfortunately, we do not have time to go into the details of this test; I can only tell you what follows from the study that applied it. This recent study revealed evidence for the birth and death of fragile regions: fragile regions exist, but they are not static. They move to different locations, so the positions of the fragile regions in the human genome differ from their positions in the mouse genome. This discovery resulted in the Turnover Fragile Breakage Model, which allows the locations of fragile regions to change over time. The Turnover Fragile Breakage Model complies with the new multispecies breakpoint reuse test; moreover, it actually points to the locations of the currently fragile regions, a question we were not able to answer when proving the theorem in the last section. So the Turnover Fragile Breakage Model complies with all three tests. Does this mean it is the absolutely final model of chromosome evolution? Of course not: there is always the possibility that future studies will reveal new tests and lead to further development of our views of chromosome evolution. Even with the Turnover Fragile Breakage Model in hand, questions remain: where exactly are the fragile regions located, and what causes fragility? The availability of many sequenced genomes will eventually help us answer these questions and figure out which fragile regions in the human genome are currently active.
What will perhaps be the next reversal to occur in the human genome, say, in the next million years? And now, having covered the various algorithmic aspects of genome rearrangements, the time has come to answer one of the first questions that appeared in this lesson: how do we generate synteny blocks starting from long genomes?