[MUSIC] Welcome to the second section of the second lesson. Today we will say a few words about mathematical models. This is very important, because models are used everywhere in present-day technology and science: you have mathematical models and statistical models, and you find them everywhere. One of the issues we will address in this lesson is that a model can be used for different things, for studying processes or for making predictions, and what is now seen as a major problem is when the two uses are conflated; we will try to explain what we mean by that. The main issue is that cheating with models is also very easy. This is another reason why we need very careful quality assurance methodologies and procedures for the use of mathematical models.

We mentioned that there is a crisis in science. You have heard about the reproducibility crisis, and you may have heard that the use of statistical procedures for significance testing, the simple p-test, is one of the main culprits of the non-reproducibility of many experiments. This has led many to talk about the misuse of statistics; here you have an article talking about a "statistical sausage factory". So we can say that there is a kind of storm about malpractice in mathematical and statistical modeling, and as a result there is a key issue of trust.

In this climate it may happen that certain stakeholders reject the use of a mathematical model outright. Here we have the International Food Policy Research Institute, which was proposing to spend about a million dollars to develop a mathematical model to study food security scenarios. But the stakeholders, in this case Greenpeace, said no: if you use this model, it will not be transparent and it will not be helpful; we don't want this model. So there is clearly an issue of trust.

So, in a sense, mathematical modeling is vulnerable to critique. But why so? Is it vulnerable because it is very easy to do garbage in, garbage out with the data? Or because it is very easy to tweak a model toward a desired end? Or is it because modeling is sloppy, so that different mathematical models can be compatible with the same set of data? Or is it because models are Platonic? Or is it because they are used out of context? Let's look at these issues in turn.

Here we have the issue of garbage in, garbage out, which we interpret in a slightly different way from the usual one. What we mean here by garbage in, garbage out is that an honest model, fed with uncertain inputs, can only make predictions which are uncertain. One way to produce a very clear, very sharp prediction is to compress the uncertainty in the input: this is garbage in, garbage out. It is, in a sense, achieving precision by ignoring uncertainty. We have here two different definitions of the same process, one coming from Funtowicz and Ravetz and one coming from an econometrician, but if you look at the text, the wording is fairly similar.

Here we have a case which is quite interesting, because it is a real story, a real case. This is the case of the contamination of aquifers near Copenhagen. The regulatory authority had commissioned five different teams of experts to calculate, to model, and to map the contamination in different parts of this area. The surprising result is that the five consultants returned totally different assessments of the contamination. I will leave this on the screen so that you can appreciate how difficult it really is to find a common pattern of contamination in these five different plots.
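To make the garbage-in-garbage-out point concrete, here is a minimal sketch in Python. The toy "plume" model, its inputs, and all the numbers are invented for illustration; this is not the Copenhagen study. The same model, fed honestly wide inputs, gives a wide prediction band; with the input uncertainty compressed around arbitrary best guesses, it gives a spuriously sharp one.

    import numpy as np

    rng = np.random.default_rng(42)
    N = 10_000

    def plume_extent(recharge, conductivity):
        # Toy transport model: the plume grows with hydraulic conductivity
        # and shrinks with recharge (dilution). Purely illustrative.
        return 100.0 * conductivity / (1.0 + recharge)

    # Honest analysis: wide, admitted uncertainty on both inputs.
    honest = plume_extent(rng.uniform(0.1, 2.0, N),
                          rng.lognormal(0.0, 0.8, N))

    # Garbage in: the same model with the input uncertainty compressed
    # around arbitrary best guesses.
    sharp = plume_extent(rng.uniform(0.9, 1.1, N),
                         rng.lognormal(0.0, 0.05, N))

    for name, y in (("honest", honest), ("compressed", sharp)):
        lo, hi = np.percentile(y, [5, 95])
        print(f"{name:10s} 90% prediction band: [{lo:6.1f}, {hi:6.1f}]")

The point is not the numbers but the mechanism: the sharp answer looks more scientific, yet it is obtained by hiding what we do not know, and five teams compressing the inputs in five different ways can produce five confident, incompatible maps.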
I borrowed this slide from my friend Jeroen van der Sluijs, and this one as well. So what do you do when you are confronted with five incompatible assessments like these? You can be a Bayesian and say: okay, we take these as priors, we get new data, and we update them on the basis of the new evidence. But unfortunately, we have no new data. Or you can use the approach of the IPCC, the Intergovernmental Panel on Climate Change: you lock the experts in a room and let them boil until they reach an agreement. Or you can take the nihilistic approach: throw away all these models and take the decision based on other considerations. Or maybe you are precautionary, and you protect all the grid cells, for instance by forbidding cultivation, but then you would have many angry farmers to deal with. Or you could take a bureaucratic approach, weighting the experts by their citation index. You could select the consultant you personally trust the most, or you could take the real-life approach and select the consultant whose result fits your policy agenda. The post-normal approach for dealing with this situation would be to explore how useful it is to know that we are so ignorant about the issue, and then to use this information about our ignorance to decide deliberatively, with the stakeholders, within the imperfection of our knowledge, what to do in this case.

I mentioned the Platonic idea. Nassim Taleb has written many successful books; one is The Black Swan, shown here, which is also very famous. He has been a long-term advocate of more careful attention to the use of statistical distributions and mathematical models. In his opinion, mathematical models are dangerous because they Platonify reality, meaning that we substitute reality with an idea, a Platonic idea. It is very elegant, but very often you miss most of the picture. If you read Taleb's books, you will grasp more vividly what he means by that.

Let us now defend models for a moment: models are indeed very useful, and they are especially useful because they act like blinders, the kind you put on horses. By wearing these blinders you focus on something: you zoom in on a particular variable or process you want to study, and by assuming that everything else stays put while your own variable moves, you can see the aspect you are interested in. Unfortunately, when you do this, you use what the economists call the ceteris paribus assumption, which, translated from Latin, means "everything else being equal". The big problem is that, especially in economics, the ceteris are never paribus: if you act on a variable such as wages, this will influence prices, and this will influence everything else. So it is one thing to use such a model to study the impact of a particular shock on the economy; it is quite another to use it to make predictions about what will really happen to the same economy. A small numerical sketch of this difference follows below.

This discussion is particularly relevant for the dynamic stochastic general equilibrium (DSGE) model, which is one of the main tools used in macroeconomics. Here we have a part of a book written by Philip Mirowski discussing how it came about that, after the last financial crisis, everyone was shocked that none of those models could predict its arrival. And here you have the wording of a hearing held in the USA to ask why these theoretical tools were used as policy prediction tools.
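Here is the promised sketch of the ceteris paribus trap, as a toy labor-market model in Python. The equations, the coefficients, and the wage-to-price pass-through parameter are all invented for illustration; they are not a real economic model. Holding prices fixed, a wage increase appears to cut employment five times more than it does once prices are allowed to respond.

    # Toy illustration of "the ceteris are never paribus": the predicted
    # effect of a wage increase changes once prices respond to wages.
    b = 2.0    # sensitivity of labor demand to the real wage (invented)
    m = 0.8    # pass-through of wages into prices (invented)

    def employment(w, p, n0=100.0):
        # Labor demand falls with the real wage (w - p); linear toy form.
        return n0 - b * (w - p)

    w0, p0 = 1.0, 1.0
    shock = 0.1  # a wage increase

    # Ceteris paribus: raise wages, hold prices fixed.
    partial = employment(w0 + shock, p0) - employment(w0, p0)

    # Full system: prices respond to wages (p rises by m * shock).
    full = employment(w0 + shock, p0 + m * shock) - employment(w0, p0)

    print(f"effect with prices held fixed : {partial:+.3f}")
    print(f"effect with prices responding : {full:+.3f}")

In this toy setup the partial answer is -0.200 and the full answer is -0.040: the study of the isolated mechanism and the prediction for the real economy are two different exercises.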
And if you are curious and go and read Mirowski's book, you will see that a similar episode happened in the UK, when the Queen summoned the economists of the London School of Economics. In fact, one can say that models are better at falsification than at confirmation: it is easier to use a model to say that something cannot be done than to use a model to say that something will be the case. And very often, models are used in a kind of rhetorical setting. A very nice example of this rhetorical setting comes from the Nobel laureate Kenneth Arrow, who was once requested to forecast the weather two weeks ahead. He had informed his superiors, he was in the army, that these forecasts did not contain any useful information. The reply he got was that, of course, that did not matter, because the predictions were needed anyhow for planning purposes. This would be a purely ritual use of mathematical modeling.

The point is that, as Naomi Oreskes has argued, we should not treat a mathematical model as a scientific hypothesis which can then be falsified by subsequent observations. Why is it so? Because what makes a hypothesis useful, in Popperian logic, is the capacity to refute it, by making an experiment which will either corroborate or reject it. But when the theory is not a theory, or a hypothesis, but a mathematical model, how can I refute it? What exactly do I refute in the mathematical model? Consider, more in detail, that the model includes the laws in the governing equations, algorithmic approximations of various kinds, all the way down to possible coding errors. When a model does not conform to observations, what do we throw away? Which of the many ingredients that went into the model? That is why it is so difficult to use models as hypotheses in a hypothetico-deductive scientific methodology; a small numerical sketch of this asymmetry between falsification and confirmation follows below. And this concludes our short chat about the vulnerability of mathematical models.
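Here is the sketch, in Python, with two toy models and observations invented purely for illustration. Both models agree with the single datum in hand, so agreement confirms neither; a new observation then refutes one of them, while the survivor is still only unrefuted, not confirmed.

    def model_linear(x):
        # Toy model A: straight proportionality.
        return 2.0 * x

    def model_saturating(x):
        # Toy model B: saturates for large x.
        return 4.0 * x / (1.0 + x)

    models = (("linear", model_linear), ("saturating", model_saturating))

    # Both models reproduce the single observation we have...
    x_obs, y_obs = 1.0, 2.0
    for name, f in models:
        print(f"{name:10s} matches existing data: {abs(f(x_obs) - y_obs) < 0.1}")

    # ...so agreement with the data confirms neither. A new observation
    # refutes the saturating model; the linear one merely survives.
    x_new, y_new = 3.0, 6.1
    for name, f in models:
        print(f"{name:10s} survives new data:     {abs(f(x_new) - y_new) < 0.3}")

And note that even this refutation is generous to the modeler: in a realistic model, the failure could sit in the equations, in the approximations, or in the code, and the sketch cannot tell you which.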