[MUSIC] In this lecture we'll talk about philosophy of science. I'll give a brief overview of the history of philosophy of science. Now, this is a topic that you can spend an entire course on, and if you're interested, I highly recommend you do so. But you should at least know the basics. Can we ever have knowledge that's objectively true? Can we tell whether something is a science or not? These are important questions to consider. If we ask ourselves what makes something scientific, then we are asking about demarcation criteria: what differentiates science from pseudoscience? Do you think that astrology is a scientific discipline? Should we use tax money to fund research on astrology? Do you think that precognition, studying whether we can predict the future, is a science, and should we fund it with tax money? Do you think that my own discipline, experimental psychology, is a science, and should we fund it? You see that the question of whether something is a science has real-life consequences. When it comes to demarcation criteria, different people have different opinions about whether we can distinguish science from pseudoscience. Popper says that if a theory is falsifiable, then it's scientific; this is the demarcation criterion according to Popper. But not everybody agrees. For example, Lakatos says that a given fact is explained scientifically only if a new fact is predicted with it. You should not just describe things that you see; your descriptions become scientific explanations when they can also predict new things, when they can explain new information. There are also people who give up on trying to differentiate between science and pseudoscience. A well-known example is Paul Feyerabend, who says that given any rule, there are always circumstances when it is advisable not only to ignore the rule, but to adopt its opposite. 
So there's never a rule that you can follow that will differentiate between science and pseudoscience: anything goes. If we draw inferences from data, then there are logical ways to do this, and illogical ways to do this. In propositional logic, one valid rule of inference is known as modus tollens, in other words, denying the consequent. Let's take a look. With modus tollens we have a line of reasoning that goes like this: if p then q; not q; therefore not p. Let's make it a little bit more concrete. If we have a theory, then we should observe some data. Then we collect data, but we do not observe the data that we predicted. From this we can deny the consequent: therefore, not the theory. A common but invalid way of drawing inferences is known as affirming the consequent. This is not a valid rule of inference, although it's tempting to use. It goes like this: if p then q; we observe q; therefore p. This might look logical, but it isn't. Let's look at it in some more detail. If we have a theory, then we should observe some data; we observe the data, and then we conclude that therefore the theory is true. This might still look valid, but it isn't. Let's make it even more concrete. Let's say that if I'm a man, then I'm also a human. I observe that I am a human, and then I conclude that therefore I must be a man, but this last step is not valid. I could also be a woman or anything in between. A theory can be either refuted or corroborated, but you can never prove a theory. Now this is important: we can never have certain knowledge. We can aim for it, but we can never achieve it. It's easier to refute a theory, and this is the basis of Popper's idea of falsification. You can have a conclusion where you falsify a prediction. For example, take the prediction that all swans are white: no number of sightings of white swans can ever prove the theory that all swans are white. 
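The difference between these two rules of inference can be checked mechanically with a truth table: a rule is valid only if the conclusion is true in every case where all premises hold. A minimal sketch (the function names are mine, chosen for illustration):

```python
from itertools import product

def modus_tollens(p, q):
    # Premises: (p -> q) and (not q); conclusion: not p.
    premises = (not p or q) and (not q)
    return premises, (not p)

def affirming_consequent(p, q):
    # Premises: (p -> q) and q; conclusion: p.
    premises = (not p or q) and q
    return premises, p

def is_valid(rule):
    # Valid iff the conclusion holds in every row where the premises hold.
    rows = (rule(p, q) for p, q in product([True, False], repeat=2))
    return all(concl for prem, concl in rows if prem)

print(is_valid(modus_tollens))        # True: denying the consequent is valid
print(is_valid(affirming_consequent)) # False: p=False, q=True is a counterexample
```

The counterexample the checker finds for affirming the consequent is exactly the lecture's example: "I am a human" (q is true) while "I am a man" (p) is false.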
However, if we see only one black swan, this will disprove our theory that all swans are white. So you see that falsification is a very potent tool for rejecting theories. We can never accept them; we can make observations that are in line with our theory, but we can never prove it. However, when we draw inferences based on statistics, falsifications are never as black or white as in the case of black swans. If you see a black swan and you have a clear view, then you can be pretty sure that this is not a white swan. You might even get a feather and try to wash it, see that the black doesn't come off, and then you have a solid observation that black swans exist. In statistics we always work with probabilities. We have a certain probability that the data we have observed are surprising, for example, or a certain probability of observing such data if there is no true effect. These kinds of statements are never completely black or white, so it's always more difficult to falsify a prediction. Furthermore, we might wonder whether we actually reject theories after a single falsification, and Lakatos argues that we never do this. People have some sort of feeling for a theory. They like a specific theory, and they want it to be true. And this is perfectly fine; you can try to find support for it in different ways. Now, there are good reasons to stick with a theory. When a theory makes very good predictions, then a single observation that's not in line with your predictions is no reason to give up the entire theory. This statement by Stevens is rather nice. It says, "The lesson of history is that a bold and plausible theory that fills a scientific need is seldom broken by the impact of contrary facts and arguments. Only with an alternative theory can we hope to displace a defective one." So it's perfectly fine if you sometimes observe a finding that's not in line with theoretical predictions. You don't have to throw your theory out of the window. 
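A small simulation makes this concrete. Suppose, as an assumed example not from the lecture, that our "theory" is that a coin is fair, and we "falsify" it whenever 100 flips produce a count of heads outside a conventional cutoff. Even though the theory is true, the test still rejects it now and then, which is why a single statistical falsification is never as decisive as a black swan:

```python
import random

random.seed(1)  # fixed seed so the sketch is reproducible

def rejects_fairness(n_flips=100, low=40, high=60):
    # Flip a genuinely fair coin and apply a crude rejection region:
    # declare the coin "unfair" if heads fall outside [low, high].
    heads = sum(random.random() < 0.5 for _ in range(n_flips))
    return heads < low or heads > high

# Repeat the whole experiment many times: every rejection here is a
# "falsification" of a hypothesis that is actually true.
false_alarms = sum(rejects_fairness() for _ in range(2000))
print(false_alarms)  # a small but nonzero number of spurious falsifications
```

The exact count depends on the seed, but it is reliably greater than zero: probabilistic evidence can only make a prediction look unlikely, not refute it outright.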
If you have a strong theory that makes good predictions in other areas, there's no reason to give it up until you have a better theory. So if there's a falsification, if one of these predictions doesn't hold up, then what exactly is falsified? Is it the theory itself, or is it one of the auxiliary hypotheses, the assumptions that you make about the data collection process, for example? Let's say you do an experiment on some people and you have a theoretical prediction, and it's not supported by the data. Now you can either distrust the theory itself, or you can say: well, maybe something else that I assumed was going on didn't happen as I predicted. For example, maybe your participants were very, very tired. They didn't process all the information you provided, and as a consequence, you didn't observe the effect that you predicted. In these situations, you might not throw away your theory. You might say, "Well, some of the assumptions that I made about how much attention participants were paying to the experiment, those assumptions were violated." Let's try it again; let's try to boost the attention levels of participants and see if we can find the expected effect. According to Lakatos, we have a core theory, and around this core theory there's a protective belt of auxiliary hypotheses. These are assumptions that you make about the research process, and you can give up some of these assumptions or change them. In this sense, you can keep your core theory, even if your prediction doesn't pan out, by changing some of the assumptions that you make. Of course, when you do this, you risk entering what's known as a degenerative research line. It's fine to make an ad hoc assumption, saying: well, it's true, but only if. But if you have to keep doing this again and again, then your theory is not making new predictions. There are no new facts that are predicted by the theory. You're just changing the theory every time a failed prediction emerges. 
What you ideally want is to turn this into a progressive research line. A first failed test is perfectly fine. You can slightly change your theory or one of the assumptions you're making, but then this should lead to a progressive research line: there should be new facts that you can predict with your improved theory. When this is not the case, it's perfectly fine to keep trying, but after a while you enter a research line that's not yielding anything new, which is what happened with astrology in the past. So according to Lakatos, something like astrology would not be a scientific discipline, because it never turned into a progressive research line. There are only ad hoc changes to the theory every time a failed prediction emerges. Now, there are some people who doubt whether we can have an objective science altogether. They say the quest for an objective truth is a nice try, but we'll never really reach it. One of them is Thomas Kuhn, in his book The Structure of Scientific Revolutions. He says that observations are always theory-laden, not objective. There's no such thing as an objective view of the world. We always look at the world with certain assumptions, and these assumptions color the way that we do our research. He goes so far as to say that "the proponents of competing paradigms practice their trades in different worlds." What he means is that it's possible for researchers to have different theoretical backgrounds, and they will look at the world in completely different ways that you cannot unify. He talks about how people practice puzzle-solving science, which is basically normal science, science as it usually progresses. This is followed by a paradigm shift, a revolution in science, and after such a scientific revolution we look at the world in a completely different way. After a scientific revolution we cannot straightforwardly interpret the data that we collected before the paradigm shift. Things completely change. So there's no objective knowledge. 
Things always depend on the specific paradigm that you're in. According to Kuhn, science is not cumulative but revolutionary, based on subjective reasons. These can be social norms or ideas that people have. So it's not that science will always just progress and become better and better, the idea that Lakatos expresses in his progressive research lines; no, instead there's a revolution every now and then, and afterwards we have a completely different paradigm within which we practice science. Modern viewpoints in philosophy of science acknowledge that scientific knowledge is a social product. For example, Helen Longino argues that we have no objective knowledge as such; it is intersubjective criticism that constitutes the objectivity of science. So there's no objectivity outside of the social enterprise that science is. What's important, according to Longino, is that there are public avenues for criticism, for example peer review, when you submit your paper for publication. What's important is that there is room for peers to criticize your finding or improve it if possible. There should be shared standards; we should have some idea of what we're doing, a tool set that we use to evaluate evidence. There should be open re-evaluation: if there's criticism and there are new insights, a scientific discipline should change. Again, this is not something that happens in astrology. There's plenty of criticism of astrology, but it never really changed over the last decades. And finally, what's important for this social, intersubjective objectivity is that there's equality among peers. If you have informed researchers who know what they are doing, then these people should interact as equals. It should not be the case that one person can determine "this is how it's going to be" simply because they are the most powerful individual in the room. 
In this lecture we briefly talked about philosophy of science and the different perspectives that philosophers have on what makes something scientific. Personally, I think the idea of progressive and degenerative research lines is quite useful. When you make a prediction that doesn't pan out, you're in trouble; you have to try to turn this into a progressive research line in the future. But overall, there's no clear, single answer on whether we can ever have objective knowledge. [MUSIC]