Welcome to lesson eight, Logic and Dialectical Reasoning. These two types of thinking have been around at least since the fifth century BC. They were developed in Greece, India, and China at about the same time. Logic developed in Greece and India a little later than dialectical reasoning, and it remained a strong tradition in those countries, but it quickly became unknown in China.

So why should you learn formal logic? If you haven't had any, you probably think you've been able to make pretty good inferences without it. I think there are three reasons. One is that an educated person should know something about this achievement of civilization. Second, it's necessary for science and mathematics. And interestingly, learning science and mathematics teaches you about logic, and logic teaches you to do science and mathematics a little better. And third, it's useful to understand the difference between the truth and the validity of a conclusion or inference.

An argument is valid if the truth of the premises logically guarantees the truth of the conclusion. For example: Premise 1, Jane owns either a Honda or a Toyota. Premise 2, Jane does not own a Honda. Therefore, Jane owns a Toyota. A conclusion based on premises can be manifestly nonsensical but still be valid if it follows the structure of a valid argument. For example: Premise 1, Jane owns either Russia or India. Premise 2, Jane does not own Russia. Therefore, Jane owns India. That's a valid argument, although it's complete nonsense.

We sometimes accept a conclusion based on faulty logic if we don't have a clear understanding of what the valid logical structures are. We're especially likely to do that if the conclusion is something we want to believe. That's not something you want to do. You don't want to believe something because you think it follows logically from other things you know, if in fact it doesn't.
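The Jane argument above is an instance of the disjunctive syllogism: P or Q; not P; therefore Q. Its validity depends only on form, not content, which means it can be checked mechanically by enumerating truth assignments. Here is a minimal Python sketch of that check; the function name `is_valid` is my own, not from the lecture:

```python
from itertools import product

def is_valid(premises, conclusion):
    """An argument form is valid iff no truth assignment makes
    every premise true while the conclusion is false."""
    for p, q in product([True, False], repeat=2):
        if all(f(p, q) for f in premises) and not conclusion(p, q):
            return False  # found a counterexample assignment
    return True

# Disjunctive syllogism: (P or Q), (not P) |- Q
print(is_valid([lambda p, q: p or q,
                lambda p, q: not p],
               lambda p, q: q))  # True: valid regardless of content
```

The check works identically whether P means "Jane owns a Honda" or "Jane owns Russia," which is exactly the lecture's point: validity is a property of the structure alone.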
Also, we sometimes reject a proposition that's distasteful to us even though it follows from premises we have to admit are true. That's also not something you want to do.

Formal logic is a type of deductive reasoning, which is sometimes referred to as top-down reasoning. Given the prior inputs, you must accept the conclusion. If you obey the rules, you're guaranteed a deductively valid conclusion. Whether the conclusion is true or not depends on the truth of the premises, the statements that precede your conclusion.

Two kinds of deductive reasoning have received a lot of attention historically. One is the syllogism, which deals with categories and quantification. Syllogisms typically have a major premise, which is often a generalization of some kind; a minor premise, which typically refers to an example of the subject of the generalization; and a conclusion which follows from those premises. The classic form that people most often use as an example of a syllogism is: all A are B; C is an A; therefore C is a B. With content, also quite famous: all men are mortal; Socrates is a man; therefore Socrates is mortal. That has the same structure; it's a valid argument. But you can have a valid argument with that structure which is, again, total nonsense: all autos are purple; Socrates is an auto; therefore Socrates is purple. That is a valid conclusion, although ridiculous. Just to check that I'm telling the truth, that that really is a valid argument, you can see that it has the structure here.

Try this syllogism. Does it seem valid to you? All people on welfare are poor. Some poor people are dishonest. Therefore, some people on welfare are dishonest. Let's see what the structure of that argument is: all A are B; some B are C; therefore some A are C. Well, that does not follow; that conclusion isn't dictated by the premises. There are dozens of forms of syllogisms that have been described over the ages. All A are B. Some C are A.
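The welfare syllogism's invalidity can also be demonstrated mechanically: a categorical form is invalid if some assignment of sets to A, B, and C makes the premises true and the conclusion false. The sketch below searches a tiny universe for such a counterexample; the helper names `some` and `every` are my own illustration, not lecture terminology:

```python
from itertools import combinations, product

def some(x, y):   # "some X are Y": the sets overlap
    return bool(x & y)

def every(x, y):  # "all X are Y": X is a subset of Y
    return x <= y

universe = {0, 1, 2}
subsets = [frozenset(c) for r in range(len(universe) + 1)
           for c in combinations(universe, r)]

# Search for sets where "all A are B" and "some B are C" hold,
# yet "some A are C" fails.
found = None
for A, B, C in product(subsets, repeat=3):
    if every(A, B) and some(B, C) and not some(A, C):
        found = (A, B, C)  # premises true, conclusion false
        break

print(found is not None)  # True: a counterexample exists, so invalid
```

One counterexample it can find: A = {0}, B = {0, 1}, C = {1}. All A are B, some B are C, yet no A is C, mirroring the fact that the dishonest poor people need not be the ones on welfare.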
Therefore, some C are B. Valid or not? I don't find many of them useful. Amazingly, for centuries syllogisms comprised a large fraction of higher education in Europe and subsequently in America. They're not anymore. [LAUGH] Most people get through their college careers without ever seeing them, and I think they don't suffer much from that.

The most useful formalisms for categorical reasoning are so-called Venn diagrams, named after the 19th-century logician John Venn. They are a pictorial way of representing category membership. Category memberships that are very difficult to understand verbally can be a snap in picture form. So the picture on the left above captures a particular syllogism that we do use in everyday life: some, but not all, A are B; some, but not all, B are A. So some, but not all, small furry animals have a bill like a duck, and some, but not all, duck-billed animals are small and furry. That's the intersection of A and B, where those two circles meet. And that intersection, of course, refers to the duck-billed platypus, which so far as I know is the only duck-billed animal that's small and furry, or small and furry animal that's duck-billed.

It also could be used to describe a situation, for example, in which some but not all of the students who are English speakers at an international school also speak French, and some but not all French speakers also speak English. The students who speak only English have to study with Ms. Smith. Students who speak only French have to study with Monsieur Poirot. And the intersection represents the students who speak both languages, who can therefore study with either teacher.

The picture at the top right shows the complicated but not uncommon situation in which some A are B, some B are A, some A are C, some C are A, some B are C, and some C are B. And here's an actual example of just that kind of Venn diagram. The circle on the left shows all of the letters which appear in the Greek alphabet.
The circle on the top right refers to all of the letters which appear in the Latin alphabet. And the circle on the bottom shows those which appear in the Russian alphabet. This intersection shows the letters which are found in both the Greek alphabet and the Latin alphabet. This intersection shows the letters which are found in both the Greek and Russian alphabets. And this intersection shows the single letter which is found in both the Latin alphabet and the Russian alphabet. And you wouldn't necessarily have known that there are those particular ones at this intersection, which are found in all three alphabets. I challenge you to generate those intersections without drawing a Venn diagram. At any rate, I'm sure I would end up with alphabet soup.

Other major formal reasoning schemes include propositional logic. This is somewhat more recent in intellectual history and much, much more important than syllogisms. It tells us how to reach valid conclusions from premises, for example, the logic of the conditional. The conditions for a thing to be true are either necessary or sufficient. Sometimes one thing is necessary in order for another thing to be true, and sometimes one thing is sufficient in order for another thing to be true. They're not the same thing, by any means.

The most common form of conditional logic is modus ponens, which you've heard before: if P then Q; P; therefore Q. If P is the case, then Q is the case; P is the case; therefore Q is the case. If it snows, schools will close. It snowed; therefore schools will be closed. So P is a sufficient condition for Q. It's not a necessary condition, because schools could be closed for other reasons. There might be a suspected tornado, or an earthquake.

Sometimes a given thing is both necessary and sufficient in order for another thing to be true. For example, being a male sibling is a necessary and sufficient condition for being a brother. Being a male sibling is sufficient for being a brother; that's all you need.
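The alphabet diagram is really just set intersection, and the regions the lecture walks through fall out of a few set operations. A sketch in Python, modeling letterforms as strings; the letter sets here are small illustrative samples I chose, not the complete alphabets from the slide:

```python
# Illustrative (incomplete) sets of letterforms, not the full slide data.
greek   = {"A", "B", "E", "H", "K", "M", "O", "P", "T", "X", "Y"}
latin   = {"A", "B", "C", "E", "H", "K", "M", "O", "P", "T", "X", "Y"}
russian = {"A", "B", "E", "H", "K", "M", "O", "P", "T", "X", "C"}

print(greek & latin)            # letterforms shared by Greek and Latin
print(greek & russian)          # shared by Greek and Russian
print(latin & russian)          # shared by Latin and Russian
print(greek & latin & russian)  # the center region: in all three
```

The `&` operator is Python's set intersection, so each `print` corresponds directly to one overlap region of the three-circle diagram.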
And being a male sibling is necessary for being a brother; there's no other way to be a brother except by being a male sibling.

Just as for syllogisms, propositional logic has to map onto a cogent argument form in order to yield a valid conclusion. For the conclusion to be true as well as valid, it's also necessary for the premises to be true. The nice thing about premises that are true and in the form of a valid argument is that you have to believe their conclusion, like it or not, unless you have other information from outside the argument that contradicts the conclusion.

Now decide, for each of the following arguments, each having two premises and a conclusion, whether it's valid or not. Argument A: Premise 1, if he died of cancer, he had a malignant tumor. Premise 2, he had a malignant tumor. Therefore, he died of cancer. Please stop the presentation until you've decided whether the argument is valid, and then pick it up with argument B. Argument B: Premise 1, if he died of cancer, he had a malignant tumor. Premise 2, he didn't die of cancer. Therefore, he didn't have a malignant tumor. Again, stop the presentation until you've decided whether the argument is valid. Argument C: if he died of cancer, he had a malignant tumor. He died of cancer; therefore, he had a malignant tumor.

Only argument C is valid. It maps onto modus ponens: if P then Q; P; therefore Q. If cancer, then tumor; cancer; therefore tumor. The plausibility of the conclusions in arguments A and B tempts us to say that they're valid. Argument A, however, maps onto an invalid argument form: if he died of cancer, he had a malignant tumor; he had a malignant tumor; therefore he died of cancer. That's called the converse error, because that form of reasoning erroneously converts the premise "if P then Q" into "if Q then P." If the premise had been "if Q then P," if tumor then cancer, then we would know, in fact, that since Q is the case, P is also the case: tumor, therefore cancer.
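The contrast between modus ponens and the converse error can be verified with a truth table: a form is valid exactly when no assignment makes the premises true and the conclusion false. A self-contained sketch, with helper names of my own choosing:

```python
from itertools import product

def implies(p, q):
    """Material conditional: false only when p is true and q is false."""
    return (not p) or q

def valid(premises, conclusion):
    """True iff every assignment satisfying all premises
    also satisfies the conclusion."""
    return all(conclusion(p, q)
               for p, q in product([True, False], repeat=2)
               if all(f(p, q) for f in premises))

# Argument C, modus ponens: if P then Q; P; therefore Q.
print(valid([implies, lambda p, q: p], lambda p, q: q))  # True

# Argument A, the converse error: if P then Q; Q; therefore P.
print(valid([implies, lambda p, q: q], lambda p, q: p))  # False
```

The counterexample the second check finds is P false, Q true: he had a malignant tumor (Q) but died of something else (not P), so the premises hold while the conclusion fails.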
We make converse errors frequently if we're not monitoring for logical validity. Have a look at this converse error: if the car is not in our garage, then Jane went downtown; Jennifer told me she saw Jane downtown; therefore, the car won't be in the garage. Making that error is more likely given some kinds of background knowledge than others. If Jane rarely goes anywhere without the car, then the error is likely. If she sometimes takes the bus and is sometimes driven by a friend, we're less likely to make the error.

Argument B was: if the person died of cancer, then he had a malignant tumor; he didn't die of cancer; therefore, he didn't have a malignant tumor. This is called the inverse error. We invert the premise "if P then Q" into "if not P, then not Q." We make this error a lot, too. Here's an example: if it's raining, then the streets must be wet; it's not raining; therefore, the streets must not be wet. If we live in a city where the street sweepers operate frequently, thereby making the streets wet, or if it's a blazing hot summer day in a city where the hydrants are sometimes opened to cool kids off, we're less likely to make the error. If we live in rural Arizona, with no street sweepers and no hydrants, we're more likely to make the error.

An interesting and important thing to know about the converse and inverse errors is that the conclusions they yield are only deductively invalid. They can actually be pretty good inductive conclusions; that is, if the premises are true, the conclusion is more likely to be true. It's more likely the car won't be in the garage if I know Jane is downtown. If it's not raining, it's more likely the streets won't be wet.

Inductive reasoning is a bottom-up logical process in which multiple premises, all believed to be true, or at least true most of the time, are combined to obtain a conclusion which is deemed probably true. For example, we observe facts and reach a general conclusion about facts of that particular kind.
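The claim that an inverse error can still be a decent inductive bet can be made numerical. In this toy calculation the probabilities are entirely made up for illustration; the point is only the direction of the inequality:

```python
# Made-up illustrative probabilities for the rain / wet-streets example.
p_rain = 0.3
p_wet_given_rain = 1.0      # the conditional premise: if rain, then wet
p_wet_given_no_rain = 0.2   # street sweepers, open hydrants, etc.

# Total probability that the streets are wet.
p_wet = (p_rain * p_wet_given_rain
         + (1 - p_rain) * p_wet_given_no_rain)

# Deductively, "not rain, therefore not wet" is invalid: streets can be
# wet without rain (p_wet_given_no_rain > 0). But inductively, learning
# "not rain" does lower the probability of wet streets:
print(p_wet_given_no_rain < p_wet)  # True
```

So the inference "it's not raining, therefore the streets probably aren't wet" fails as a deduction but succeeds as a probability judgment, and the gap narrows in a city full of street sweepers, where `p_wet_given_no_rain` climbs toward `p_wet`.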
I observe hundreds of swans; they're all white; therefore, all swans are white. Except, of course, that that perfectly reasonable inductive conclusion turned out to be wrong. Inductive reasoning is often used for prediction or forecasting, and scientific reasoning nearly always involves inductive reasoning. In fact, scientific reasoning is usually completely dependent on inductive argument forms. The types of inductive reasoning discussed in this course, the law of large numbers, regression, statistical significance, and so on, are inductively valid. Getting a larger sample will probably bring us closer to the population value than getting a smaller one. Larger numbers are more likely to get us the right answer, but there's no guarantee.

What we've covered in this lesson about deductive reasoning is a tiny fraction of the world of logic. At my university, I'm sure there are half a dozen courses in logic that build on the principles that we've been talking about. But the concepts that we've been talking about today are the most useful ones, I believe. And those concepts are: deductive versus inductive reasoning; top-down versus bottom-up reasoning; syllogisms, which have to do with categories and quantification; conditional logic, if-then statements, for example modus ponens, if P then Q, P, therefore Q; the concepts of necessity and sufficiency, for example, P is a sufficient but not necessary condition for Q; and two types of errors, the converse error, converting "if P then Q" into "if Q then P," and the inverse error, altering "if P then Q" to "if not P, then not Q."

In the next segment, we'll be talking about a very different kind of reasoning, not so formal and structured as logical reasoning, and as a matter of fact opposed in some ways to logical reasoning.
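The inductive claim that larger samples probably land closer to the population value, with no guarantee for any single sample, can be illustrated by simulation. A sketch with an invented population; the specific numbers are mine, chosen only to make the effect visible:

```python
import random

random.seed(0)  # fixed seed so the illustration is reproducible

# An invented population with mean roughly 50.
population = [random.gauss(50, 10) for _ in range(100_000)]
true_mean = sum(population) / len(population)

def sample_error(n):
    """Absolute distance between one sample mean and the population mean."""
    s = random.sample(population, n)
    return abs(sum(s) / n - true_mean)

# Average error over many repeated draws shrinks as the sample grows;
# any single small sample can still get lucky, which is why the
# conclusion is probabilistic rather than guaranteed.
small = sum(sample_error(10) for _ in range(200)) / 200
large = sum(sample_error(1000) for _ in range(200)) / 200
print(small > large)  # True: bigger samples err less on average
```

This is the law of large numbers in miniature: the inference from sample to population is inductively strong, and it gets stronger with n, but it never becomes a deduction.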