So now we've looked at a couple of different ways we can get intellectual arrogance. Now we're going to focus on those that are what we call knowledge-driven effects. These are often illusions that arise from misattributing something you do know to thinking that it means you know something else. I'll illustrate that with the illusion of explanatory depth, then with illusions related to reductionist biases, and finally with illusions related to the outsourced mind.

So let's start with the illusion of explanatory depth. It is simply the sense that one understands causally complex phenomena more deeply, that is to say with greater precision and depth, than one really does. And it's measured by feelings of surprise at the shallowness of one's own best explanation, or at one's inability to explain. Now, how do we test this? The way we do it is we train people up in what it means to have a nearly complete understanding of how something works, a medium level of understanding, or a very weak understanding. This is one of the training items we use, which illustrated a crossbow. If you had a Level 7 understanding of a crossbow, which is the top panel, you could basically build one from scratch if you had the raw materials and a machinist at your side. You'd know all the details of how to make a really effective, strong crossbow. The middle panel, a Level 4 understanding, means you have some sense of what makes crossbows distinctive: the trigger catch, the stronger bow, various other factors that make it more powerful. But you don't have all the details you'd have at Level 7. And finally, the Level 1 calibration, which would be scoring yourself as Level 1 in understanding the crossbow, means you know it's something that shoots arrows. But you don't really know anything functionally different between it and the classic bow and arrow. So what we did is we trained people up on this kind of understanding scale.
We got them to show that they got it, that they could rate seven, four, one, and so on for various bits of self-knowledge. Then we gave them a large list of new items to rate: how well do you understand each of the following items on the same kind of seven-point scale? How a helicopter flies? How a zipper works? How a flush toilet works? How a cylinder lock works? And so on through a large list of items. We had them do this quickly, thinking that if we had them do it quickly, they wouldn't have time to self-test their real knowledge; it would just give us a good impression of how well they think they understand it. Then, having given their initial rating, which we'll call the T1 rating, we would ask them to explain four such items in particular. So we'd say, you rated your understanding of how a helicopter works. Now, write out an explanation of how a helicopter actually works. As a result of having written out that explanation, we'd like you now to reassess your knowledge. So were you accurate? Did you want to raise it even higher? Were you even better than you thought? Or were you worse than you thought? And so they re-rate their knowledge; we call that the T2 rating. Then we ask them a critical diagnostic question, one that gets to the heart of whether they really understand how the system, such as the helicopter, works. For example, in the helicopter case we'd ask them, well, how does a helicopter go from hovering to flying forward? We find that in many cases this completely stumps our participants. And then, as a result, when they re-rate their knowledge, they often drop it further. That would be the T3 rating. Then we show them a full, concise but quite detailed expert explanation of how a helicopter works. We have them read it carefully. We say, given that this is what a really good explanation of how a helicopter works looks like, what do you think your initial understanding was? How close was it to this?
Was it still a seven or a six like you said, or do you want to make it lower, or perhaps even higher? That would be the T4 rating. Finally, having done all of that, we then say, please study this explanation carefully and re-rate how well you think you know how helicopters work, in light of having read this explanation. We included that last item because we wanted to show that we hadn't gotten people so discouraged that they couldn't rate their knowledge as being high.

The typical results look like those shown in this graph. Focus your attention initially on the dark blue line, the upper line. The T1 rating is often quite high. Let's say this is a helicopter: they think they know a great deal about how helicopters work. The T2 rating drops quite a bit, which means that after having tried to explain how a helicopter works, they're surprised at how little they know. The critical question, which is T3, causes a further drop, so when they're asked about how a helicopter goes from hovering to flying forward, they drop further. They stay low when we ask them to compare what they thought they knew to an expert explanation, but then we show that they can jump right back up at the end.

So this was our initial result in this task, and we've replicated it a number of times. But one thing that emerged in our studies was that depending on the background and expertise of our participants, the illusion was often stronger. So the top panel was a study initially done with grad students. We did it with undergraduates, many of whom you might think would be much more humble; they were in fact more arrogant, so their drop is larger. Their initial ratings are just as high, but their actual drop goes further down. Now, it's important to understand that the illusion of explanatory depth is not the same as other illusions of knowing. We find a strong illusion for ratings of how one understands devices and natural phenomena. Those are the two bottom graphs, where there's a strong drop.
But if you have people rate their knowledge of procedures, facts, or narratives, there's very little drop, sometimes none at all. So for example, if you ask, how well do you know how to make international phone calls? People are actually fairly calibrated about whether they know how to do it or not, and their ratings are not that far off. Or if you ask them how well they know certain facts, such as the capital of Tasmania, they tend to be fairly well calibrated. Finally, even for narratives, which have a richer structure, people tend to be better calibrated. If you ask, how well do you know the plot of a well-known movie such as Avatar? They're pretty accurate in their self-assessment of how well they know the plot. So this illusion of knowing, this illusion of explanatory depth, really is especially powerful for an explanatory kind of knowledge, not for all kinds of knowledge. It's weak or not even present for facts, procedures, and narratives. It does hold for political explanations, that is, for how well one can make political arguments or explain political positions. It also holds somewhat for evaluations of what others know, so it's not just self-enhancement. And it increases as people get younger: you can do the same task and ratings of knowledge with young children, and the illusion is even stronger. We want to caution that it's not the same as general overconfidence, because the fact that it's much less for facts, procedures, or narratives suggests that something different is going on.

So why do we have the illusion of explanatory depth? Well, part of the reason has to do with knowledge misattribution effects. One particular reason for the effect may be a function-for-mechanism confusion: people often understand how to use things, and they think that's the same as knowing how they work.
So you might be pretty good at using a cell phone, and think your facility at using a cell phone is somehow indicative of your knowledge of how the cell phone works, when in fact that inference isn't appropriate at all. This may be related to another form of intellectual arrogance that arises from what David Dunning has called reach-around effects. This is the idea that you might think you have a lot of knowledge in a domain. Perhaps you're a member of a certain political party, and you think you know all the positions of your party about global warming, immigration, and the like. Because you think you're really gifted in knowing your party's platform, if I ask how well you understand another issue, one that is allegedly in your platform but is just a made-up issue, you might think you know that as well, because you reach around from your known knowledge to other knowledge that you don't really have, and falsely infer that you have it as well.

There are even further reasons why the illusion of explanatory depth is so strong. One has to do with the relative difficulty of self-testing. This is the idea that we rarely give explanations and we rarely hear full explanations. For that reason, there's relatively little data about how good we are, and how good others are, at giving them. Explanations often have virtually unbounded depth: you can go deeper and deeper and deeper, so we rarely get a full explanation or a sense of its completeness or quality, whereas you often hear things like facts or procedures being described in full. And finally, there may be something called the entity-present bias, which is the mistaken inference that one knows completely how something works because one can figure out how it works when it's right in front of one. For example, if I gave you a simple mechanical stapler and asked you to explain to me how that stapler works, you might actually do a great job with the stapler in front of you.
You could talk about the springs, the push mechanism, the staples themselves and how they're advanced to the stapling location, and the like. But if I asked you to draw me a diagram of how a stapler works with none present, you might miss out on all sorts of critical features, and not be nearly as competent at knowing how it works as you think. So people often confuse the ability to decipher or figure something out on the fly, with the object or entity present, with actually having all of that information in one's head.