Welcome back. This lecture builds directly on the last one, and is one of the most complicated and theoretical of the course. It's all about twists and turns when it comes to how people explain behavior. So, if you have trouble following any part of it, don't lose hope. What I would suggest is you slow down the video playback speed, consider watching the video a second time, or use the discussion forums to post any questions that you have. Also, next week's readings will cover attribution theory, including some of the research in this lecture. So again, don't lose hope, stay with it, and if English isn't your first language, let me simply remind you that we have a language help button on our course page. So, in the last lecture we discussed Harold Kelley's formulation of attribution theory. Remember, he proposed that people typically explain behavior in terms of the person, entity, or time, and that they base these attributions on three sources of information: consensus, distinctiveness, and consistency. For the most part, studies have supported this framework, but there's one major exception: people don't always pay attention to consensus information when they make causal attributions. Some of the first researchers to document this problem were Dick Nisbett and Gene Borgida, two social psychologists who argued that when it comes to making attributions, there's lots of evidence that people use distinctiveness information and consistency information, but much less evidence that they take consensus information into account. Nisbett and Borgida then conducted a very clever study that showed the degree to which people sometimes ignore consensus information. They told the participants in their study about two previously conducted psychology experiments, both of which had yielded surprising results. In one of the experiments, 32 out of 34 people willingly received electric shocks in an experiment supposedly on skin sensitivity. 
And in the other experiment, 11 of 15 people failed to help someone who appeared to be having a seizure until the person began choking, and 6 of 15 never helped the other person at all. These are actual results. Now, some of Nisbett and Borgida's participants were given the same consensus information that I just gave you—that is, 32 out of 34 people behaved in a certain way. That was the consensus behavior in that situation. Others were simply told about the experimental procedure—they weren't given any consensus information at all. Then, Nisbett and Borgida asked their participants several questions, including two key items: First, on a seven-point scale, would you say that the behavior of a particular subject, Bill, in the shock experiment, who willingly received the most extreme shock, or Greg in the seizure experiment, who failed to help at all, was due to that individual's personality or to the situation? And second, how would you have behaved if you had been a participant in the experiment? What Nisbett and Borgida found is that giving people consensus information made no significant difference. Even when people knew that the majority of participants in the original experiments had received a shock or failed to help, they made dispositional attributions for Bill and for Greg—that is, attributions based on the person's character or disposition. Consensus information also failed to affect judgments of how people thought they would have acted had they been in the original studies. Telling people that 32 out of 34 participants behaved a certain way made no significant difference in how people thought they would behave. And the same pattern of results was generated by our class in the Snapshot Quiz, which asked the following: If you saw someone having a seizure (for example, someone falling to the floor and beginning to shake uncontrollably), what would you do? Over 90% of you answered that you'd probably or definitely help, and I have no doubt that many of you would.
Let's pause so that you can see your answer. Where the results get interesting is in the next Snapshot Quiz item, which gave you consensus information that nearly three out of four people did not help in this situation. Just as Nisbett and Borgida found, this extra information made relatively little difference. The overwhelming majority of students in our class said, in effect, "Well, I would have helped even if other people didn't." Again, let's pause so that you can see your answer. Now, of course, the pattern that I'm describing doesn't fit everyone, and it's very possible that you're in the minority who would actually help, but people oftentimes show what this week's assigned reading calls the "false uniqueness effect"—a false belief that when it comes to our good deeds and other desirable behaviors, we're more unique than we really are, a false belief in which we see ourselves as a cut above the pack, which, of course, not all members of the pack can be. Since the time of Nisbett and Borgida's research, studies have found that people do pay attention to consensus information in some instances, but surprisingly often, knowledge about what other people do has relatively little effect on causal attributions. In fact, the tendency to underestimate the impact of situational factors and overestimate the role of dispositional factors—factors unique to the individual— is known in social psychology as the "fundamental attribution error." The fundamental attribution error is a true error, not simply a bias or a difference in perspective, because people are explaining behavior in terms of an individual's disposition even when you can demonstrate that the person's disposition had nothing to do with why the behavior occurred. 
For example, in one of the earliest studies published on this topic, people were presented with an essay written by someone who was either forced to defend a politically unpopular position or someone who was described as having free choice in selecting a position, and even when people were told directly that the essay's author was roped into taking an unpopular position, they tended to attribute that position to the author. In other words, when the experimenter said, "We asked the author to take this particular position in the essay, but we'd like you to guess what the author really believes," people tended to rate the author as actually believing what was written in the essay. The social psychologist who coined the term "fundamental attribution error" is Lee Ross. And I thought you might be interested in seeing a brief video clip of him talking about the error and its implications. >> Now, the fundamental attribution error, therefore, really relates in an intimate way to the central task of psychology. Psychology is interested in sort of teasing apart the role of personality and the role of the situation, but every individual layperson in their everyday life is trying to do exactly the same thing when they see behavior. They're trying to say, why did the actor do it? What do I learn about the actor? What implications might it have for the way other people would behave? The truth really is that the fundamental attribution error relates to the fundamental mission of social psychology, as I said, which has to do with appreciating the power of the situation. And most of our most famous classic experiments, the Asch experiment and the Milgram experiment, and all the other things that students typically learn in introductory psych classes, really depend on this error. 
It's the fact that we think behavior is controlled by stable traits or dispositions that makes us very surprised when we see that a social psychologist who cleverly arranges the situation can get people to be highly obedient, or highly altruistic, or highly conforming or even highly destructive and aggressive in their behavior. It sort of shocks us, and the reason it shocks is that we haven't given adequate weight to exactly the feature of the situation that's responsible for the actor's behavior. >> For those of you who read the amazing article by David Rosenhan, "On Being Sane in Insane Places," you can see how the fundamental attribution error might operate in psychiatric settings. When the pseudopatients, for example, got bored and began to pace back and forth, they were seen as emotionally disturbed, as having a dispositional problem. Or when they took notes as part of the investigation, their behavior was interpreted as evidence of an underlying mental illness: "Patient engages in writing behavior," something along those lines. One question over the years that researchers have asked about the fundamental attribution error is how fundamental it actually is. For instance, does it occur just as often in the East as it does in the West? The answer here appears to be "No." When situational factors are fairly obvious, East Asians are much less likely than Westerners to commit the fundamental attribution error. So, to the extent that the error is truly fundamental, it's much more the case here in the West than it is in the East. Compared to the East, Western cultures focus more on rugged individualism, on the self-made person rather than the group or the situation. Now, it's important not to confuse the fundamental attribution error with a closely related phenomenon known as actor-observer differences in attribution, so I want to talk to you about that for a little bit. 
And by the way, in social psychology, an "actor" is simply someone who takes an action—it's not somebody in the movies. The classic finding here is that actors are more likely to explain their behavior as a function of situational factors than are observers (that is, people watching the actor behave). Unlike the fundamental attribution error, which is truly an error, the actor-observer difference in attribution is simply a difference, a bias in viewpoint; there's not necessarily a right or wrong answer. You say that you're late to work because the traffic was bad (a situational attribution), but your boss says that it's because you're unreliable (a dispositional attribution). That's an actor-observer difference in attribution. When are these differences most likely to occur? Well, a huge meta-analysis of 173 different studies found that actors do downplay dispositional explanations for their behavior, but mainly when their behavior or the outcome is negative. For example, when people fail an exam or crash a car, they're less likely than observers to attribute the negative outcome to their ability level or to other personal characteristics. In contrast, if the behavior or event is positive, this difference often reverses, with people attributing their success to dispositional factors. In other words, contrary to the classic formulation, the meta-analysis found that to the extent actor-observer differences exist, they're often self-serving biases that can cut in either direction, with actors avoiding dispositional attributions when the outcome is negative, but not when the outcome is positive. So, one reason—quite possibly, the main reason—for actor-observer differences in attribution is that people don't want to look bad, either to themselves or to outside observers. But some researchers have argued that there's another ingredient that might be fueling actor-observer differences— something that we covered in the last video: salience. 
To actors, especially actors explaining their role in a negative outcome, the most salient thing is often the situational obstacles that they faced. So, you'd expect actors to either view the situation as relatively causal or at least not focus heavily on their own disposition. But to observers, the most salient thing is typically the actor—the person they're observing, so you'd expect observers to explain behavior in terms of the actor's personal characteristics. Of course, to the extent that observers blame the actor when things go wrong and actors blame the situation, there's a potential for conflict, so it would be great if there were a way to minimize these differences, and it turns out that in fact, there is. Through a little psychological judo, the relationship between salience and causal attribution can actually be used to reverse the classic actor-observer difference in attribution. The person who figured this out was Michael Storms, who published a study on this issue 40-some years ago. But before I describe the study, I should also warn you that the findings and the role of salience itself in generating attributional differences are still being debated by researchers, several of whom I consulted before taping this lecture. So, the best I can say is that this area of attribution research is a work in progress. Anyhow, the procedure that Michael Storms used was simple. The study involved 30 sessions, each with four participants randomly assigned to play a particular role: two actors who held a five-minute get-acquainted conversation that the experimenter videotaped, and two off-camera observers, who watched the actors have their conversation. During these conversations, two videotapes were made. One shot from Actor 1's perspective looking at Actor 2, and another from Actor 2's perspective, looking at Actor 1. Here's a rough sense of what the experimental procedure looked like. 
The two people in the center are the actors, and you can see an observer and a video camera facing each actor. After the get-acquainted conversation, participants were randomly assigned to one of three conditions. In the same orientation condition, observers viewed a videotape of the same actor they had been watching, and actors viewed a videotape of their conversation partner. So, basically it was just like a rerun from the same perspective. Here, for example, is what the actor on the left would be shown. In the new orientation condition, observers viewed a videotape of the actor they had not been watching, and actors viewed a videotape of themselves, thereby reversing their visual orientation. And in the no-videotape condition, participants didn't watch a videotape of the conversation. What the study found is that when people were asked to explain the actor's behavior, participants who either watched a videotape from the same orientation or watched no videotape at all displayed the classic actor-observer difference in attribution, but participants who viewed a videotape shot from the opposite perspective showed a reversal of the classic difference. In other words, when actors watched themselves, they tended to make dispositional attributions for their behavior, and when observers watched the conversation from the actor's point of view, they tended to make situational attributions for the actor's behavior. So, once again we have a demonstration that visual orientation can affect how people explain behavior, including their own behavior. In the next video, we'll move from explanations of behavior to behavior itself—specifically, the question of whether attitudes and behavior are closely connected to each other— a topic of research that's turned up some surprising results. First, though, let's end with a pop-up question. Abracadabra!