Since the 1950s, over 2,000 studies have been conducted on cognitive dissonance, exploring questions like who experiences it, why and how they experience it, and what the implications are. Although there are many different flavors of cognitive dissonance, most situations fall into one of two general categories. The first is "predecisional dissonance," in which dissonance or the possibility of dissonance influences the decisions people make. So, in this case, dissonance comes before people make a decision. The other type is "postdecisional dissonance," in which dissonance (or, again, the possibility of it) follows a choice that's already been made, and efforts to avoid or reduce this dissonance affect later judgments. So, in this case, dissonance comes after a decision. The power of predecisional dissonance was beautifully illustrated in a classic study by Jim Sherman and Larry Gorkin published in 1980. To understand how it worked, consider the following story. A father and his son are out driving. They are involved in an accident. The father is killed, and the son is in critical condition. The son is rushed to the hospital and prepared for the operation. The doctor comes in, sees the patient, and exclaims, "I can't operate; it's my son!" Question: Is this story possible? Most people, particularly before 1980 (when the study was published), would answer that it's not. Their reasoning would be that the patient can't be the doctor's son if the patient's father has been killed. At least, they would reason that way until it occurred to them that the surgeon might be the patient's mother. Now, if you live in a country where women can be surgeons, and this possibility hadn't occurred to you, and if you consider yourself a relatively non-sexist person, there's a good chance you're experiencing dissonance right now—in this case, an uncomfortable feeling that your behavior and beliefs are inconsistent.
Moreover, the theory predicts that if you're experiencing cognitive dissonance, you'll be motivated to reduce that dissonance by behaving in a more non-sexist way than ever, to show yourself that you're in favor of equality between women and men. At least, that's what Sherman and Gorkin hypothesized. In their experiment, they randomly assigned 77 college students, male and female, to one of three conditions in an experiment on "the relationship between attitudes toward social issues and the ability to solve logical problems." In the sex-role condition, students were given five minutes to explain how the story of the female surgeon made sense. In the non-sex-role condition, students were given five minutes to solve an equally difficult problem concerning dots and lines. That way, Sherman and Gorkin could make sure that the results weren't simply due to students working on a difficult problem. And in the control condition, students were not given a problem to solve. In the sex-role and non-sex-role conditions, the experimenter provided the correct solution after five minutes had passed; roughly 80% of the students were not able to solve the problems within that time. Next, the experimenter told students that the study was over, and she gave them booklets for another person's study about legal decisions. This wasn't a surprise, by the way. The students had been told previously that they'd be participating in a couple of unrelated research projects. The experimenter explained that the other researcher was out of town and that they should put their completed booklet into an envelope addressed to that person and drop the envelope in a nearby outbox. Then she left, seemingly never to see their answers to the second experiment. In reality, the second experiment was nothing more than a cover story to collect information on sexism without students detecting a connection to the first part of the experiment. There was no second experiment. Pretty tricky, pretty tricky.
In this part of the study, students read about an affirmative action legal case in which a woman claimed that she had been turned down for a university faculty position on the basis of her gender. Students then gave their opinion about three things: First, what they thought the verdict should be. Second, how justified they thought the university was in hiring a man rather than the woman. And third, how they felt about affirmative action in general. What were the results? Sherman and Gorkin found that compared with students in the control group, and compared with students who were presented with the problem concerning dots and lines, people who had failed to solve the female surgeon problem were more likely to find the university guilty of sex discrimination, less likely to see the university as justified in hiring a male for the job, and more supportive of affirmative action policies in general. Based on these findings, Sherman and Gorkin concluded that students who failed the female surgeon problem tried to reduce their dissonance by acting as pro-equality as possible, trying to show themselves that they weren't biased against women. Let me share one other study on predecisional dissonance—an entertaining and startling experiment published by Ronald Comer and James Laird in an article entitled "Choosing to Suffer as a Consequence of Expecting to Suffer: Why Do People Do It?" In the study, 50 college students were assigned to one of three experimental conditions: worm expectancy/worm choice, worm expectancy/shock choice, and a neutral expectancy control group. I'll explain what those condition names mean in a few minutes, but first, let's just walk through the study. As soon as students entered the laboratory, they saw a table set up for the experimental tasks. On one side of the table, there were eight covered cups weighing different amounts for a weight discrimination task—you know, judging which cup is heavier.
In the worm expectancy/shock choice condition, there was also an electric shock apparatus sitting on the table. And in another area, there was a plate with a dead worm on it, a cup of water, a napkin, and a fork for a worm eating task. And by the way, no worms were harmed in the making of this video—that's a candy gummy worm, not a real worm. The experimenter then read a statement reminding students that they were free to refuse or terminate participation at any time. Three participants immediately pulled the ripcord and bailed out of the study. But the other 47 students remained, and a post-experimental interview indicated that they fully expected to eat a worm before the study was over. Students in all three experimental conditions began by completing a personality questionnaire that asked, among other things, how brave they thought they were and how much they deserved to suffer. Then the experimenter described each of the experimental tasks, seated students in front of the task that they were assigned, and left the room to take care of some preliminary details. Students in the neutral expectancy control group were assigned to the weight discrimination task, and students in the two worm expectancy groups were assigned to the worm eating task—that is, they expected to eat a worm. Ten minutes later, after students seated in front of the dead worm had been given lots of time to contemplate their fate, the experimenter returned and announced that there would be a short delay. The experimenter said, "While you're waiting, I was thinking, it's been quite a while since you filled out the pretask personality survey. I don't even know if it's valid anymore. Would you just fill it out again now?" In this way, the participants were led to provide a second self-rating of how brave they were and how much they deserved to suffer—so the researchers could see whether these ratings changed at all during the ten minutes or so that students had sat in front of their task.
Then, a few minutes later, the experimenter returned and asked students to rate how pleasant or unpleasant the tasks were. Students simply gave their general impressions because they hadn't actually performed the assigned task. Students in the control condition and the worm choice condition evaluated the weight discrimination task and the worm eating task. And students in the shock choice condition evaluated the weight discrimination task and a shock task in which they would give themselves a shock on the hand. Finally, the experimenter shuffled through his papers again and declared that—whoops!—the correct experimental condition hadn't been assigned, and students would now be able to choose the task they wanted to complete. Students in the control condition or the worm choice condition were given a choice between doing the weight discrimination task or the worm eating task, and students in the shock choice condition were given a choice between the weight discrimination task and the shock task—that is, a task that involved the self-infliction of suffering but didn't have to do with worms. After students indicated their preference, they were fully debriefed, meaning that they were informed about the purpose of the experiment. You'll be happy to hear that none of the students were actually required to eat a worm, which may have come as a disappointment to some of them because, when given the choice, 12 of the 15 people assigned to the worm choice condition preferred to eat the worm. And 10 of the 20 people in the shock choice condition chose to give themselves painful electric shocks. That is, a large number of students chose to suffer through a disgusting or painful task rather than opting for the harmless weight discrimination task. In contrast, all 15 people in the control condition chose the weight discrimination task over the worm eating task. 
For people in the control condition, a typical response might be, "Well, you know, thank you for giving me the choice but I think that I'm just going to stick with the weight discrimination task. You know, I've been really trying to cut down on the number of worms that I eat, but thanks for thinking of me." Now, put on your dissonance hat for a second and answer the following pop-up question: Why would so many people choose to suffer? The purpose in asking you this question was simply to stimulate your thinking—it wasn't to see if you have the correct answer, because there isn't a single correct answer. What the researchers concluded is that most students who expected to eat a worm changed their beliefs in at least one of three ways: First, they convinced themselves that eating a worm wasn't that bad after all. Second, they convinced themselves that they were brave. Or third, they convinced themselves that they deserved to suffer. By changing their beliefs this way, students were presumably able to reduce the dissonance caused by agreeing to suffer for no good reason. In commenting on how people will adapt their belief systems to fit their situation, even if the situation is assigned at random as in the case of an experiment, Comer and Laird wrote the following: "Although it is superficially striking to observe people choosing to eat a worm, the more impressive aspect of these findings is the degree to which people will change their conceptual system to make sense of the random events of their lives." One can only wonder whether the same sort of mechanism is involved in other self-destructive situations. For example, think about battered women who choose to stay with their boyfriend or their husband and tell themselves that they deserve to suffer, or that they're brave, or that it's really not that bad.
Well, that's predecisional dissonance because the dissonance is influencing later decisions—whether to stay with a boyfriend, whether to render a guilty verdict against a university, whether to eat a worm, and so on. What about postdecisional dissonance—how does that work? Well, one of the most straightforward demonstrations of postdecisional dissonance—really, about as vanilla as you can get—was published in the 1960s by Robert Knox and James Inkster. This was a very simple study in which they approached 141 horse bettors at Exhibition Park Race Track in Vancouver, Canada—69 people who were about to place a $2 bet in the next 30 seconds, and 72 people who had just finished placing a $2 bet within the past 30 seconds. Knox and Inkster reasoned that people who had just committed themselves to a course of action by betting $2 would reduce any postdecisional dissonance by believing more strongly than ever that they had picked a winner. To test this hypothesis, Knox and Inkster asked people to rate their horse's chances of winning on a seven-point scale on which a 1 indicated that the chances were "slight" and a 7 indicated that the chances were "excellent." According to their results, people who were about to place a bet rated the chances that their horse would win at an average of 3.48, which corresponded to a "fair chance of winning." On the other hand, people who had just finished betting gave an average rating of 4.81, which corresponded to a "good chance of winning." So the hunch that Knox and Inkster had was right. In less than 60 seconds, a $2 commitment increased people's confidence that they had picked a winner. And the same finding holds when people vote. People enter the voting booth with a certain degree of confidence, but after pulling the lever or dropping a ballot into the box, they're more likely to believe that their candidate will win because they don't want to feel dissonance from the thought that they just threw away their vote on a loser.
Two other notes before we end: One about how universal cognitive dissonance is (for example, does it occur in the East as well as the West) and the other about how useful it is to understand cognitive dissonance theory. On the first question, research suggests that cognitive dissonance does occur across the globe, but the form it takes differs somewhat from country to country. Here in the West, we tend to feel dissonance from inconsistencies that might suggest we're incompetent, or bad in some way, whereas people in the East tend to be more concerned about choices and behaviors that could lead to social rejection—for example, bad choices made on behalf of other people. The second question has to do with usefulness, and to address that question, I'd like to discuss a very interesting 2012 study published in the Proceedings of the National Academy of Sciences—a study on cheating. The Snapshot Quiz included the following item on this topic: "If you want to reduce cheating on things like tax returns or exams, would it be more effective to have people sign a declaration of honesty, an honor code, at the top or the bottom of the tax return or exam?" Let's pause so that you can see how you answered. What these researchers found is that people were much less likely to cheat if they signed a statement at the beginning of a tax form, which would make it dissonance-arousing to cheat on the questions that followed. Through a tricky procedure using a mock tax form, the researchers were able to determine that 63% of people in their particular study were honest when they signed a statement at the top that said: "I declare that I will carefully examine this return and that to the best of my knowledge and belief, it is correct and complete." That number, 63%, fell to 21% in another experimental condition when people signed an equivalent statement at the bottom of a tax form, after they had already answered the questions.
And I should add that 36% of people were honest when not asked to sign a statement of any kind—significantly different from 63% but not from 21%. So, the bottom line is that if you want to reduce cheating, the signature shouldn't go on the bottom line. It shouldn't go at the end of an exam or a tax form that has already been completed. Yet this is exactly where taxpayers in the United States are currently asked to sign—at the bottom, where it has no effect on cheating. And that really illustrates the concluding point that I wanted to make—that knowing something about cognitive dissonance theory is useful. In the next video, we'll look at another research area that's got all sorts of practical applications—the psychology of persuasion.