When it comes to intergroup bias, you might be wondering: why are we still dealing with prejudice, stereotyping, and discrimination in this day and age? Haven't we had enough time to work these things out? In the United States, it was roughly half a century ago that President Johnson signed the Civil Rights Act of 1964 and spoke of closing the springs of racial poison. Here's a clip.

>> I urge every American to join in this effort, to bring justice and hope to all our people, and to bring peace to our land. My fellow citizens, we have come now to a time of testing. We must not fail. Let us close the springs of racial poison. Let us pray for wise and understanding hearts. Let us lay aside irrelevant differences and make our nation whole. >>

Clearly, in many ways, the United States is more whole than it was back then, as the election and reelection of Barack Obama suggest. But in the United States and many other countries, the springs of racial poison are far from closed, and other intergroup biases persist as well. Why should that be, after all these years? Well, one reason is that intergroup biases are attached to some relatively slow-moving institutions and larger systems, like culture, law, and economics. For example, to the extent that neighborhoods and schools remain segregated, it's hard to reduce prejudice and stereotyping beyond a certain point; you really need positive intergroup contact. But there's another reason, and that is that intergroup biases don't always feel like biases, either because they operate outside our awareness or because they don't seem to be harming anyone, so they are very slow to change. In this video, I'll mention just three examples. First, the outgroup homogeneity bias. Second, positive stereotypes and benevolent forms of prejudice. And finally, ingroup favoritism and affinity.

Let's begin with a discussion of the outgroup homogeneity bias, which is a well-documented tendency for people to see outgroup members as more alike from one to the next, more homogeneous, than ingroup members. For example, in the United States, Democrats might see more variation within their own political party than within the Republican Party. They'd say, "Democrats come from all walks of life, whereas Republicans tend to be more similar from one to the next." But Republicans might say just the opposite. Now, you could be thinking, "Well, of course they see more variation in their own group—they know more members of the ingroup than the outgroup." And certainly, that factor can play a role, but there's more to it than that. For one thing, some studies have found that the outgroup homogeneity effect is not related to the number of people you know in each group. Also, the bias takes place with groups that have quite a bit of contact with each other, like females and males. How many times have you heard women say, >> "Oh, you men are all alike." >> And how many times have you heard men say, >> "You women are all alike." >> That's the outgroup homogeneity bias.

Now, why does it matter? Well, in large part, it matters because reducing members of an outgroup to a single identity is just one step away from stereotyping them. So, perceiving an outgroup as homogeneous might not feel like prejudice, but it can lead to stereotyping and discrimination. The good news is that if you flip things around and get people to think about differences among outgroup members, prejudice and discrimination can actually be reduced.
This finding was reported in 2011 by researchers in France who conducted a study in which students were told that they'd be participating in a pair of unrelated experiments, one on memory and the other on hiring decisions, when in reality they were participating in a two-phase experiment on prejudice. In the memory phase, students were randomly assigned to one of three conditions. In the first condition, students were shown eight photos of Arabs and asked to write sentences about how the individuals were similar to each other, supposedly to prepare for a later memory test on the faces that were shown. Students were also asked to do the same thing with eight abstract paintings. In the second condition, students were shown the same photos and asked to write sentences about how the individuals differed from each other, and after that to again do the same thing with eight abstract paintings. And in the control condition, students weren't asked to write any sentences; they were simply shown the Arab individuals and the abstract paintings.

The researchers were kind enough to send me photos so that our class can see the sort of images that were used in the study. As you can see, the images present Arab people who differed in gender, age, and style of clothing.

Then, in what seemed to be a second experiment, students were asked by a different researcher to evaluate four job candidates for a sales position. Roughly half the time, the strongest candidate's CV had a male French name, and roughly half the time, the same CV had a male Arab name. The results showed that students who wrote about Arab homogeneity, as well as students in the control condition who didn't write any sentences, later discriminated against the Arab candidate, whereas students who wrote about the variability of Arabs showed no job discrimination at all—that is, it didn't matter whether the candidate had a French name or an Arab name. So even though perceptions of variability might not seem like much of a bias, we can use our knowledge of the outgroup homogeneity bias to reduce discrimination, by deliberately focusing on the variability of outgroup members.

Now, you might ask, is the outgroup always seen as more homogeneous? And the answer is no. Meta-analyses suggest that the effect is strongest when the ingroup is relatively large, and when the ingroup and outgroup are enduring, real-life groups, not simply temporary groups created artificially in a laboratory. If the ingroup is fairly small and the attributes in question are important to its identity or stereotypically associated with the group, then the outgroup homogeneity effect may disappear or even reverse. For example, in 2011 a team of British researchers found that female nurses, who are in the majority, tend to see male nurses as more homogeneous than they see female nurses, whereas male nurses show an ingroup homogeneity effect, meaning that they see members of their own group as more homogeneous. Likewise, the study found that male police officers show the traditional outgroup homogeneity effect toward female officers, but female officers show an ingroup homogeneity effect. Let's pause for a pop-up question, just to make sure that these findings are clear.

So, the bottom line is that in some cases there's actually an ingroup homogeneity bias, but more often people tend to see outgroups as relatively homogeneous—a bias that can lead to stereotyping even though it doesn't always seem that we're behaving in a biased way.
As I mentioned earlier, a second way that intergroup biases can occur without it seeming like anything's wrong is through positive stereotypes and benevolent forms of prejudice. What do I mean by "positive" stereotypes? Well, for example, in the United States, Asian people are often stereotyped as being good at math. Hispanic people are stereotyped as being family-oriented. African-American men are often stereotyped as having athletic ability. And women are stereotyped as being more nurturing than men. All of these stereotypes feel like compliments, and all have a kernel of truth in which the stereotype accurately describes some members of the group, but they're all overgeneralizations that reduce a diverse collection of people to a single type—to a stereotype.

One of the most interesting ways that this dynamic plays out is with respect to sexism. Social psychologists Peter Glick and Susan Fiske have proposed that, in addition to hostile forms of sexism that show contempt toward women, there's also such a thing as benevolent sexism. Here's what they say: "Some forms of sexism are, for the perpetrator, subjectively benevolent, characterizing women as pure creatures who ought to be protected, supported, and adored… This idealization of women simultaneously implies that they are weak and best suited for conventional gender roles; being put on a pedestal is confining, yet the man who places a woman there is likely to interpret this as cherishing, rather than restricting, her (and many women may agree)." In other words, unlike forms of prejudice in which the target group is hated, benevolent sexism can take place even when the target of prejudice is loved, and because of that, women themselves may embrace, or at least tolerate, benevolent sexism.

To illustrate what benevolent sexism looks like, I'd like to share with you two vintage TV commercials that aired sometime around the 1960s. The first is a Goodyear tire advertisement that presents women as vulnerable and in need of a man's protection.

>> This flat tire needs a man. But, when there's no man around, when there's no man around, Goodyear should be. Why? Watch this. New Goodyear Double Eagle, carries its own spare inside. Lifeguard safety spare. A tire in a tire. Keeps on going. Next time, give her a second chance. >>

So the message is that women need the protection of either a man or a lifeguard safety feature—that women need to be guarded and protected. In the next TV commercial, one woman gives another advice on how to make good coffee for her husband—a traditional gender role. And when the husband compliments his wife on the coffee, she looks like she's just won the lottery.

>> Folgers is different. They blend it special. And Folgers is mountain grown coffee.
>> Mountain grown?
>> That's the richest kind. You try it.
>> Your coffee, Sir.
>> Oh, thanks Honey.
>> You're welcome.
>> It's great, Honey! How can such a pretty wife make such great coffee?
>> I heard that.
>> Try Folgers, the mountain grown coffee—mountain grown for better flavor. >>

Of course, things have improved since the time these commercials were made, but benevolent sexism is still very much with us. Here, for example, is an advertisement that ran in the February 2013 issue of Woman's Day magazine in which a woman is not only shown in a traditional gender role, but in fact, her apron is more like a costume or a uniform than a functional accessory, because she's simply boiling a plastic bag for ten minutes—not something you need an apron for.
What these examples suggest is that you don't need to dislike a target group—in this case, women—in order for one group to be treated as lower in status than another. And this point fits perfectly with one last example of how intergroup bias can fly under the radar without being detected. As I mentioned in the previous lecture, discrimination and bias often have less to do with attitudes toward the outgroup than with ingroup favoritism—that is, preferring the ingroup over the outgroup, which doesn't always feel like intergroup bias. After all, we might treat our friends and family members with more consideration than we would strangers, but does that mean that we're biased against strangers? I think most of us would say no, but the net effect can still disadvantage the outgroup.

In fact, the Social Psychology Network partner site UnderstandingPrejudice.org has an interactive demonstration that shows how small preferences to be with ingroup members can lead to surprisingly strong patterns of segregation. To take just one example, here's a 10-by-10 board with two groups, represented by green and blue tokens, randomly distributed across the squares. Suppose that each green token wants to have at least 3 green neighbors in adjacent squares, and each blue token wants to have at least 3 blue neighbors—a very modest preference to be with ingroup members. No one's asking here to be in an all-green neighborhood or even in a majority-green neighborhood. Each token that doesn't have its preference satisfied has a red X. And when you place your cursor over the tokens with the red X, gold squares show all the places where the token might be moved to become happy—that is, to have its preference satisfied. I'll move just a few pieces and you can see the red Xs disappear. But what's the end result if we keep doing this? To save time, I'll click "AutoComplete," and you can see how segregated the community becomes. So, very minor preferences at the individual level can have powerful consequences at the group level.

Fortunately, it's also the case that a preference to be near even one outgroup member is enough to reverse the effect and reduce segregation. For those of you interested in seeing how that dynamic works, let me invite you to visit UnderstandingPrejudice.org and take just five or ten minutes to go through the segregation demonstration.
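For readers who would like to see the logic of that demonstration spelled out, here is a minimal Python sketch of a Schelling-style segregation simulation. To be clear, this is not the code behind the UnderstandingPrejudice.org demonstration: the grid size, the threshold of three same-color neighbors, the presence of empty squares, and the rule for moving unhappy tokens are all assumptions chosen to mirror the description above.

```python
import random

SIZE = 10            # 10-by-10 board, as in the demonstration
MIN_SAME = 3         # each token wants at least 3 same-color neighbors
EMPTY_FRACTION = 0.2 # assumption: some squares are empty so tokens can move

def make_board():
    """Randomly scatter green ('G') and blue ('B') tokens, leaving some squares empty."""
    cells = [None if random.random() < EMPTY_FRACTION else random.choice("GB")
             for _ in range(SIZE * SIZE)]
    return [cells[r * SIZE:(r + 1) * SIZE] for r in range(SIZE)]

def neighbors(board, r, c):
    """Tokens in the (up to eight) adjacent squares."""
    return [board[r + dr][c + dc]
            for dr in (-1, 0, 1) for dc in (-1, 0, 1)
            if (dr, dc) != (0, 0)
            and 0 <= r + dr < SIZE and 0 <= c + dc < SIZE
            and board[r + dr][c + dc] is not None]

def is_unhappy(board, r, c):
    """A token gets a 'red X' if it has fewer than MIN_SAME same-color neighbors."""
    token = board[r][c]
    return token is not None and neighbors(board, r, c).count(token) < MIN_SAME

def step(board):
    """Move one unhappy token, preferring a 'gold square' where it would be happy."""
    unhappy = [(r, c) for r in range(SIZE) for c in range(SIZE) if is_unhappy(board, r, c)]
    empty = [(r, c) for r in range(SIZE) for c in range(SIZE) if board[r][c] is None]
    if not unhappy or not empty:
        return False  # everyone is happy, or there is nowhere to move
    r, c = random.choice(unhappy)
    token = board[r][c]
    gold = [(rr, cc) for rr, cc in empty
            if neighbors(board, rr, cc).count(token) >= MIN_SAME]
    rr, cc = random.choice(gold or empty)
    board[rr][cc], board[r][c] = token, None
    return True

def same_color_share(board):
    """Average share of each token's neighbors that match its own color."""
    shares = [neighbors(board, r, c).count(board[r][c]) / len(neighbors(board, r, c))
              for r in range(SIZE) for c in range(SIZE)
              if board[r][c] is not None and neighbors(board, r, c)]
    return sum(shares) / len(shares)

board = make_board()
print("before: %.2f" % same_color_share(board))
for _ in range(2000):  # roughly the effect of clicking "AutoComplete"
    if not step(board):
        break
print("after:  %.2f" % same_color_share(board))
```

Even with this modest three-neighbor preference, repeated runs of the sketch typically end with tokens surrounded mostly by their own color, which is the point of the demonstration: small individual preferences can produce strong segregation at the group level.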
Meanwhile, I'd like to share one other example of how ingroup favoritism can have much the same effect as prejudice even when there are no negative feelings toward the outgroup. In 2005, a team of British psychologists led by Mark Levine studied prejudice in an experiment modeled after the Good Samaritan study that we discussed earlier in the course. In Levine's study, British fans of a particular soccer team answered a couple of surveys that asked them about their home team, and then they had to walk between the psychology department and another building to view a videotape. Here's an aerial view of Lancaster University, where the study took place. The psychology department was here, and each participant followed this path, at which point a jogger cut across this grassy area, tripped about 15 feet away from the participant, landed on the ground, and shouted in pain while holding his ankle. In reality, the jogger was a confederate, and the main dependent variable was whether people would help. The answer is that they did help, 12 out of 13 times, when the jogger happened to be wearing a home team shirt. On the other hand, joggers wearing a rival team shirt were helped only 3 times out of 10, and joggers who wore a sport shirt with no team logo were helped only 4 times out of 12. So, it wasn't that people treated an outgroup member worse than anyone else—it was that they treated an ingroup member better than anyone else.

Of course, the million-dollar question is whether there's an effective way to reduce intergroup biases, and I'm pleased to say that the answer is yes. This week's assigned reading covers several effective techniques, but if we return to the soccer study, the researchers found a very simple way to increase helping and reduce intergroup bias. In a second experiment that was part of the same study, they changed the initial surveys to ask about being a soccer fan rather than about being a fan of a particular team. And when they did this, people helped 8 out of 10 times when the jogger wore a home team shirt, 7 out of 10 times when the jogger wore the rival team's shirt, but only 2 out of 9 times when the jogger didn't wear a soccer team shirt of any kind. So, by getting people to think about their shared identity as soccer fans, rather than their identity as supporters of a particular team, the researchers got people to help joggers who supported the rival team.

Well, this gives you a general overview of intergroup biases beyond traditional forms of prejudice in which hatred, fear, and other powerful emotions play a central role. One question that I've explored in my own research is whether some of these biases operate not only when the outgroup is of a different race, religion, gender, or sexual orientation, but also when the outgroup is of another species. For instance, do we see members of another species as more alike from one to the next than they really are—a sort of outgroup homogeneity effect in which animals are seen as relatively interchangeable with one another (cows are cows, pigs are pigs, and so forth)? Second, do humans show species-based ingroup bias, and if so, what are the consequences for how we think about animals? For example, if you eat meat, do you think of it as coming from something or someone? If you eat dairy products, do you associate them with a lactating animal? And if not, why not? Does it make sense to talk about being prejudiced toward certain animals, or does prejudice really require a human outgroup? And even if your view is that prejudice can only run from human to human, what can we learn about prejudice from how we think about animals?

Here, for example, are some questions that I posed in a 2003 book on prejudice and discrimination. Is it significant that the word "mulatto" (often used as a synonym for half-breed) shares its etymology with "mule"? Or that the word "race" emerged from terms for animal breeding? Does it mean anything that the word "husband" shares a common origin with animal husbandry, or that rape was originally classified as a property crime?

I hope that you'll use the discussion forums to consider these questions, as well as your own questions about the psychology of prejudice. You have a rare opportunity to interact with people from all over the world. All I ask, as before, is that you respect cultural differences and keep the conversation positive and focused on psychology. To help promote dialogue, the teaching assistants and I have posted a starter set of "springboard" questions designed to help everybody dive into the discussion. What you'll see are a whole host of sub-forums and discussion threads just waiting for you to jump in!