Most people who write algorithms aren't racist, and don't want to write algorithms that have racist consequences or make racist decisions. In fact, race may be the farthest thing from their minds when they write the algorithm. Nevertheless, the way the algorithms actually operate could have significant racial implications. For an example, let's watch this video clip of Professor Latanya Sweeney, a leading researcher on computer privacy. [MUSIC]

>> One day, I'm sitting in my office, and a reporter was there, and he types my name into Google because we were looking for an old paper of mine. And up pops this ad implying I had an arrest record. Meanwhile, I show him the result we were looking for. He says, no, no, tell me about this arrest. [LAUGH] And I said, well, I've never been arrested! And he says, then why does it say this? And we go on and on, and we began searching more and more names. And we started noticing this pattern. And he jumps to the conclusion, it's because you have one of those black names, he says. And I said, that's ridiculous. It's a computer; computers can't be partial. And I would spend hours and hours trying to show him that he was wrong. And eventually, I do this cross-country, 120,000-ad research project, where I learned that it was absolutely true that these ads showed up implying an arrest. It had nothing to do with the company, or with whether there was an arrest record under that person's name. But if you had a name that was primarily given to black babies, you were more likely to get an ad indicating arrest than if you had a name more often given to white babies.

>> So let's think a little bit about why what we saw may have happened. When search engines choose which ads to show, they do so based on the ads that are likely to be of interest, the ads the user is likely to respond to. And if they have figured out that somebody searching for the name Latanya is likely to be interested in services that might be provided to criminal defendants, such as a bail bond service or something you might want if you were arrested, that's a reasonable ad to show. That's an ad that this person, whom you don't know, is more likely to respond to than the thousands of other ads you could possibly be showing. This decision is purely a statistical frequency kind of decision the algorithm has made, with no thought whatsoever toward race, or even toward the racial implications of certain names, like Latanya, being more prevalent in the African American community. Nevertheless, there can be real implications, as we just saw in this video clip. And understanding these real implications of supposedly neutral algorithms, operating in a completely data-driven manner, is something that anybody who writes algorithms or practices data science needs to think about and own the consequences of.
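To make that "purely statistical frequency" decision concrete, here is a minimal sketch of frequency-driven ad selection under a simple assumption: for each search term, the engine shows whichever candidate ad has the highest click-through rate estimated from historical logs. The click log, ad names, and rates below are invented for illustration; nothing here describes Google's actual system. Notice that race never appears as a feature anywhere in the code.

```python
from collections import defaultdict

# Hypothetical click log: (search term, ad shown, was it clicked?).
# The selector sees only terms and clicks; race is never a feature.
click_log = [
    ("latanya", "arrest-record-ad", True),
    ("latanya", "neutral-ad", False),
    ("latanya", "arrest-record-ad", True),
    ("kristen", "arrest-record-ad", False),
    ("kristen", "neutral-ad", True),
]

def estimate_ctr(log):
    """Estimate the click-through rate for each (term, ad) pair."""
    shown = defaultdict(int)
    clicked = defaultdict(int)
    for term, ad, was_clicked in log:
        shown[(term, ad)] += 1
        clicked[(term, ad)] += int(was_clicked)
    return {pair: clicked[pair] / shown[pair] for pair in shown}

def choose_ad(term, ctr, candidate_ads):
    """Show the ad with the highest estimated CTR for this term."""
    return max(candidate_ads, key=lambda ad: ctr.get((term, ad), 0.0))

ctr = estimate_ctr(click_log)
ads = ["arrest-record-ad", "neutral-ad"]
print(choose_ad("latanya", ctr, ads))  # arrest-record-ad
print(choose_ad("kristen", ctr, ads))  # neutral-ad
```

If past users happened to click the arrest ad more often on searches for names more common among black babies, this selector will keep serving that ad for those names, reproducing the disparate pattern Sweeney measured even though the code contains no notion of race at all.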