Hello and welcome to our module on talent analytics, an important part of people analytics. You might wonder: what is talent analytics, and what is talent management? This is something that gets talked about in a lot of quarters, and people have different perspectives and different definitions. For our purposes today, we are going to treat talent analytics as talent assessment and development. We are interested in identifying differences in ability, and we're interested in developing those abilities so that everyone in a firm is developed to their full potential. So this is a particular take. You can think of it as something deeper than performance evaluation. We had a module on performance evaluation, and at that point we tried to keep everyone thinking about employees as relatively equal; the question there was whether an employee is working hard or slacking off. Now we want to go deeper and ask about assessing differences between employees. This is a harder task. It usually takes longer, and there are unique challenges involved. So that's the topic for today: employee evaluation. We're going to approach it from an analytic perspective, so we'll call it talent analytics.

A motivating case for talent analytics is promotion. This is the canonical reason you would need to assess employee ability, and it is hugely important to firms: who do you put into positions of power? Usually there are many contenders, so it's a challenging assessment exercise. Adding to the difficulty, the measures on which you're making this assessment are typically ambiguous. So what are you going to do, and importantly, how do you do this well? This is where we're going. This is the kind of thing we want to tool you up to do better, so we want to give you a few guidelines.

Here is the outline for the day. We're going to talk chiefly about the challenges involved in talent analytics. There are some well-known traps that people fall into, and even though data is helpful, data can actually exacerbate some of these traps. Before we leave, we're going to take up a special topic, one of unique interest in the last year or two, around tests and algorithms. And then finally we're going to talk about some prescriptions, some practical tips for doing talent analytics better.

So, first up: challenges. Data is good, and a lot of data is typically better, but data can also be misleading. If you're doing talent analytics, you're probably working with data. You're crunching numbers: performance evaluations, test scores, 360 feedback, sales figures, employee morale, whatever you can get your hands on. If you're resourceful, you're probably crunching those numbers. But what do they mean? Before you can draw any strong inferences, you've got to navigate a few challenges. In fact, it's critical, if you're going to do talent analytics well, that you navigate these challenges. There are four that we're chiefly interested in and that we're going to focus on today: context, interdependence, self-fulfilling prophecies, and reverse causality. Each of these will be a separate segment, and we want to start with context.

So, it is well understood in psychology, and increasingly understood in organizations, that people neglect context when evaluating performance. We tend to believe, when we see someone perform, that the performance is due to some unique individual skill or some unique personality trait.
And we underestimate, we under-attribute, the situation the person was in. So what are situational factors? They might have had an easy task or a difficult task. They might have had a helpful team or an unhelpful team. They might be working in a good economy or a bad economy. These are very influential factors, and they don't have anything to do with whether the person is a nice guy or a bad guy, a hard worker or a slacker. These are situational factors that we tend to underestimate when we're trying to infer whether someone is good at their job. This is so well known that it's called the fundamental attribution error. In psychology it's been a focus of study for more than 40 years now, and we know that people are inclined to blame individual traits, personality traits, as opposed to situational ones. In this segment, what we're trying to get you to do is go back and consider those situational factors, consider the context. When you're crunching the numbers, you have to figure out ways to make sure you're considering the context.

There's a saying on Wall Street that is designed to help offset this bias. The saying is: don't confuse brains with a bull market. What they mean is that in a bull market, everybody's making money. Everybody's trades seem to work out, and people very readily infer that they are good, or that their portfolio manager, their investment manager, is good. And the saying says: hey, hold on, that was basically an easy situation. The context was that everyone made money. They've boiled that down to a very pithy comment: don't confuse brains with a bull market. We want to build those kinds of helpful, corrective heuristics into the way you evaluate your talent.

So, a couple of examples. We did an example of this in the performance evaluation module. You may recall, if you've seen that module already, that when we looked at an extended example from the NFL of evaluating draft picks, and in particular teams' ability to pick players, to hire the right players, the first thing we did was norm for expectations as a function of where the team was drafting. Teams that draft high in the NFL draft should get better players because there are better players available; teams that draft low should get worse players. So we already demonstrated this technique of norming for expectations, and we want to build on that here.

What I want to do now is give you a new example, also from the world of sports, but one that is especially designed to convey this idea of context. This is from American baseball, mid-20th-century American baseball, and it comes to us from Bill James, who wrote about it in one of his books. It's a comparison of two second basemen, interesting for a few reasons: Bobby Doerr of the Boston Red Sox and Joe Gordon of the New York Yankees. They played at about the same time, they played the same position, they were both from Southern California, and they were both very successful. But after their careers, one was much more lauded than the other. Bobby Doerr went on to Hall of Fame election quite readily and, with the passing of years, came to be considered the better of the two players. Joe Gordon, who at the time had been well regarded, slipped significantly behind Bobby Doerr and in fact wasn't elected to the Hall of Fame until Bill James started this campaign and this analysis. So let's dig into the numbers.
Much as you might dig into the numbers when deciding whom to promote in your firm, let's dig into these numbers and see what they tell us, and what mistakes we could fall into if we're not careful. I'm going to show you some baseball stats. It doesn't matter if you understand baseball; all that matters is that in a few key columns, higher is better. You can think of these as sales in your organization, as employee morale, as 360-degree feedback, whatever. These are performance measures for the two players, Bobby Doerr and Joe Gordon. What I'm showing you are their batting statistics in their home ballpark. They spent essentially half of their careers in their home ballpark and the other half on the road playing away games. What's interesting about these two is that their home ballparks were quite different. Bobby Doerr played at Fenway, a small, hitter-friendly park. Joe Gordon played in Yankee Stadium, famously cavernous and not as friendly to hitters.

What do you see? If you look across the key stats, Doerr had quite a few more hits, more home runs, and more RBIs (runs batted in). His batting average was a little higher, his on-base percentage was a little higher, and so was his slugging percentage. Across the board, he's better in his home park. But we've already said they're batting in very different places. In other words, the context isn't the same for these two guys. If we want to evaluate them, we need to put them on a level playing field; we need to make sure it's apples to apples. And the best way to do that is to look at the other half of their record: what they did in the visitors' parks.

So what does that look like? Here the story is very different. Doerr still has more hits, because in fact he has more at bats, but he has significantly fewer home runs, significantly fewer runs batted in, a worse batting average, a worse on-base percentage, and a lower slugging percentage. Across the board he performs worse than Gordon, even though the line looked very different when you were looking at his home park. This, though, is where we've contextualized the performance. This is where it's a level playing field. If you don't break it out like this, if you only look at their career numbers, you don't get this picture; you think they're about the same, or maybe that Bobby Doerr is better. When you contextualize the performance, when you ask how they compare on a level playing field, it's quite clear that Joe Gordon was the better batter. By the way, they were both known as good fielders, so the batting is what really distinguished them. Once Bill James made this case, once he broke out the stats this way, in other words contextualized their performance, he made a strong argument for Joe Gordon making the Hall of Fame. And in fact, because of Bill James, Gordon was elected to the Hall of Fame. This is the kind of promotion decision we want your firms to make. We want you to avoid the mistake the Hall of Fame made for years; we want you to add this kind of context.

One more quick example, from a more recent sporting event: the World Cup a year and a half ago now, in 2014. There was a match where the U.S. was beaten by Belgium, I believe, and beaten handily. In that match the U.S. goalie, Tim Howard, faced an unprecedented number of shots on goal. He let two goals in, so it might not look like that strong a performance.
But in the context of what he faced, it turned out to be an extraordinary performance. Nate Silver of ESPN's FiveThirtyEight, famous for forecasting U.S. elections, did a little analysis after this game, after the whole World Cup as a matter of fact, and asked: who in the World Cup performed best? Who was the best goalie during this particular tournament? And he named Howard's performance against Belgium. Despite Howard having let in two goals, Silver argued that when you contextualize his performance, it was the best. Consider what he faced: 16 shots on goal. If you ask, across all matches, all goalies, all teams, when someone faces 16 shots on goal, how many goals would you expect them to allow, the answer is four and a half. What did Howard allow? Only two. So his performance actually gained two and a half goals relative to expectation. That's a very different take on his performance than "he let in two goals," but it's a more contextually appropriate take. Only by considering the context of his performance can we understand how good it actually was, and once we do that, we can differentiate Howard from the other good goalkeeping performances we saw in the World Cup.

In general, this is what we're pushing you to do. It's going to look different in each firm and each industry, but you've got to find ways to make the performance evaluations, really the employee evaluations, apples to apples. You've got to find ways to level the playing field. The most general way to think about this is performance relative to expectations: what does the performance look like relative to what you would have expected? Expectations might be driven by the person's team, by the product he or she is in charge of, by the industry they're participating in, by the broader economy, by the person's boss. All of these are factors that can, in some circumstances, make a big difference in how someone performs, and you've got to norm for that, control for that in some way. To compare person A working for one boss to person B working for a different boss, you've got to somehow adjust for each boss's effect on their performance. This is context. This is something we know people don't do very well, and it's critical to good talent analytics.
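To make the "performance relative to expectations" idea concrete, here is a minimal sketch in Python. The goalie numbers come straight from the example above; the employee names, bosses, and sales figures in the second part are hypothetical, made up purely for illustration, and a real analysis would need a more careful model of what actually drives expectations in your firm.

```python
import pandas as pd

# Performance relative to expectation, using the goalie numbers from the
# lecture: roughly 4.5 expected goals allowed on 16 shots, 2 actually allowed.
expected_allowed = 4.5
actual_allowed = 2
performance_vs_expectation = expected_allowed - actual_allowed
print(performance_vs_expectation)  # +2.5 goals better than expected

# Norming within a context: compare each employee to the average of everyone
# who shared the same situation (same boss, same "ballpark").
# The employees, bosses, and sales figures below are hypothetical.
df = pd.DataFrame({
    "employee": ["A", "B", "C", "D"],
    "boss":     ["X", "X", "Y", "Y"],
    "sales":    [120, 100, 80, 95],
})

# Raw sales flatter A and B, but boss X's whole group sells more.
# Demeaning within each boss puts everyone on a level playing field.
df["sales_vs_context"] = df["sales"] - df.groupby("boss")["sales"].transform("mean")
print(df)
```

In this toy example, employee D comes out ahead of expectations under a tougher situation while B falls short under an easier one, even though B's raw sales are higher. That is exactly the Doerr-versus-Gordon pattern: the raw numbers and the contextualized numbers can point in different directions, and it's the contextualized ones you want for talent decisions.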