>> And so, we know already that, in general, different tests are more versus less effective at predicting how a candidate is going to perform on the job. What people analytics allows us to do is go beyond these generalities and start thinking, for this specific job, what do we know about what makes people effective and what doesn't? So, in particular, with people analytics, what we're usually trying to do with hiring is take people's performance, how they do at their job, and try to understand what drives that. And so, on the one hand, we look at a variety of performance measures. And I think both Kade and Martine are talking a little bit about some of the challenges of using performance appraisals. We know that these are not great, but there's probably some information in there. And so, predicting who is likely to get rated as high versus low at least tells us who's going to be seen as fitting in in this organization. On top of that, there are other things that you might want to predict about who's going to be a high performer. So, increasingly in organizations, there's a lot of objective information. Obviously, if it's a sales job, you can think about who's going to have the most sales. In call centers, you now have huge amounts of data about things like people's average handle time, their absenteeism, their availability. We also have quality assurance scores and customer satisfaction surveys. And so, in some organizations you have a great deal of information about each individual on a more objective basis. There are other things we also might want to be able to predict about people. Attrition, for example, is a fairly hard measure. It tells you something about who fit and who didn't, and it can be important in its own right. So, there are some organizations that spend a lot of money training people. If that's the case, one of the important things you want to think about in your hiring is: is this person going to stay long enough for us to get a return on our investment?
You may also be interested in understanding which people are promotable. So, the idea is we say, okay, we want to know who's going to do well on these various measures, and we want to figure out, among the people who are applying, which are going to be the best bets. And so, what you do is you use those as the variables that you're trying to predict. And then, on the other side, you put a series of characteristics of the individual that you know at the time they're applying, and ask: which of these actually predicts performance? So, one of the things you might look at is the resume: their background, their characteristics, and so on. One of the things that Google, which has been a pioneer in this area, found out when they did this was that for years they had famously been asking everybody, from junior to senior people, for college transcripts and looking at their GPAs. When they actually sat down and looked at what predicted performance, they found that once people had been out of college for more than a couple of years, GPA had no value whatsoever as a predictor. And so, they said, okay, this is not something we should be screening on. Interestingly, an investment bank apparently did this recently, and in their case they found that GPA was predictive of performance. Which speaks, in part, to the value of doing this separately in different organizations, because what predicts performance is going to depend in part on the nature of the role. Another thing that you might look at is test scores. So, if you run a series of intelligence tests, personality tests, job knowledge tests, and so on, which of those are going to predict who performs well on the job? And then you can also look at interviews. So, in a structured interview, as I say, rather than just sitting down and getting to know somebody, what you're really trying to do is figure out where they score on various different attributes.
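The basic exercise here, putting performance measures on one side and application-time characteristics on the other, amounts to a regression. Here is a minimal sketch in Python; every variable, coefficient, and number is simulated purely for illustration (in this made-up data, test score and interview rating drive performance and GPA does not), not taken from Google or any real organization:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500  # hypothetical number of past hires with known outcomes

# Application-time predictors (all names and scales are illustrative).
gpa = rng.normal(3.2, 0.4, n)
test_score = rng.normal(50, 10, n)
interview = rng.normal(3.0, 1.0, n)

# Simulated performance: driven by test score and interview, not GPA.
performance = 0.5 * test_score + 2.0 * interview + rng.normal(0, 5, n)

# Ordinary least squares: which predictors carry weight?
X = np.column_stack([np.ones(n), gpa, test_score, interview])
coef, *_ = np.linalg.lstsq(X, performance, rcond=None)
print(dict(zip(["intercept", "gpa", "test_score", "interview"], coef.round(2))))
```

The estimated weight on GPA comes out near zero while test score and interview get substantial weights, which is the kind of result that would tell you to stop screening on GPA for this role.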
And so, in a structured interview, you should have a series of questions aimed at tapping into those attributes, where you can rate candidates high, medium, or low on each attribute. There's then the possibility of going back after a year or two and saying, okay, which of these questions and types of questions actually seem to predict whether or not people are going to do well on the job, and which don't? Again, another nice story that Google shared with the public: they were famous for a series of questions that basically asked people to think out of the box. These are the kind of, how many golf balls could you fit in a jumbo jet? How many call boxes are there in Manhattan? I remember when I was applying to consulting jobs about 20 years ago, somebody asked me how many ties are sold in Great Britain in the average year. I just thought, who knows? The idea of these is not that you know the answer, but rather that you try and think it through, and that they can kind of stump you and see how you respond to a question you haven't thought about before. Can you be creative? Can you structure an answer thoroughly? All those sorts of things. So, there's this nice idea behind it: we can really see how smart people are. Again, it turned out not to work at all. It was completely unpredictive of performance. And so, on the basis of that, they're trying to move away from these questions and persuade people not to ask them. So, you can figure out what kinds of questions work. You can also start to look at which interviewers have done a better job of predicting who's going to be a high performer and who isn't. And the idea is, on the basis of these results, to choose who interviews and even what kinds of tests to use. Some people have gone even further. So, JetBlue told a very nice story about how they are using these analytics. When they hire flight attendants, one of the big questions was: is it more important that they're friendly, or that they're helpful?
Because you can have people who smile a lot and say nice things, and then you can have people who will actually lift your bag, help you get it up, look out for people who need assistance, and that sort of thing. And so, it really wasn't clear to them which one was better. Not clear to me either, although my colleagues will tell you that on either basis I am unlikely to get a job at JetBlue. When they tried to figure this out, what they did was run a little experiment. They went and asked their customers to rate their flight attendants. Is this person friendly? Are they helpful? What do you think? And also to rate their overall customer experience: are they likely to recommend JetBlue to somebody else? And they looked to see whether the customer's rating of the airline was higher when the flight attendant was friendly or when they were helpful. On this basis, they discovered that it was actually helpfulness that was more valuable. And so that enabled them to further fine-tune their hiring. Once they knew what was really important in this job, then they could think about how to go out and actually screen for that. And so, the idea, then, is basically to take these predictors and see, based on what we know about the people in our organization who are performing well, which of these predictors actually tell us something about what people's performance is likely to be. In doing this, there are a few things to bear in mind. Okay? First, and most obviously, you want to be comparing apples with apples. So, if you're looking at differences in performance, you want to make sure that people are doing the same work in the same place, or at the same level, all of those sorts of things. You really want to hold that constant, because otherwise it's quite possible that what you're really seeing is differences in what people are supposed to be doing, rather than differences in their attributes driving performance. A pernicious version of this is time in the job; you want to be very wary of it.
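A JetBlue-style comparison of friendliness versus helpfulness can be sketched by correlating each rating with the overall experience rating. This is simulated survey data, not JetBlue's; the numbers are deliberately constructed so that helpfulness drives the overall rating, just to show the mechanics of the comparison:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000  # hypothetical number of customer survey responses

# Illustrative 1-5 style ratings of each flight attendant.
friendly = rng.normal(4.0, 0.8, n)
helpful = rng.normal(3.8, 0.9, n)

# Simulated overall experience: mostly driven by helpfulness by construction.
overall = 0.2 * friendly + 0.8 * helpful + rng.normal(0, 0.5, n)

r_friendly = np.corrcoef(friendly, overall)[0, 1]
r_helpful = np.corrcoef(helpful, overall)[0, 1]
print(f"friendly vs overall: r={r_friendly:.2f}")
print(f"helpful  vs overall: r={r_helpful:.2f}")
```

Whichever rating correlates more strongly with the overall experience is the attribute you would then prioritize in screening.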
Generally, we expect that once people have been in the job for a moderate period of time, they're going to be better at what they're doing. So, are you really comparing people who are new against people with middling tenure, or are you comparing people who are all in the same job at the same time? The second thing that you want to be concerned about, and I'll get into this in more detail in a minute, is the idea of disentangling influences. Right? Often you have multiple different attributes of people. They vary in their education, their experience levels, what it was they were doing; or we see their friendliness, their helpfulness, all of these sorts of things. And the challenge is, if you just look at one of these variables, is it that variable that's predicting performance, or is it just highly correlated with something else? You want to disentangle those influences to make sure you really are getting at the attributes that drive performance. One of the things you want to do there as well is probably just apply a bit of common sense. If you look at any data set, you're going to see a bunch of different patterns. Some of them are real: you would see them in any similar data set. Some of them are not. There are various statistical techniques for trying to figure out which are the real ones, but applying a modicum of common sense is also important. Does this make sense? Can we figure out why this would be? Those are questions worth asking. The final, and trickier, piece is worth thinking about: what we're doing here is taking the people we hired and seeing how what we knew about them before we hired them helps predict how they're performing today. Well, that's not the same sample as our general applicant pool. Okay? Among the people we have now, we've already weeded out a bunch. And so, it's not exactly the same as predicting who, out of the general applicants, will perform well.
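The earlier point about disentangling correlated influences can be shown with a small simulation: a variable can look predictive on its own purely because it travels with the thing that actually matters. All data and names here are made up for illustration; in this construction, only the test score drives performance:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 2000  # hypothetical number of employees

# Education and test score are highly correlated by construction,
# but only the test score actually drives simulated performance.
education = rng.normal(16, 2, n)
test_score = 2 * education + rng.normal(0, 3, n)
performance = 0.5 * test_score + rng.normal(0, 4, n)

# Looked at alone, education appears to predict performance...
r_edu = np.corrcoef(education, performance)[0, 1]

# ...but a regression that holds test score constant shows education's
# apparent effect was just its correlation with test score.
X = np.column_stack([np.ones(n), education, test_score])
coef, *_ = np.linalg.lstsq(X, performance, rcond=None)
print(f"simple correlation of education with performance: {r_edu:.2f}")
print(f"education coefficient controlling for test score: {coef[1]:.2f}")
print(f"test score coefficient: {coef[2]:.2f}")
```

This is why looking at predictors one at a time can mislead you: the multiple regression is what separates the variable doing the work from its correlated passengers.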
That selection into our sample creates a number of statistical biases that I think, if you're really serious about this, you need to pay some attention to.
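One of those biases, often called range restriction, is easy to see in a simulation: because we only observe performance for the people who passed our screen, the predictor-performance correlation among hires understates the correlation among all applicants. The figures below are simulated for illustration only:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 10000  # hypothetical applicant pool

# Screening score for every applicant, and the performance they
# would show on the job (correlated with the score by construction).
score = rng.normal(0, 1, n)
performance = 0.6 * score + rng.normal(0, 0.8, n)

r_all = np.corrcoef(score, performance)[0, 1]

# But we only observe performance for those who passed the screen.
hired = score > 0.5
r_hired = np.corrcoef(score[hired], performance[hired])[0, 1]
print(f"all applicants: r={r_all:.2f}, hired only: r={r_hired:.2f}")
```

The correlation among hires comes out noticeably weaker than among all applicants, even though the underlying relationship is the same, which is why a predictor that looks weak in your hired sample may still be valuable at the screening stage.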