[MUSIC] So today we'll tackle expert knowledge. As we said before, there's all this knowledge in the expert's head and we have to get it out. But first, what do you need to accomplish? What is the knowledge we need to end up with, kind of above the line? There are four types. There's domain knowledge, so we could be talking about the medical domain, the financial domain, the management domain, or, if we're talking to patients as the expert, the domain of what it's like to be a patient. There's the workflow: how do they do things? There's the management: how do they accomplish things? And then the tradeoffs: how do they balance the different and competing demands they need to deal with? So those are generally four different areas, with big overlaps among them, but it gives you a sense of the types. It's not just content. In broad strokes, there is knowing what and knowing how. And within knowing what, there are two types of knowledge that we'll be talking about. One is what is called System 1 and the other is System 2, very creative names. I can't give you a good way to remember them, but System 1 is fast and frugal. It's more closely related to implicit knowledge. It's gut thinking, and we'll say a little more about it in a minute. System 2 I like to call slow and tortured, as opposed to fast and frugal. This is what's called rational decision making, with cost-benefit tradeoffs and so forth. You can see the books related to these posted there. And then this knowing how is, again, how does somebody go about doing things? That's kind of a third type of knowledge. So I said that that first area is implicit knowledge. There are two perspectives on the implicit knowledge that's in your head. One approach says, let's compare you to a computer, to a rational engine. And it's been shown that, compared to a rational engine, we tend to make mistakes. One class of mistakes is called biases.
Now, these are not racial biases or other social biases. They are, more simply, systematic errors of thought. A classic example is as follows. My son loves detective novels. He loves Agatha Christie, and he was telling me about one story that hinges on thallium poisoning. Now, I've never seen thallium poisoning. You've probably never seen thallium poisoning; it's very rare. But let's say I see a case of thallium poisoning, which presents as flu symptoms and then some hair loss. So I see this patient, I'm not sure what it is, I work him up, and it's thallium poisoning. A week later somebody comes in with flu-like symptoms. If I say this patient has thallium poisoning, I'm now suffering from what's called the availability bias, because in fact the patient most likely has the flu, not thallium poisoning. The world is complicated, and in order to deal with the complications we have these things called heuristics: general rules of thumb for dealing with the world. An example is the expression in medicine, if you hear hoofbeats outside your door, think horses, not zebras. Meaning that in most places in the United States we don't have zebras running around, so hoofbeats probably mean horses, not zebras. That's a heuristic for dealing with the availability bias. Pattern recognition is another implicit sort of knowledge. Now that I've told you what I said about thallium poisoning, the next time you hear about flu symptoms and hair loss, you're going to think thallium poisoning, because that's the pattern. It may still be that something else is more likely than thallium poisoning. But pattern recognition is a tool that we've developed over millennia, if not longer. And so people like Gary Klein say this focus on cognition as error is not fair, because human beings are actually very successful at making decisions, probably more successful than many other species.
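The availability bias in the thallium story is really a base-rate problem, and it can be made concrete with Bayes' rule. Here is a minimal sketch; all the numbers are made up purely for illustration (the priors and symptom probabilities are assumptions, not clinical data):

```python
# Illustrative numbers only: why base rates defeat the availability bias.
# A patient presents with flu-like symptoms; compare two explanations.

def posterior(prior, likelihood, other_prior, other_likelihood):
    """Bayes' rule over two competing hypotheses."""
    evidence = prior * likelihood + other_prior * other_likelihood
    return prior * likelihood / evidence

# Assumed, made-up priors: flu is common, thallium poisoning is very rare.
p_flu, p_thallium = 0.05, 0.000001
# Both conditions produce flu-like symptoms readily (assumed).
p_sym_given_flu, p_sym_given_thallium = 0.9, 0.9

p_flu_post = posterior(p_flu, p_sym_given_flu, p_thallium, p_sym_given_thallium)
p_thal_post = posterior(p_thallium, p_sym_given_thallium, p_flu, p_sym_given_flu)

print(round(p_flu_post, 6))   # overwhelmingly the flu
print(round(p_thal_post, 6))  # thallium stays vanishingly unlikely
```

Even though both diseases explain the symptoms equally well, the tiny prior keeps thallium poisoning vanishingly unlikely; that is exactly what the horses-not-zebras heuristic encodes.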
So clearly, given the way our brains have developed to make decisions, we shouldn't just disrespect them because they aren't like machines; they have their own power. Even emotions in that regard are powerful, because they help direct our attention, they tell us what's important, and they influence how we think and what we do. I won't get into political and social issues, but clearly those are implicit too. That's where you do get things like racial biases and other biases from unconscious activity. There are some people who believe that much of what we call System 1 and System 2 are really just ways we store memory: gist memory, as in "I get the gist of the problem," sounds like System 1, and verbatim memory, "I get all the details," is kind of System 2. But let me turn my attention to System 2, because you are dealing with a computer. You want the computer to make rational decisions. You want to be able to push the limits of rational decisions as far as you can. So let me give you a framework, or a heuristic if you will, for thinking about rational decision making; I hope you get the irony there. And here, the framework is called "you shouldn't." Remember I always talked about IT as "can" and informatics as "should." One reason is that I have this framework in the back of my mind for how you should make decisions. Here are the eight components of a decision problem. If you know them, you understand your problem. If you don't know all eight, then you don't understand your problem. So the first component, the "you," is who are you talking about? When you say you should do something, whose perspective are you taking? Is it the patient's, is it the doctor's, is it the system's? S is the structure, how you are structuring your case. Are you using a formal model, a mathematical model? Are you using a logical model? How are you constructing that case? H is who's coming into the model, and what is the context. O is the outcomes that that perspective cares about.
U is the uncertainties, and to make a long story short, they're probabilities. They can be prior probabilities, like zebras being uncommon in the United States. They can be causal probabilities: how likely is this to cause that? And there can be likelihood ratios: how powerful is this evidence at updating my belief about something? L is the list of actions: what actions can I take, what are my alternatives? D is the desires and tradeoffs that I referred to before. And T is the time horizon. So in the emergency room your time horizon could be minutes to hours; in an intensive care unit, it's hours, maybe days; on the floors, it could be hours to days as well; and in outpatient care, it could be days to years if you're talking about chronic disease. Now, I already showed you a structure, a decision tree, you may recall. It had to do with whether or not you should introduce an intervention or decision support. So this is an example of a structure, and I want to show you how it embodies those eight components. Ironically, the one thing that's missing, which is the most important, is whose perspective you are taking, and that we kind of have to write off on the side. The tree itself, the decision analysis tree, is the structure. The H is who's coming into the model, the context. So here it says use decision support or not, and left unstated right now is what type of hospital you are dealing with, what the attributes of the hospital are, and a whole bunch of other things that should go over there. The outcomes are listed over there, very simply. In this case they're bad outcomes and good outcomes; they could be death, they could be morbidity, whatever the perspective cares about. The uncertainties are how likely those things are to happen. The alternatives you can see listed over there: deploy or not deploy. And the desires I don't list explicitly, but they could be a tradeoff between, let's say, money and stroke, or money and loss of time from work.
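The deploy-or-not tree can be folded back into numbers as an expected-utility calculation, which is what makes System 2 something a computer can do. This is a minimal sketch; the probabilities and outcome utilities below are hypothetical placeholders, not values from the lecture, and a real analysis would elicit them from experts and the literature:

```python
# A minimal sketch of the deploy-or-not decision tree as an expected-utility
# calculation. All probabilities and utilities here are made up for illustration.

def expected_utility(branches):
    """branches: list of (probability, utility) pairs for one action."""
    return sum(p * u for p, u in branches)

# Hypothetical numbers: decision support shifts probability toward good outcomes.
deploy     = [(0.85, 1.0), (0.15, 0.0)]   # (P(good outcome), utility of good), (P(bad), utility of bad)
not_deploy = [(0.70, 1.0), (0.30, 0.0)]

# Pick the action whose branches average out to the higher utility.
best = max([("deploy", expected_utility(deploy)),
            ("not deploy", expected_utility(not_deploy))],
           key=lambda t: t[1])
print(best)  # the action with the higher expected utility
```

Note how the eight components show up: the branches are the uncertainties (U), the two actions are the list (L), the utilities encode the desires and tradeoffs (D), and the perspective (whose utilities these are) still has to be stated off to the side.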
Whatever it is that the outcome results in, and how I feel about it, would go under D. And the time horizon is not necessarily explicit, but that's kind of how much time passes going from left to right. So take our old friend, the bilirubin rule. You may recall we said, for the orange part on the graph, that if your transcutaneous bilirubin was over 7 but less than 20 and your age was less than 12 hours, then you're high risk. Now, what does high risk mean? Does high risk mean you observe? Does high risk mean that you should use bili lights? Or does high risk mean that you should do an exchange transfusion? Those are the alternatives. So let's see what the "you shouldn't" framework looks like for this case. The perspective we're taking is probably the patient's, the baby's, and maybe the parents' perspective. The structure I just showed you. The context: full-term newborns who have jaundice without a known reason for it. There are reasons for having jaundice that would probably take you out of this model; let's say you had an Rh baby. The outcomes: the brain damage of kernicterus, which is a bad thing. The uncertainties here are how likely different levels of bilirubin are to cause brain damage, and even how likely kernicterus is to cause specific sorts of morbidity down the road, in terms of cerebral palsy and so forth. The list of actions I just gave you: observe, lights, or exchange transfusion. And the tradeoffs: well, brain damage is bad, but an exchange transfusion can also cause damage, which I'm not going to list right now, and how do you balance those two types of damage? Then the time horizon: on the one hand it's hours and days acutely, but it's also the child's life for decades. These all go into it. Now, from a System 2 perspective, I would need to talk to the expert to establish each one of these things. I would need to get agreement on the perspective.
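The fragment of the bilirubin rule quoted above can be written out as code, which makes the gap between the risk label and the action visible. The thresholds come from the example in the talk; the mapping from "high risk" to an action is a hypothetical placeholder, since choosing among observe, lights, and exchange transfusion is exactly the tradeoff question the rule leaves open (none of this is clinical guidance):

```python
# Sketch of the transcript's bilirubin rule fragment. Thresholds are from the
# lecture example; the action mapping is a made-up placeholder, not guidance.

def risk_level(tcb_mg_dl, age_hours):
    """Transcutaneous bilirubin over 7 but under 20 at under 12 hours of age
    was called 'high risk' in the example; everything else is unclassified
    by this fragment of the rule."""
    if age_hours < 12 and 7 < tcb_mg_dl < 20:
        return "high"
    return "unclassified"

# Hypothetical mapping: the rule itself does not say which alternative
# (observe, bili lights, exchange transfusion) "high" should trigger.
ACTIONS = {"high": "bili lights", "unclassified": "observe"}

level = risk_level(tcb_mg_dl=10, age_hours=8)
print(level, "->", ACTIONS[level])
```

The point is that the `if` statement captures only the S and U pieces; the `ACTIONS` table is where the D (tradeoffs between kernicterus and transfusion damage) would have to be elicited from the experts.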
I would need to get agreement on the picture, the model. I would need to get agreement on who we are talking about. They might say, for instance, that Rh babies are not included, that we'll deal with them separately. We have to agree on what items we are really talking about. You can see that missing here is lawsuits. Well, lawsuits are the doctor's perspective, not the patient's perspective. The uncertainties I have to get from them as probabilities. The actions we've said; the tradeoffs between the two types of damage and the time horizon we'd have to deal with. The probabilities and the tradeoffs are hard. Try asking an expert, even an expert, how likely something is to happen. They feel very uncomfortable giving me a number without going to the literature or some other source. If I ask them how they balance death versus brain damage, or brain damage against damage from the exchange transfusion, again, they're very uncomfortable. So System 2 is a difficult thing to get from your experts. In our next talk, we'll be talking about non-System-2 ways of getting information from experts. But you've got to keep this System 2 thinking in the back of your mind, because perhaps you can map from what they tell you to this rational model of decision making.