So, how do we elicit expert knowledge that's either implicit or System 1-ish? Here's what we're dealing with: the expert knowledge box in our framework. The first thing to recognize is that when you ask a user or client what they want the system to accomplish, you're already beginning the process of eliciting implicit knowledge. If the client says they want a surveillance system and you ask, "What do you mean by a surveillance system?", they start telling you. Hopefully you're doing active listening, because you're trying to hear the words they use, the concepts they use, the things they want to accomplish, and how they want this thing to work. All of that is implicit knowledge.

There are more formal approaches, called qualitative methods, for getting at what I just listed: the concepts, what's important, and the how. Some of this can be done by observation, especially if you want to see how things actually happen. Or we can look at what we like to call their information or knowledge artifacts: the notes they write down, the checklists they write for themselves, the posters they put up, the emails they write. Member checking is the act of going back to the expert and asking, "Did I understand you correctly?"

You can move from the qualitative approaches to the more quantitative through what's called a modified Delphi. You may recall that there was an oracle in ancient Greece, a woman on a mountaintop where, we're told, there were volcanic gases. People would come and ask a question, she would give an answer, and because she was high on those gases nobody really understood what she was talking about, but they viewed her as the expert, the oracle. So when you're getting information and knowledge from a group of people, you would like to get consensus, but you don't want to get it as a full group. You want to talk to them individually, get their ideas, if not estimates of things, separately, and ideally get them to sign off on whatever they ended up saying. As an example, let's say they tell you that anybody with a risk of death over one in 1,000 should go ahead and get treated. You come in and show that in this particular case the risk of death is one in 10,000, and therefore by their own rule the patient shouldn't get treated, and they say, "No, I don't mean that. I really mean one in 10,000; you said one in 1,000." Either way it's helpful to you: if they disagree, you can have a discussion. Why did you give me an estimate you now dispute? Why do you dispute the model? And you can work through the argument. These approaches are used for both System 1 and System 2 types of knowledge.

I already alluded to the fact that you can do individual interviews or group interviews; they have their pros and cons. With an individual interview you avoid groupthink: in a group, if you ask a question and one person says something, everybody else might just nod their heads and say, "Yeah, right," when they actually disagree but don't want to make the effort to disagree. What's good about a group interview is that you get the people talking amongst themselves, and they generate the knowledge and point out what the language and the concerns are. That's a powerful thing. And then you can do it without an interview at all, with a survey, either on paper or by computer.
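To make the modified Delphi idea a bit more concrete, here is a minimal sketch of one elicitation round, assuming each expert's estimate has already been collected individually. The function name, the consensus rule (relative spread around the median), and the example numbers are illustrative assumptions, not something from the lecture.

```python
# Minimal sketch of one round of a modified Delphi: estimates are elicited from
# each expert separately, summarized, and fed back for sign-off or revision.
from statistics import median

def delphi_round(estimates: dict[str, float], tolerance: float = 0.25) -> dict:
    """Summarize individually elicited estimates and check for rough consensus.

    estimates: expert name -> elicited probability (e.g., risk of death)
    tolerance: maximum relative spread around the median we treat as consensus
    """
    values = sorted(estimates.values())
    mid = median(values)
    spread = (values[-1] - values[0]) / mid if mid else float("inf")
    return {
        "median": mid,
        "spread": spread,
        "consensus": spread <= tolerance,
        # Each expert sees their own answer next to the group's median, so they
        # can sign off on it or revise it in the next round (member checking).
        "feedback": {name: {"yours": v, "group_median": mid}
                     for name, v in estimates.items()},
    }

# Three experts estimate the risk of death for the same scenario; the wide
# spread here would trigger a discussion and a second round.
round1 = delphi_round({"expert_a": 0.001, "expert_b": 0.0001, "expert_c": 0.0002})
print(round1["consensus"], round1["median"])  # False 0.0002
```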
These are all ways of getting information and knowledge from your experts, and whenever you're asking questions, whether in an interview or on paper, you can have open-ended questions like, "What's important to you?" You can have scenario-based questions, where you say, "Here's a case. What would you do in this case?" The challenge then is to choose a case, or a small number of cases, that actually cover the entire breadth of the field. And then you can do quantitative elicitation: what number would you attach to this? I won't go through all of those. Now, I'm giving you in one slide something that is taught over two full semesters at the School of Public Health, so if you feel you haven't learned a lot of the details, I'm sure there are courses you can take on that.

There are not a lot of computer-based elicitation tools, as far as I can find. There are ways that experts can type in their knowledge; BioPortal, for instance, which I mentioned before, is closely linked with a tool called Protege where experts can type in their knowledge. But in terms of opening a conversation, or asking "Here's a scenario, what would you do? What's important to you?", there's one from NASA, but I haven't found too many others.

Once you've done your interview and audio-taped what the experts are saying, you've got to transcribe it. You now want to find what concepts they care about and what they do with those concepts. The first step is to get what those concepts are. I'm hoping you're hearing already that this is related to what we called ontologies before. Last week we talked about the concepts that are important, and here's one way to get at those concepts. Sometimes for every domain we want to create an ontology, which is basically an agreed-upon vocabulary, if possible linked the way ontologies link. But if not, then it's just a vocabulary or taxonomy, and we call them ontologies because we like to be fancy. There are tools to help you see what concepts have been mentioned, what words have been used, and how they go together, so you can discern themes. Somebody says "badness," somebody says "morbidity." Well, they're really the same concept, right? They're just different ways of saying the same thing. Or are they? That's a bit of a challenge.

This bottom-up approach, taking the words the experts use and building them up into an ontology, is what we mean when we talk about grounded theory: I start off knowing essentially nothing about the domain and build it up. Another approach is theory-driven: I already know what's important. If I'm talking about pneumonia, I don't have to pretend that I don't know they care about symptoms and signs; I know they care about symptoms and signs. The question is which ones they care about, which ones they focus on, and how important those are. When dealing with pneumonia, pretty soon fever and cough are going to come up as concepts that we care about. That would be a theory-driven approach.
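As a rough illustration of that bottom-up coding step, here is a minimal sketch, assuming the interviews have already been transcribed to plain text and that a human coder supplies the synonym map as themes emerge. Every term and grouping below (for example, mapping "badness" to "morbidity") is made up for the example.

```python
# Minimal sketch of bottom-up concept coding: raw words from transcripts are
# mapped onto an agreed-upon vocabulary and their mentions are counted.
from collections import Counter

# The coder's judgment: different words that name the same underlying concept.
SYNONYMS = {
    "badness": "morbidity",
    "morbidity": "morbidity",
    "temperature": "fever",
    "fever": "fever",
    "cough": "cough",
}

def code_transcript(transcript: str) -> Counter:
    """Map raw words to concepts and count how often each concept is mentioned."""
    concepts = Counter()
    for word in transcript.lower().split():
        token = word.strip(".,;:!?\"'")
        if token in SYNONYMS:
            concepts[SYNONYMS[token]] += 1
    return concepts

print(code_transcript("Fever and cough matter most; badness overall is what I watch."))
# -> counts one mention each of 'fever', 'cough', and 'morbidity'
```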
How do you know you're done with qualitative elicitation? The main word we use, whether in qualitative research or in this sort of work done for building systems, is saturation. Saturation means I've asked the first person and they gave me a set of concepts. I've asked the second person; these are mostly the same concepts the first person gave, but now here are a couple of new ones. As I go along with my third, fourth, fifth person, I'm running out of new concepts. Very often you only need about five or six people to get saturation.

What a lot of people don't do is the next step, which says, "Okay, I've spoken to this class of people. Now let me talk to another class of people." For instance, if we're building decision support for pneumonia, we want to talk to the doctors, the nurses, and the pharmacists. So I'll talk to doctors, I'll get a bunch of concepts from them, and get to saturation. Now I go to the nurses. The nurses will have some concepts that the doctors didn't have, so I get a whole bunch of new concepts and the count goes up again, maybe not as high as when we started. Then again I talk to three, four, five, six nurses and I run out of new concepts. Now I go talk to the pharmacists, and the same thing happens, and finally I've exhausted the classes of people (there's a small sketch of this pattern at the end of this section). At this point, I feel pretty confident that I have a really rich vocabulary, if not ontology, of the things that the system as a whole cares about, whether it's doctors, nurses, or pharmacists.

Just to wrap up this section on implicit knowledge elicitation: clearly, it's great for getting the lay of the land. If you walk in not knowing anything about pneumonia, then before you do anything quantitative or formal or compute anything, you've got to know what the lay of the land is. It's a great way to find out what's important and what to focus on, and it's great for getting sign-off on explicit statements of expertise, whether they are formally represented in System 2 type thinking or not. As I alluded to before when talking about the story and what's acceptable, there's no better way to get at those kinds of issues than through scenarios and through talking about what's important.

So, the pros: it's great for domains that are not well defined, because you've got to figure out what's going on. It's great when experts are available, and it's great when the experts are cooperative; expert time is a very precious commodity. The cons are that it's resource-demanding: I've got to get the experts. It may also be too sensitive to the individual experts. If I talk to Hopkins-only people, well, that's great for a Hopkins solution. But if I now want to sell the knowledge I've gained from the Hopkins people to another institution, there may be Hopkins-specific things, like how we manage a disease, that I don't recognize are particular to Hopkins and not to other places. Or if you do it in the United States, it may not be transportable to more resource-constrained settings. And just as it's great when the domain is poorly defined, it's overkill when the domain is already exhaustively defined: I already know that fever and cough are important, so I don't need to go through a qualitative process.

So, we've spoken a little bit about System 2, which is what we want to aim for with a decision model at the back of our mind. Here we've gotten at the qualitative issues, and now we can move on.
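Finally, here is the sketch of the saturation pattern referred to above: a minimal way of tracking how many new concepts each successive interview contributes, across classes of interviewees. The concept sets, the interviewee classes, and any stopping rule you would apply to the resulting curve (for example, two interviews in a row with nothing new) are illustrative assumptions.

```python
# Minimal sketch of tracking saturation: each interview has already been coded
# into a set of concepts, and we count how many new concepts each one adds.

def saturation_curve(interviews: list[set[str]]) -> list[int]:
    """Return the number of new concepts contributed by each successive interview."""
    seen: set[str] = set()
    new_per_interview = []
    for concepts in interviews:
        new = concepts - seen
        new_per_interview.append(len(new))
        seen |= new
    return new_per_interview

# Two classes of interviewees: doctors first, then nurses. The counts taper off
# within each class, then jump when a new class introduces its own concepts.
doctors = [{"fever", "cough", "x-ray"}, {"fever", "antibiotics"}, {"cough", "x-ray"}]
nurses = [{"fever", "vital signs"}, {"vital signs", "fluids"}, {"fluids"}]
print(saturation_curve(doctors + nurses))  # [3, 1, 0, 1, 1, 0]
```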