The next topic addresses the Swiss cheese model. The Swiss cheese model was originally proposed by James Reason, who authored the book Human Error, published in 1990. It's a very important book that describes the mechanisms of human error, so if you get a chance, check it out. In that book, he talked about the Swiss cheese model of error. I want to go through it carefully, because quite a few safety folks don't really understand it.

Each slice of Swiss cheese represents a barrier: a pharmacist, a physician, a nurse, an information technology system, automated decision support. Each of these creates a potential barrier that can intercept a propagating problem. I'll give you an example in just a second. Now, those barriers have flaws in them, represented by the holes. In a typical system, there are many, many barriers that try to trap errors as they occur. However, each barrier can have multiple holes that allow an error trajectory to progress through it, and if we are optimally unlucky, the trajectory can reach the patient.

So let's take an example, and I'm first going to use reductionist thinking. In reductionist thinking, we focus only on the error itself, and not on all the interconnected essential parts. In this example, a nurse administers the wrong medicine. We focus just on that step, and we tell the nurse, "Be more careful next time." That very commonly is our intervention with reductionist thinking. Now, being careful is important, but it misses the point of an overall systems fix. So let's consider that same problem with holistic thinking. In holistic thinking, the step of nurse administration is viewed as part of a larger whole.
It's part of the medication use system. So instead of just focusing on that last step and telling the nurse to be more careful, we should view the entire system more holistically. Let's do that.

Viewed in this way, our error starts with a prescriber ordering the wrong thing because of a knowledge deficit. Next, the computer wasn't updated, because the drug that was ordered had only recently been added to the formulary; so the automated decision support, the alerts warning that the drug ordered might be wrong, failed to fire because we hadn't yet updated the computer. Next, because of a staffing problem, the pharmacist was overworked and didn't have time to spot the error. And lastly, a staffing problem and a facility design that increased the distractions the nurse was exposed to led the nurse to make that final error. Viewed in that way, we can think of many more ways in which we might address this error sequence to improve the system.

I'd also like to make another important point. One of the criticisms of the Swiss cheese model is that it suggests everything is linear. In fact, there are a lot of loops. For instance, it may be that the nurse thought the dose wasn't quite right, looped back around, and called the pharmacist. Because the pharmacist misunderstood the question, the pharmacist said, "Yep, that's okay," and the trajectory looped back to the nurse. There are also other sequences of Swiss cheese: maybe an earlier error progression led to the prescriber's knowledge deficit, maybe the computer update process could have been done better, maybe a sequence of Swiss cheese errors led to the poor staffing at the level of the pharmacist or the nurse. So this is a very nice model for conceptualizing that many steps are involved in an error, and that you usually don't fix an error only where it appears.
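The chain of barriers just described can be sketched numerically. In this minimal sketch, the barrier names and per-barrier interception probabilities are illustrative assumptions I've made up for the example, not figures from the lecture:

```python
# A minimal sketch of the Swiss cheese model applied to the medication use
# system. Barrier names and interception probabilities are illustrative
# assumptions, not data from the lecture.
barriers = [
    ("prescriber",        0.90),  # hole: knowledge deficit
    ("decision support",  0.95),  # hole: formulary not updated, alert never fires
    ("pharmacist review", 0.97),  # hole: overworked, error not spotted
    ("nurse final check", 0.98),  # hole: distractions at administration
]

# An error reaches the patient only if it slips through a hole in EVERY
# slice, so the failure probabilities of the barriers multiply together.
p_reach_patient = 1.0
for name, p_intercept in barriers:
    p_reach_patient *= (1.0 - p_intercept)

for name, p_intercept in barriers:
    print(f"{name:17s} intercepts {p_intercept:.0%} of error trajectories")
print(f"P(error reaches patient) = {p_reach_patient:.2e}")
```

Note how the multiplication works in both directions: weakening any single slice (say, staffing problems that halve the pharmacist's interception rate) propagates straight through to the patient, which is why a systems fix anywhere along the chain helps.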
As Russell Ackoff taught, we can typically fix a system somewhere other than where the final error manifests. So with holistic thinking, we can start thinking of fixes involving computer maintenance, staffing, distractions, and staff education. We have many, many more potential areas we might look at to improve the overall system.

Before I leave the Swiss cheese model, I want to talk about all of those holes you see in the Swiss cheese and define how they act. First, let's take a look at the holes that haven't yet contributed to an actual error. These are called latent flaws or latent hazards. For example, when I walk down a hospital corridor and there's a little puddle of water, nothing bad has happened yet. But five minutes from now, a patient walking down the hall might slip on that water and break a hip. That's an example of a latent flaw: the puddle of water is a hazard, but it hasn't yet caused a problem.

Now, the holes that actually did allow a real error through are called active errors. As I've defined before, the Institute of Medicine's definition of an error is the failure of a planned action to be completed as intended (an error of execution) or the use of the wrong plan to achieve an aim (an error of planning). Active errors may or may not reach the patient. In this example, we intercepted the error: the first two slices of Swiss cheese allowed the error trajectory to progress, but the third slice blocked it, so we call that a near miss. There were two errors, but the final error trajectory did not reach the patient.
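That near-miss logic, where a trajectory passes some barriers but is blocked before reaching the patient, can be made concrete with a small sketch. The barrier names here are hypothetical, chosen to mirror the lecture's example:

```python
from dataclasses import dataclass

@dataclass
class Barrier:
    """One slice of Swiss cheese in an error trajectory (hypothetical names)."""
    name: str
    blocked: bool  # True if this slice stopped the trajectory here

def classify(trajectory: list) -> str:
    """Return 'near miss' if any barrier blocked the error, else 'reached patient'.

    Every barrier the trajectory passed before the block represents an
    active error; a block anywhere turns the sequence into a near miss.
    """
    for barrier in trajectory:
        if barrier.blocked:
            return f"near miss (intercepted by {barrier.name})"
    return "reached patient"

# The lecture's example: two active errors, then the third barrier holds.
example = [
    Barrier("prescriber", blocked=False),        # active error: wrong order
    Barrier("decision support", blocked=False),  # active error: alert failed to fire
    Barrier("pharmacist review", blocked=True),  # barrier held: error intercepted
]
print(classify(example))  # prints: near miss (intercepted by pharmacist review)
```

A latent flaw, by contrast, would be a hole in a slice that no trajectory has passed through yet, so it never appears in a `classify` result until an error actually tests it.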