[MUSIC] I'm Dave Aron, and I'm in my 38th postgraduate year of training, and I've done a fair amount of work in improving quality, measuring quality, and improving safety. Over ten years ago the Institute of Medicine released a report called To Err Is Human, which it hoped would really revolutionize the way we thought about the provision of health care by identifying some of the harms that health care produces. And it did spur a lot of things, but in point of fact I think the progress has been remarkably small considering the amount of attention and the amount of investment. The problem in patient safety is that while we have improved incrementally, we have kept thinking in pretty much the same mental models that we always have. It used to be that patient safety was, you know, applied science; that was the most generous view of it. Now it is viewed as a legitimate topic for research, which is a fancy way of saying that there's funding for it if you're a researcher. Unfortunately, this discipline has developed like many other disciplines, with its own language, its own secret decoder ring, its own journals, and its own epistemology, which, despite many protestations to the contrary, has remained firmly fixed in the positivist view that underlies pretty much all of science, at least as it is currently practiced.

The problem is that we don't think about safety as an integral part of the system. We've talked about improving safety culture, where we become preoccupied with the potential for harm, so we think about things in advance and defer to expertise as opposed to rank. We've thought about introducing some very effective incremental improvements, like checklists. And there have been rather wonderful improvements in very specific areas of patient safety, notably prevention of deep venous thrombosis and prevention of line infections, but we've also seen a lot of things promised to improve patient safety where those promises have not been realized. What we haven't thought about, and what is absolutely critical in patient safety, is the interdependencies.

Our model of error has improved a bit. It used to be purely a matter of blaming the last step in the process. We had a sequential model of error: A led to B, which led to C. You could look at an adverse event and figure out exactly what happened. Unfortunately, this is subject to rather severe hindsight bias. Another model, which made this a little more complicated, if not absolutely complex, is the Swiss cheese model of Professor James Reason. This is a model in which an adverse event is conceptualized as having a trigger somewhere that penetrates all the various barriers a system has put into place to prevent that trigger from propagating into an adverse event, with the last step in the chain usually being a human error (a minimal sketch of this idea appears below). We think about underlying or latent causes of error as problems in the system, like sound-alike, look-alike drugs, that set up a human who is, by their very nature, imperfect after all: “errare humanum est”, or “to err is human”.

There are newer models that look at this more from a complex systems perspective. And in a complex systems perspective, the epistemology and the ontology, that is to say, how you know what you know (the epistemology) and what is the nature of the world (the ontology), are somewhat different from the typical positivist view.
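To make the Swiss cheese picture a bit more concrete, here is a minimal, purely illustrative sketch in Python. It is not from the lecture or from Reason's own work: the barrier names, the “hole” probabilities, and the assumption that barriers fail independently are all hypothetical, chosen only to show how layered, imperfect defenses change the odds that a trigger becomes an adverse event.

```python
# Toy Monte Carlo sketch of Reason's Swiss cheese model -- illustrative only.
# Assumptions (not from the lecture): barriers fail independently, and the
# "hole" probabilities below are invented for illustration.
import random

def adverse_event_occurs(hole_probabilities, rng):
    """A trigger becomes an adverse event only if it slips through every barrier."""
    return all(rng.random() < p for p in hole_probabilities)

def event_rate(hole_probabilities, n_triggers=100_000, seed=42):
    """Estimate the fraction of triggers that propagate past all barriers."""
    rng = random.Random(seed)
    events = sum(adverse_event_occurs(hole_probabilities, rng) for _ in range(n_triggers))
    return events / n_triggers

# Three hypothetical barriers (say, an order check, a pharmacy review, a bedside scan),
# each with a made-up 10% chance of letting the trigger through.
print(event_rate([0.10, 0.10, 0.10]))  # roughly 0.001: most triggers are caught somewhere
print(event_rate([0.10, 0.10]))        # remove one barrier: roughly 0.01, tenfold worse
```

The limitation of this toy is also the point of the rest of the discussion: real barriers are interdependent (the same staffing shortage or software change can open holes in several layers at once), so the independence assumption, and the tidy sequential picture behind it, is exactly what a complex systems view calls into question.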
There are different schools of complexity, but probably the most popular one has much more of a pluralist view of things, which allows for different kinds of knowledge. Sometimes complexity is defined by the characteristics of a complex system, not the least of which are interdependencies, the ability to adapt, and multiple agents, none of which knows how the system as a whole operates, and yet some macro behavior results from the micro behavior. One might think about life as the classic emergent phenomenon: you can take all the parts of a cell and mix them together, and you don't get life. There is something about the way the parts interact, and those interactions are not readily predictable (a toy illustration appears at the end of this segment). They're also nonlinear, which means they are not reversible either.

We need to think of safety as a property, as opposed to an “it”, a something. It's more a characteristic of the way a system operates, and there are ways to organize a system so as to improve that characteristic. Our understanding of systems can be improved by looking at these phenomena and these systems through different lenses, different mental models. That's why a quality improvement project, whether it aims to improve safety or the performance of any system, actually requires different disciplines. The idea that there's one discipline that does patient safety, that there is a patient safety office, a silo, that they are the patient safety experts and, you know, the rest of us just listen to them, just doesn't work. So as you plan your improvement efforts, think very carefully about including people with different backgrounds, different views, different disciplines, and make your assumptions, your taken-for-granted statements, explicit, because often we are not aware of what our mental models are; we just kind of assume things. It's very important to make those explicit and then question them, and I think that will allow us to make lots of improvements.

One of the reasons is that, because safety is a characteristic, more of a phenomenon than an “it”, the critical importance of interdependence produces something that is uncomfortable for many scientists and many traditional researchers, and that is context dependence. So the issue is not whether something works somewhere on average, but rather what works, under what circumstances, when, and for whom, and those kinds of questions are certainly not answered by our usual methods, and I suspect they may not even be answerable. Randomized controlled trials, which are held up as, you know, the gold standard for evidence, actually try to balance out context, but the contextual factors, the interdependencies, are so great in number that it is really not possible to balance them all out, at least in my view.

Where does this leave us? Quality improvement, improving patient safety, is not simple. It's not even complicated. It's complex. I wish you the best in thinking about it, and in thinking about who the beneficiaries of this improvement in safety are. It's not only the patients, it's us as well.
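To return for a moment to the point about macro behavior emerging from micro behavior, here is a small toy sketch of my own. It is not a model of anything clinical; the ring layout and the local majority rule are arbitrary choices, used only to show agents following a purely local rule producing a system-wide pattern that no single agent knows about.

```python
# Toy illustration (hypothetical, not from the lecture) of emergence:
# agents on a ring each follow a purely local rule, yet stable system-wide
# blocks appear that no individual agent "knows about."
import random

def step(states):
    """Each agent adopts the majority state of itself and its two neighbors."""
    n = len(states)
    return [1 if states[(i - 1) % n] + states[i] + states[(i + 1) % n] >= 2 else 0
            for i in range(n)]

def run(n_agents=60, n_steps=10, seed=1):
    rng = random.Random(seed)
    states = [rng.randint(0, 1) for _ in range(n_agents)]
    for _ in range(n_steps + 1):
        print("".join("#" if s else "." for s in states))
        states = step(states)

run()  # random noise settles into stable contiguous blocks -- a simple emergent pattern
```

Nothing in the local rule mentions “blocks”, and no agent has a picture of the whole ring, yet a stable pattern appears at the scale of the whole system. That is the limited sense in which macro behavior emerges from micro behavior, and it is why knowing each part in isolation is not the same as knowing the system.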
Safety designed into the system in that way allows us to practice in difficult circumstances and to do things that may be emotionally difficult; when we don't have to worry as much about an error, because we've designed a system that will detect errors and prevent them from propagating, or prevent them from happening at all, it makes life a lot easier. [MUSIC]