So what measures should we be employing? This is something you should plan ahead: you should have established which measures you will use, and the feasibility of collecting them, before anything starts. A general question is: what are we trying to measure? Some things related to patient safety that we might want to measure are errors, adverse events, and, perhaps most pertinent here, safety targets. Errors are, per the Institute of Medicine (IOM), the failure of planned actions to be completed as intended or the use of a wrong plan to achieve an aim. These include latent errors, that is, defects that reside in the system, such as poor design or poor staffing, and active errors, which are errors made by frontline healthcare staff, such as dosage errors or the incorrect performance of a task, which might be relevant to our lecture today. Adverse events are harm caused by healthcare, and these, of course, are the outcomes that we might be interested in. Safety targets include medication errors, healthcare-acquired infections, surgical complications, device complications, patient identification errors, or even death. Some of these are processes; some are outcomes. We might want to measure any of them, depending on the specific project we are dealing with, trying to fix, and hoping to sustain.

There are at least four basic methods of collecting data: observation; self-report, which in our case might mean interviews or questionnaires, often of healthcare workers and sometimes of patients; testing; and physical evidence, which in most cases involves review of documents or other records. Many documents today, of course, are no longer on paper but electronic.

There are a number of measurement methods we might apply; some are prospective and some are retrospective. Prospective methods include direct observation of patient care, which is sometimes used in our setting; cohort studies, perhaps a bit formal for our purpose; and clinical surveillance, which may be ideal and which we will discuss in a moment. Retrospective methods include record review, including charts and electronic medical records, and administrative claims analysis; these can be helpful, but they're a bit post hoc for our purposes. The same is true for malpractice claims analysis. Morbidity and mortality (M&M) conference and autopsy results, and incident reporting systems, can occasionally be helpful here as well.

One of my favorite papers on measuring errors is from Eric Thomas and Laura Petersen, published in the Journal of General Internal Medicine in 2003. This was really more of a think piece and review, but they proposed that different kinds of errors are best measured by different specific methods: latent errors, active errors, and adverse events might each be best captured by different measures. Adverse events can be difficult to capture, depending on how frequent they are; some kind of active clinical monitoring may be the best way to find them. Active errors, again, don't happen all that often; direct observation might be the most useful for certain types of active errors. Latent errors include problems with management, organization, and funding; for these, one can look at chart review, at administrative data, and at other kinds of data, including perhaps incident reporting systems, even malpractice claims or M&M conferences, where many different parties weigh in, provide their perspective, and describe what happened.
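To make that pairing concrete, here is a minimal sketch that encodes the rough mapping just described, from error type to the measurement methods the lecture suggests are best suited to it. This is purely illustrative: the structure and names are my own assumptions, paraphrasing the lecture rather than reproducing anything from the Thomas and Petersen paper itself.

```python
# Illustrative only: a rough encoding of the pairing described above.
# The method lists paraphrase the lecture, not the original paper.

PREFERRED_METHODS = {
    "adverse_event": ["clinical surveillance", "active clinical monitoring"],
    "active_error": ["direct observation"],
    "latent_error": [
        "chart/record review",
        "administrative data analysis",
        "incident reporting systems",
        "malpractice claims analysis",
        "M&M conference review",
    ],
}

def suggest_methods(error_type: str) -> list[str]:
    """Return candidate measurement methods for a given error type."""
    return PREFERRED_METHODS.get(error_type, [])

print(suggest_methods("latent_error"))
```

The point of the structure is simply that the choice of measure follows from the kind of error you are trying to detect, which is the question the next part of the lecture takes up.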
So we want to think about which of these methods might be most appropriate for the project at hand. Direct observation is great for active errors, for which data might otherwise be ephemeral and never available. Direct observation can be quite precise; however, it's labor-intensive and expensive. People need to be trained in how to do it, and this takes time. Sometimes there are so many things happening that the observer experiences information overload and can't capture everything. There may be Hawthorne effects; that is, the presence of observers may change what actually happens in front of them. There may be hindsight bias: if observers do not record in real time and then see that an outcome has occurred, they may have selective recall. And direct observation doesn't tell you very much about latent errors, those things at the blunt end of the system.

Cohort studies are a bit impractical for much of what we're talking about, unless those cohorts can be assembled unobtrusively, perhaps using clinical data. And this is where clinical surveillance comes in. Clinical surveillance builds data collection into the clinical workflow. One of the best examples might be anesthesiologists, who sit in surgical cases and record vital signs and other key pieces of information at very regular intervals, as they occur in the course of healthcare. Clinical surveillance can be very accurate and precise, and can also capture adverse events and what happens around them. It can be a good way to test the effectiveness of an intervention to decrease a specific adverse event, as long as that adverse event occurs with some frequency. And it can, or ought to, be part of the workflow. However, surveillance is again expensive, and again not so good at detecting latent errors.

Chart review, or I should perhaps say record review, uses readily available data. It is commonly used, and methods to review electronic records are evolving and developing. Judgments from the record about whether or not an adverse event happened may not be reliable; the clinician who was noting events may not have seen them and may be somewhat biased in reporting them. Having individual clinicians review records is expensive, because the time of those people is expensive. Records may be incomplete, key bits of information may be missing, or entire records may be missing. And, again, there's often hindsight bias: if someone reviewing a record knows what the outcome was, they may see things, and judge them, differently than they would otherwise.

Provider surveys can be really good for latent errors. We do not do this as much as we survey patients, but frontline clinicians know a lot and can tell you a lot about their own workplaces and what they do. These data also might otherwise be unavailable, for example data about the processes that are performed every day and how well they are performed. These surveys can capture quite a few different dimensions, and they can cover a fairly comprehensive range of factors or events. Here again, there's hindsight bias: if there was a bad outcome, providers might be more likely to respond that the care was bad. And for a provider survey to be valid, you need a good, or fairly complete, response rate in order to be able to say that the responses are representative of what is actually being observed by the group of providers; a brief sketch of that check follows below. So those are the advantages and disadvantages of the different measurement strategies and kinds of measures.
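As a tiny worked example of that last point about response rates, here is a minimal sketch of checking whether enough surveyed providers responded to treat the results as representative. The numbers and the 60% cutoff are assumptions made up for illustration; no standard threshold is implied by the lecture.

```python
# Illustrative sketch: check a provider survey's response rate before
# treating the responses as representative of the provider group.
# The 60% threshold and the counts below are assumptions for illustration.

def response_rate(responses: int, surveyed: int) -> float:
    """Fraction of surveyed providers who responded."""
    if surveyed == 0:
        raise ValueError("no providers were surveyed")
    return responses / surveyed

rate = response_rate(responses=42, surveyed=60)  # hypothetical counts
if rate >= 0.60:  # assumed cutoff; choose one appropriate to your project
    print(f"Response rate {rate:.0%}: reasonable to treat as representative")
else:
    print(f"Response rate {rate:.0%}: too low to claim representativeness")
```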
One question is: which of these measures is the best fit for the task you are trying to improve and ultimately sustain? Availability is crucial: available measures, which can be collected without additional cost or effort, have a much better chance of being collected in the first place, and a much greater chance of being collected in a sustained way.