Welcome back. Now we're in Section C. In this section we talk about minimizing bias in the included studies; specifically, we're going to be talking about information bias. Whether we're dealing with randomized trials or observational studies, information bias means getting information that may not be exactly true from the patient, from the doctor, or from whoever is filling in the data collection form. What we want to do is prevent or avoid information bias, because we want all the information we record, in any kind of study we do, to be accurate. In fact, the study depends on it: deciding whether there's an association between an intervention or exposure and an outcome depends on the information about both being accurate and correct.

In a randomized trial, what we do is mask or blind the participants in the study, the doctors or other healthcare providers, and the people who are assessing the outcome. Because participants, for example, have different expectations and behaviors, they may give out different types of information. Certainly the way I perceive pain may be different from the next person; we just experience pain differently, or we express it differently. But we want it at least to be expressed correctly for how I perceive that pain. We want it to be unbiased. That's not to say it's correct on some global standard of zero to ten, but it is to say it's unbiased: it's how I perceive pain. And for me to say how I'm perceiving pain, it's important that I not know what intervention I'm receiving, because that might affect how I perceive the pain.

So when you're looking at an article, as you may be when doing a systematic review, it's important to look at whether there might be information bias in the way information was collected in the study. For example, many studies say, "This was a single-masked study," or "This was a double-masked, or double-blinded, study." But that really doesn't tell you anything. Who's "double"? Who are the people who were blinded? Were they the patients and the provider? Were they the patients and the outcome assessor? It really doesn't tell you that much. We want to know what the authors meant, but there is often no way of finding out what they meant when they wrote it, and so it's unclear whether the study was blinded in the way we want, to prevent information bias. Do we know whether masking or blinding was broken in any way? Were there any patients who found out what they were taking? Was it possible to guess, for example, because the two interventions didn't taste the same or didn't look the same? So think about the impact of masking, or of information bias, on the study results.

Now, in some studies it's not possible to mask or blind. For example, you might have a surgical study where people have two different types of surgery, or one group has surgery and the other group does not. It would be too bad to ding a study for being unmasked when masking wasn't possible, and you have to take that into consideration as well. So sometimes it's just not possible. Other times, and I've worked in studies like this, for example a surgical study where one group of patients got the surgery and the other patients did not, even though it wasn't possible to mask the provider or the patients, it was possible to mask the outcome assessors.
And so the outcome assessors didn't know, once some time had passed after the surgery and you couldn't see the scars anymore (it was eye surgery), what surgery the patient had had, or even whether they had had surgery at all. In that way, the outcome assessors don't know. And as long as the outcome, for example visual acuity, is being measured and recorded by an outcome assessor who's masked, it doesn't really matter whether the doctor knows or the patient knows, unless the patient is in fact acting differently. So assess this carefully. It's a very difficult thing to judge just from reading a paper, but it's very important for your assessment of the possible risk of information bias.

We consider studies to be at low risk of information bias if there was masking and it's unlikely that the masking could have been broken; and again, you have to say who was masked. Sometimes there was incomplete masking but the outcome is unlikely to be influenced, for example, death. So the study may have been unmasked in terms of whether a patient received surgery or not, but if the outcome is death, it's unlikely to have been influenced by knowing whether the patient had surgery or not.

At high risk of bias are studies that were not masked or blinded; studies in which the masking was broken, that is, it was easy to break the mask and see what treatment you or the patient was getting; and studies where the outcome is likely to be influenced. For example, an unmasked study where the outcome is pain is highly likely to be influenced by knowing what you got. I know I would be influenced by knowing I had placebo versus an active treatment.

Another meta-analysis, by Pildal and colleagues, looked at the effect of double masking, or double blinding, on the estimate of effect. You can see here that seven studies were included, and again, what's shown is a ratio of odds ratios comparing studies with double blinding, or double masking, to those without; there's a rough sketch of how to read such a ratio at the end of this section. What they found is that trials without double blinding tended to show a more favorable effect of the experimental treatment, and the confidence interval just barely touched one. So although there is probably a statistically significant effect, do you really want to consider it statistically significant? It's a smallish effect, and I think that's why people put less emphasis on information bias than on selection bias; they put less emphasis on masking than on allocation concealment, even though masking is important, for obvious reasons, and it's actually more intuitive, I think, than allocation concealment.

As you might expect, Wood and his colleagues did a meta-analysis that examined the effect of masking on odds ratios, similar to their results for allocation concealment. What they found is that for subjective outcomes, or for mortality outcomes other than all-cause mortality, there tended to be a potential bias: studies that were not blinded tended to show results more favorable to the test treatment than studies that were blinded. So when a study is looking at something other than an objective outcome such as all-cause mortality, the blinding or masking of the participants, the healthcare providers, and the outcome assessors becomes more important. And that is probably typical of most studies today, where we're looking at quality of life and other outcomes that matter to patients.

That ends Section C.
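As a small aside, here is a minimal sketch of how a ratio of odds ratios, like the ones in the Pildal and Wood analyses, can be read. The choice of which group goes in the numerator is my assumption for illustration, not something stated in the lecture, and it may differ from the convention used in any particular figure:

% Illustrative definition of a ratio of odds ratios (ROR); the numerator/denominator
% convention here is an assumption for this sketch, not taken from the cited papers.
\[
\mathrm{ROR} \;=\;
\frac{\widehat{\mathrm{OR}}_{\text{trials not double-blinded}}}
     {\widehat{\mathrm{OR}}_{\text{trials double-blinded}}}
\]
% If benefit is coded so that OR < 1 favors the experimental treatment, then under this
% convention ROR < 1 means the unblinded trials showed, on average, a larger apparent
% benefit than the blinded trials, and ROR = 1 means blinding status made no difference.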