Welcome back to section B. In this section, we're going to talk about minimizing bias in the included studies, specifically selection bias. Here's a pictorial overview of what I'm talking about. Selection bias, which you can see on the upper left-hand side, usually refers to a bias in how the treatment was assigned. The first thing we want to look for is: was there random sequence generation? Was the trial randomized? Also related to selection bias: was that randomized sequence somehow concealed from those who were doing the assignment, to protect against selection bias? I'll explain a little about that in a minute, but those are two separate ways we protect against selection bias: random sequence generation, which is usually done by computer these days (though you can do it other ways too), and then concealing that sequence from those who are doing the assignment.

Next, we're interested in information bias. That means getting the correct information from the patient and from the doctor, and recording it accurately, so that the form reflects what was actually done. This is addressed by masking, or blinding; those are two words meaning the same thing. I tend to use the word masking, because I do a lot of vision research and we don't like to use the word blinding for this, but you may be far more familiar with the term blinding. At least three different groups can be masked in a trial: the patient can be masked to which intervention he or she is getting; the person giving the care, such as the doctor, can be masked to the treatment being received; and those who are assessing the outcome can be masked. This protects your study against information bias. For example, if I know I'm getting placebo, I might be more inclined to say that I'm experiencing a bad outcome of some sort, or a side effect.
Or, knowing I'm getting the test intervention, I might report doing better than I really am. My carer, my doctor, might also be inclined, if he or she knows I'm getting a placebo, to get me another drug, because I couldn't possibly be getting better on the placebo. Or let's say an outcome assessor is checking how well I see after a particular procedure on my eye. That assessor might push me a little harder to read more lines on the Snellen eye chart if they know I haven't had the procedure, or if they think I have had it and want me to see better than the control group. So you can see why it's important that the patient, the person giving the healthcare, and the outcome assessor are all protected against bias by masking, by not knowing what sort of treatment the patient is getting.

Finally, we want the analysis to be unbiased. There are a couple of ways to do that. One is to use what's called an intention-to-treat analysis, which you've probably learned about in your first-year epidemiology and statistics classes. What that means is "once randomized, always analyzed": in the primary analysis of a randomized trial, people assigned to a certain group stay in that group. They don't switch groups no matter what they actually received, so the analysis is unbiased. That's an intention-to-treat analysis. We also want to make sure that we use predefined outcomes and don't switch outcomes depending on what we find. If we use outcomes that were defined before the study even started, then our readers, and those assessing our results, can feel more assured that we haven't switched the outcomes based on the study results. So those are the three types of bias we'll be covering: selection, information, and analysis bias, and where each has an effect in the process, not only in assigning patients to treatment but in assessing the outcomes as well.
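To make the "once randomized, always analyzed" idea concrete, here's a minimal sketch in Python. The data and function name are hypothetical, invented purely for illustration; the point is simply that outcomes are grouped by the arm each patient was assigned to, not the treatment they actually received.

```python
def itt_event_rates(patients):
    """Group outcomes by the arm each patient was ASSIGNED to,
    ignoring which treatment they actually received (intention to treat)."""
    counts = {}
    for p in patients:
        arm = p["assigned"]          # deliberately NOT p["received"]
        events, total = counts.get(arm, (0, 0))
        counts[arm] = (events + p["event"], total + 1)
    return {arm: events / total for arm, (events, total) in counts.items()}

# Hypothetical trial data; the second patient crossed over to control
# but is still analyzed in the "new" arm.
patients = [
    {"assigned": "new",     "received": "new",     "event": 1},
    {"assigned": "new",     "received": "control", "event": 0},
    {"assigned": "control", "received": "control", "event": 1},
    {"assigned": "control", "received": "control", "event": 0},
]
print(itt_event_rates(patients))  # {'new': 0.5, 'control': 0.5}
```

A per-protocol analysis would instead group by `received`, which is exactly the switch that can reintroduce bias.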
So let's talk briefly about random sequence generation. As I mentioned, this is how patients are assigned to a particular intervention, and it happens before the participants know they've been allocated; it happens in the background. In studies I've done, for example, it happened a month or two before we actually started enrolling. The computer generates the random allocations ahead of time, and we check the programs a couple of times to make sure things really are randomly assigned. The benefit of random assignment is that it accounts for both known and unknown confounding variables. In an observational study comparing two interventions, we know what to look out for, what the potential confounding variables might be, but there are unknown confounders too, and we can't possibly make the groups similar on those if we don't know what they are. The advantage of randomization is that, given a big enough sample, theoretically there are no systematic differences between the two groups on either known or unknown confounding variables. That is, it prevents selection bias: differential selection depending on which treatment you get.

So do we know this bias exists? Oh yes, we know it from many studies. I debated whether to put a more recent study on this slide, because this one is shown all the time, but I use this slide from 1983 just to point out how long we have known that randomization really matters. This example compares coronary artery bypass surgery using both randomized trials and what this particular publication called quasi-experimental studies, meaning observational studies. It was a meta-analysis comparing studies that were randomized with studies that were not, and the authors looked at mortality in the medical group.
One group was receiving medical treatment and the other group was receiving surgery, that is, coronary artery bypass surgery. When the authors looked at the differences in mortality, there was a much smaller mean difference, and a smaller standard deviation, in the randomized controlled trials than in the quasi-experimental studies: a 4.4% versus a 13.8% mean difference in mortality, with a bigger standard deviation for the quasi-experimental group. What does this mean? It probably means selection bias is at work. By randomizing the patients, and not knowing which patient was going to be assigned to which treatment, we avoid confounding by both known and unknown variables. That's why, with the quasi-experimental design, you see a big difference between medical and surgical treatment, and with randomization you see a much smaller difference, probably one that doesn't amount to much of a difference at all, between the patients who got medical treatment and those who got surgical treatment.

So does randomization protect against bias? I used to think we could just take the observational studies and apply a multiplier of some sort: if we knew their estimate of effect was, on average, say, three times greater than the estimate you'd get from randomized trials, we could correct for it. But that doesn't really work. This is a systematic review of meta-analyses that compared the results you get from randomized trials with those from observational studies. As you can see from these estimates, relative to the line of no difference, sometimes the randomized trial shows a larger effect and sometimes a smaller one, and often there is no difference whatsoever, that is, the 95% confidence interval crosses zero. There's no consistent difference between randomized trials and observational studies, so there's no such thing as a multiplier we can apply.
We think that randomization protects against selection bias, but we can't tell what direction the association will be in, and so there's no way we can really say what the size of the effect is. What we do, to protect against the possibility of selection bias, is say that when the assignment is unpredictable, there's a low risk of bias. Unpredictable methods for assigning the treatment to one group versus the other include: use of a random numbers table, such as you might have in the back of your statistics book; a computer random number generator; some sort of stratified or block randomization, which probably uses a random number generator plus some fancier statistics for the stratification or blocking; or even a coin toss, which is considered low risk of bias as long as no one cheats. Now, I have to say that I might not consider that last one low risk of bias, because of the temptation to just toss the coin or throw the dice another time, but theoretically it is as good as randomizing with a computer.

There are also methods of assigning treatment, that is, of generating the treatment assignment, that are considered predictable and put a study at high risk of bias; those are shown in red on this slide. Some of them are called quasi-random, which is a different use of "quasi" than we saw on the earlier slide, so that's understandably confusing. For example, you might see patients assigned by date of birth: those born in an even year got X, and those born in an odd year got Y. That's theoretically random; the trouble is you can fool around with it. The same goes for using the day of the visit, or whether the patient ID is an odd or even number, or alternation. All of those methods are meant to be as good as randomization, but the trouble is they can also be predicted. So they aren't considered good methods for assigning patients to a treatment. And then there are totally non-random methods.
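As a sketch of what "low risk" sequence generation looks like in practice, here's one way block randomization might be implemented. This is a minimal illustration, not code from any trial: the function name, arm labels, and block size are all assumptions. Each block contains equal numbers of each arm in a shuffled order, so group sizes stay balanced over time while the next assignment stays unpredictable.

```python
import random

def block_randomize(n_blocks, block_size=4, arms=("C", "D"), seed=None):
    """Generate an allocation sequence in balanced blocks: each block
    holds equal numbers of each arm, shuffled, so the groups stay
    balanced without the next assignment being predictable."""
    rng = random.Random(seed)  # seed only for reproducible checking
    per_arm = block_size // len(arms)
    sequence = []
    for _ in range(n_blocks):
        block = list(arms) * per_arm
        rng.shuffle(block)
        sequence.extend(block)
    return sequence

seq = block_randomize(n_blocks=3, seed=42)
print(seq)  # 12 assignments; every block of 4 contains two C and two D
```

Note that even a properly generated sequence like this one can be subverted if it is visible to the people enrolling patients, which is exactly the allocation-concealment problem discussed next.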
For example, the patient or the clinician can make the choice as to which intervention the patient gets, and this non-random method puts the study at high risk of selection bias. With a new treatment, for instance, we might not put the sicker patients in the new treatment group; we might put the patients who are doing a little better there. Not because we're bad people or deliberately trying to skew the results; it's just human nature in how we might assign participants to a particular intervention.

Now, allocation concealment. This is a complicated concept and one that people have a lot of trouble with. They often mix it up with masking or blinding, but it's not the same. Allocation concealment happens at the start of a trial, as the patients are being allocated to their treatment. When you recruit a patient to a study, you shouldn't know ahead of time how he or she is going to be assigned and to what intervention; you shouldn't be able to know until exactly the moment of assignment. Here's an example, on the right-hand side, of something that, believe it or not, does happen. I might have a randomly allocated list of assignments, C or D, and patients who come in, in a particular order. The patient who comes in first gets allocation C. The patient who comes in second gets allocation C. The patient who comes in third gets treatment D. If this list is up on the wall, I can see that my fourth patient is going to get treatment C. Now, if I believe that treatment C is the new treatment and treatment D is the old treatment, I might say to person number four, who happens to be the husband of my next-door neighbor, why don't you sit down, I'm going to take Mr. Smith next, who's actually number five. That makes sure that number five gets C, because I don't believe C is as good as D.
And then my next-door neighbor's husband is going to get treatment D, because they've switched places. I can see what the treatments are, up on the wall. So even though the order of treatments was randomly allocated, I can see what comes next; the allocation is not concealed. And in that case, why did you even bother to randomize? Because I can see what's next, we could have selection bias: patients assigned to a treatment based on some characteristic that they have.

Does allocation concealment make a difference? Yes. This is an area where people really do worry about the possibility of subverting the randomization and skewing the results. This is a systematic review published in 2008 by Pildal and colleagues. They took various studies that had examined whether randomization and allocation concealment made a difference to the size of the effect in a meta-analysis. Each of these studies, and you can see that there are seven of them, looked at another group of trials. For example, Schulz looked at 250 trials and computed a ratio of odds ratios: the odds ratio you get from trials with adequate allocation concealment divided by the odds ratio from trials whose allocation concealment was inadequate or unclear. What they found, on average, was that trials with unclear or inadequate concealment showed a more favorable effect of the experimental treatment. This is very worrisome, and for this reason people advise that you either not include trials with inadequate or unclear allocation concealment, or, if you do include them, do something in your analysis that takes account of them. In the same year, Wood and colleagues examined a little more closely where inadequate allocation concealment might be a problem.
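To show what a ratio of odds ratios is arithmetically, here's a small worked example. The event counts below are made up for illustration; they are not the numbers from Schulz, Pildal, or any real trial. Treating the event as a bad outcome, the inadequately concealed trials in this toy example show a more favorable (smaller) odds ratio, which is the pattern the review described.

```python
def odds_ratio(events_t, total_t, events_c, total_c):
    """Odds of the event in the treatment arm divided by odds in control."""
    odds_t = events_t / (total_t - events_t)
    odds_c = events_c / (total_c - events_c)
    return odds_t / odds_c

# Hypothetical pooled 2x2 counts (illustrative only):
or_adequate   = odds_ratio(30, 100, 40, 100)  # trials with adequate concealment
or_inadequate = odds_ratio(20, 100, 40, 100)  # inadequate/unclear concealment

# Ratio of odds ratios: adequate / inadequate.
ror = or_adequate / or_inadequate
print(round(ror, 2))  # 1.71 — inadequate concealment looks more favorable
```

A ratio above 1 here means the inadequately concealed trials produced the more flattering estimate of the experimental treatment, which is the direction of bias the lecture warns about.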
What they found is that adequate allocation concealment makes the biggest difference when one is examining subjective outcomes, and outcomes other than all-cause mortality are, presumably, more subjective. For example, assigning breast cancer mortality, prostate cancer mortality, or cardiovascular mortality as the cause of death is probably somewhat subjective, and when you have subjective outcomes, it appears that inadequate or unclear allocation concealment is a problem in terms of potential bias. They found that inadequately concealed studies had a more beneficial odds ratio than adequately concealed studies. So this is where some of the focus has been: if you're looking at all-cause mortality, then perhaps allocation concealment is less of a potential problem than if you're looking at other outcomes. But most studies do look at outcomes in addition to all-cause mortality, so one could correctly surmise that allocation concealment is an area of potential risk of bias in any randomized study.

So what do we consider low risk of bias for allocation concealment when we're examining a trial for possible inclusion in our systematic review and meta-analysis? Central allocation: that is, one has to call a coordinating center by telephone, or dial in over the Internet, or perhaps go to a central pharmacy, and ask for the next assignment for the patient. This is why some people confuse allocation concealment with random generation of the assignment sequence: the assignments may be generated at the coordinating center by a computer, but the allocation is done by calling in by phone to find out what the next patient is assigned to. Some studies use sequentially numbered opaque envelopes, and some use sequentially numbered identical drug containers; these are also considered fine if you want to label a study as at low risk of bias for allocation concealment.
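The central-allocation idea can be sketched in code: the point is that the precomputed sequence lives only at the coordinating center, and a site learns an assignment only by irrevocably registering an enrolled patient. This class and its names are hypothetical, a minimal sketch of the protocol rather than any real trial system.

```python
class CentralAllocator:
    """Sketch of central allocation: the random sequence is held at the
    coordinating center, and an assignment is revealed only when a
    patient is registered, one at a time, with no peeking ahead."""

    def __init__(self, sequence):
        self._sequence = list(sequence)  # never exposed to the sites
        self._next = 0

    def register(self, patient_id):
        """Commit a patient, then reveal (and log) the next assignment."""
        if self._next >= len(self._sequence):
            raise RuntimeError("allocation list exhausted")
        arm = self._sequence[self._next]
        self._next += 1
        return patient_id, arm

center = CentralAllocator(["C", "D", "D", "C"])
print(center.register("patient-001"))  # ('patient-001', 'C')
print(center.register("patient-002"))  # ('patient-002', 'D')
```

Compare this with the list-on-the-wall example earlier: there, the whole sequence was visible, so the recruiter could swap patients; here, nothing is knowable until the patient is already committed.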
However, there are some studies that you would want to consider at high risk of bias, that is, where the allocation was predictable. The random sequence might be known to the staff in advance, for example if it's based on the day of the week or the birth year. Or the envelopes, or whatever packaging is used, may not have all the safeguards: envelopes may not be opaque, so you may be able to hold them up to the light and see the next assignment. Any kind of predictable or non-random sequence puts a study at high risk of bias due to failure to conceal the allocation adequately. That ends section B.