Hi, again. This is Kay Dickersin. Now we're moving into Section C, where we're going to talk about information bias in doing the systematic review. So, what are our particular concerns regarding metabias in doing a systematic review? In terms of information bias, we're worried about the accuracy of our quality assessment, or risk of bias assessment, the accuracy of the data abstraction from the individual studies included in the systematic review, and how complete that data abstraction was.

Let's talk first about inclusion bias, as a subset of information bias. I already mentioned it: if we know who the study investigators were on a particular study, or the outcomes that they assessed, or what they found, does that affect what we include in our systematic review? If I know that the Canadian study of mammographic screening in women 40 to 49 showed a null result for mammographic screening in terms of breast cancer mortality, am I more likely to make up inclusion criteria that would ensure that the Canadian study is not included in my systematic review? That's what we're worried about with information bias in terms of whether studies are included or not.

Now, how about if I know the results of a study? Does that change how I abstract the data? Let's say a study was actually included, and I know ahead of time that the individual study I'm abstracting data from had positive results. Does that change how I abstract the data? Am I more likely to think they did a good job? Am I more likely to look to make sure they examined multiple outcomes? Or maybe I'm more likely to look if they have negative results. And how about if I know the study investigators, or the outcomes? Just as before, does it affect whether the study gets in? Does it affect my quality assessment? There have been numerous studies, on other topics, showing that we tend to think a study is better done if it has positive results. Or we tend to think, and there are results on this too, that a study is better done if the author is a man rather than a woman. These things are known to affect our assessment of the quality of individual studies; would this also be true for people who are working on systematic reviews and meta-analyses?

There have been about five studies that have examined whether it's worth it to blind, or mask, the person doing the systematic review to who the authors of the individual study are, their institutions, what journal their article was published in, and their findings. So for example, let's say I'm extracting data for my systematic review from a series of ten different studies. If I can see where each study was published, am I more likely to think that a study that comes from a high-ranking journal is of higher quality than a study that comes from a low-ranking journal? If a study was published by one of my close colleagues, but it's really not a very good study, am I likely to give it a higher mark because this is one of my close colleagues, and I know that he or she does very good work, and so it must just have been a fluke that this got out somehow? So perhaps I'm influenced by knowing who did the study, what journal it was published in, what institution they come from, and their findings. Maybe that could influence me. So, it turns out that five studies have looked at whether you can mask the person doing the data abstraction for the systematic review to these elements, to see whether that makes a difference in the data they extract.
So in the old days, what we used to do is something called differential photocopying, and it took forever. You would cut up an article and present to the data extractor only the title and the methods section. Then the person would extract the data on risk of bias and what are called quality items for that study. Then we would compare the results extracted for the study that was differentially photocopied with the results extracted where there was no differential photocopying, and see if they got different answers. It turns out that all this blinding, or masking, of reviewers to who did the study and what they found really didn't make a difference. Although one study in 1996 did find that masking made a difference, the rest of the studies really didn't. Maybe in specific cases it appeared to make a difference, but overall, a systematic review and meta-analysis of this particular topic showed that it doesn't make any difference to mask the data abstractors to the authors, their institution, the journal, and the results before they abstract the data. This is good news for all of us, because that differential photocopying took a long time and was a real hassle to do. So what this means is that you can abstract data from your individual studies without anything fancy. Just make sure two people do it; then you can compare what the two people extract, which is a good way to see whether they're finding the same results and to discuss any differences.

So what are some other issues related to information bias? Sometimes we're worried that the data we get from a graph are not accurately abstracted. You may find, for example, and in fact I know you will find, in some of the papers you are looking at for your systematic review, that the only way you can get data on an outcome you're interested in is from a graph. And the graph might show proportions, so you then have to guess at what the numerator and denominator are. Well, that's scary to do, and a lot of what we do in systematic reviews is scary and involves a judgment call, but this is one of the scariest, where you're assuming a numerator and denominator and all you have is that graph. We do have some software now, I'll show you, that can do this and apparently is pretty accurate; there's also a small sketch of the arithmetic below. You can also go back to the investigator in an email and say, this is what I deduced looking at your graph, is it correct? And some of you might be able to get an answer.

Another place where information bias might be an issue is whether the experience of your abstractor is a factor. So for example, when we're doing systematic reviews in our Cochrane Eyes and Vision group, we hire graduate students, and we wonder whether someone who's brand new at the task isn't as good, isn't as accurate, as a person who has a lot of experience reviewing this type of study and extracting data. Another question about information bias, and I mentioned it briefly on the previous slide, is whether it's necessary for two people to abstract the data and then compare what they found in order to assure that we have reliable data from that particular study. I don't think we have complete data on that yet, but most of us do duplicate abstraction, certainly for the results. And then finally, there's the question you will have by now, because of what I've said in previous sections: can we rely on what's in the publication? And unfortunately, the answer is probably no.
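To make that graph-reading step concrete, here is a minimal sketch, in Python, of the arithmetic a ruler or plot-digitizing software is doing: calibrate against two known points on the axis, convert a measured position into a data value, and then back-calculate an approximate numerator from a reported group size. The pixel positions, percentages, and group size below are hypothetical, purely for illustration.

```python
def calibrate(pixel_a, value_a, pixel_b, value_b):
    """Return a function mapping a measured position on an axis
    (pixels or millimeters) to a data value, assuming the axis is linear."""
    scale = (value_b - value_a) / (pixel_b - pixel_a)
    return lambda pixel: value_a + (pixel - pixel_a) * scale

# Hypothetical calibration: the y-axis reads 0% at pixel 400 and 100% at pixel 50,
# and the bar for the outcome of interest tops out at pixel 295.
to_percent = calibrate(pixel_a=400, value_a=0.0, pixel_b=50, value_b=100.0)
proportion = to_percent(295) / 100            # roughly 0.30

# If the paper reports the group size (say n = 120), back-calculate the numerator.
denominator = 120
numerator = round(proportion * denominator)   # a best guess only; confirm with the authors

print(f"Estimated {numerator}/{denominator} ({proportion:.1%})")
```

However you get the number, it remains an estimate, which is why checking it with the study investigators, as suggested above, is worthwhile.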
Whether there's bias or not is to some extent open to question, but I think we know there are reporting biases, and so we will probably have to look in more than one place to see if the data agree across those gray sources, the FDA database and clinicaltrials.gov, as well as the publication. So what I'm showing you here is an example from JAMA, where a correction was made because there was an error in data extraction and analysis. Now, it's probably pretty rare that someone publishes a correction, and we're glad for it of course, but errors are made all the time. I don't know about you, but I make errors, and I am very grateful when there's another person doing data extraction, because we get tired and we misread things. It's just human nature, and we're human beings at the end of the day. So, I think we have to accept that errors are made and build some protection into our systematic reviews.

Here's another example I mentioned, which is how you can extract data from a published article using a little piece of software, or even just using your ruler. So if you're interested in this and you come up against this problem, there are resources you can use to try to minimize information bias.

And then there's the question I mentioned about whether experience affects accuracy. These are some data from a 2009 study where people with less experience were compared with experts, let's call them, to see whether they do a worse job, that is, make more errors, than people who have more experience. What the authors found, interestingly enough, is that the error rates were similar; they didn't depend on how much experience the person had. Inexperienced people took longer than the experts, but the error rate was the same, and so that's reassuring for us. It certainly bears replication in another study, but it was at least initially reassuring that experience does not affect accuracy.

Finally, I've already mentioned a couple of times that one of the ways we can protect against errors in data extraction is to have two people do the extraction and then compare what they've extracted. So what are the possible ways you could extract data for your systematic review? The first is that you could have one reader go in and extract the data onto a form. The second is to have one reader go in and extract the data, and a second reader come in, look at what the first reader has extracted, and say, yes, I agree with it, or no, I don't. That's called single data extraction plus verification, and that's what was done in the study I'm presenting on this slide, the Buscemi study published in 2006. And the third thing you could do is have two separate individuals go in and extract the data without talking to one another or knowing what the other person did; that's completely independent data extraction. The two people who extracted independently then get together after their extraction, look at where there are differences in what they extracted, and decide between them what the correct response is. So what Buscemi did is compare single extraction plus verification with double extraction, and he found that there was less inaccuracy, an overall lower error rate, with double data extraction. It's only one study, though, and some people are concerned that double extraction is much more expensive and time-consuming than single data extraction with verification.
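As a sketch of what the comparison step in fully independent double extraction can look like in practice, here is a minimal Python example, assuming each extractor fills in the same simple form; the script flags the fields where the two extractions disagree so the pair can adjudicate against the article. The field names and numbers are made up for illustration and are not from any of the studies discussed here.

```python
# Hypothetical extraction forms for the same study, completed independently.
extractor_1 = {"n_randomized": 240, "n_events_treatment": 18,
               "n_events_control": 31, "follow_up_months": 12}
extractor_2 = {"n_randomized": 240, "n_events_treatment": 18,
               "n_events_control": 34, "follow_up_months": 12}

def discrepancies(form_a, form_b):
    """List fields where two independent extractions disagree,
    so the extractors can adjudicate against the source article."""
    return [(field, form_a.get(field), form_b.get(field))
            for field in sorted(set(form_a) | set(form_b))
            if form_a.get(field) != form_b.get(field)]

for field, a, b in discrepancies(extractor_1, extractor_2):
    print(f"Disagreement on {field}: extractor 1 = {a}, extractor 2 = {b}")
```

The point is simply that disagreements are surfaced and resolved by discussion, rather than one person's reading standing unchecked.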
One of the solutions suggested in the IOM Systematic Review Standards was that perhaps you could do double data extraction for the results that you will be combining in your systematic review, and for less important data, such as the investigators, the journal in which the study was published, and the dates, this could be done by a single extractor plus a verifier. That was just a suggestion for those with limited resources, and I think the jury is still out on whether one needs to do double data extraction. In the systematic reviews that we do in the Cochrane Collaboration, we do double data extraction, and I hope you will also in this course. So that concludes the section on information bias. In the next section, we're going to talk about bias in the analysis and ways we could possibly prevent or address it.