Welcome back to the class. What we've decided to do in this course is use the evaluation of iCCM in Ethiopia as a way to give you a little more in-depth information about what actually goes on in an evaluation. Today we have three of the principal investigators who planned and ran the evaluation in Ethiopia, and I'm going to have the opportunity to ask them questions and let them talk a little bit about the difficulties: what made it special, and what made it hard to run the evaluation in Ethiopia? First I'm going to ask each of them to introduce themselves and explain their particular role within the evaluation. Luwei. Hi, dear students of Johns Hopkins. I'm Luwei Pearson. Currently I'm the Deputy Director of the Global Health Program of UNICEF. When this evaluation happened, I was the Chief of Health in UNICEF Ethiopia. I worked there for six years. It was a great pleasure to team up with the Johns Hopkins team to do the evaluation and to support program implementation based on the findings of the evaluation. It's great to have you back at UNICEF. Hi students. My name is Tedbabe Degefie, and currently I am responsible for the Global Maternal Newborn Health Program in UNICEF. At the time of the evaluation, I was responsible for the implementation of integrated community case management (iCCM) of common childhood illnesses. I collaborated with Agbessi and other colleagues in the design of the evaluation and also in setting up the implementation of the program itself. You and I were in Ethiopia; now we are both here. Yes. Thank you. Hello, my name is Agbessi Amouzou. Welcome back to the course. I was the principal investigator for the evaluation from the Johns Hopkins side, working with colleagues in Ethiopia to implement it. Listen, let's start with you, Luwei. Our students have all seen an overview of the evaluation, but one of the things that struck me is the number of stakeholders.
Could you describe a little bit which stakeholders were involved and who the major players were? Because that seemed to be one of the distinctive things about this evaluation. In Ethiopia, everything, every program, including the iCCM program, is led by the government, because there is strong leadership from the Ministry of Health over the primary health care system. This evaluation was funded by the Canadian Catalytic Initiative, which aimed to reduce child mortality worldwide. They invested $100 million globally, and Ethiopia was one of the focus countries. The Oromia region, the biggest region of the country, home to a large share of Ethiopia's population, was chosen as the evaluation site. There were also several implementation partners supporting the government of Ethiopia, such as Save the Children (UK, US, and Italy), USAID, and L10K, a JSI project. There were many stakeholders, but all under government leadership and coordination. That's the short description. Also, in Ethiopia we have the Federal Ministry of Health and the regional health bureaus. Can you talk about the differences there? Tedbabe, you're from Ethiopia; tell us about the federal system and the regions. The Federal Ministry of Health is responsible for overall policy, normative guidance, and large-scale procurement for the health system. Then there is the next level, the regional health bureau. We have nine regions and two special city administrations, so in total we have eleven regional entities. From the regional level, next comes the zone; each region is divided into zones. Then the lowest level is the district. The district, in turn, is divided into villages, which is where the health extension workers, or the health posts, are located. On average, one health post, staffed by two health extension workers, serves between 5,000 and 7,000 people.
This is basically the three- or four-tier health system of Ethiopia. For this evaluation, the Federal Ministry of Health agreed that the study would take place in Oromia region. The regional health bureau director, the head of the bureau, was directly involved with everyday decision making and support of the evaluation. Let me ask you: with all these different groups, the implementers like JSI and Save the Children, when you planned this, did they all agree on what was going to happen, or did you have to spend a lot of effort to convince people to play nice, to go along together? Was that especially difficult? It was an interesting process. I remember a conversation with Dr. Cassetta, the Minister of Health at that time, after the policy breakthrough of letting community health workers treat pneumonia at the community level using antibiotics. I said to him: now there is a policy breakthrough, and this is 2011; we are just three or four years from 2015, when we have to meet the MDG 4 target to reduce child mortality. I asked, "Do you want to go fast to meet the MDG 4 target, or do you want to go slow to build national capacity?" He said, "I want both. But I really want to meet MDG 4 for Ethiopia, because many children die from avoidable causes." Because the government wanted to implement the program fast by involving the partners, that's where L10K, Save the Children, and IFHP came in, with the government's support. There was never a problem coordinating effectively under the government's leadership. That was key to keeping it all together. As soon as the government agreed to the evaluation, it was like marching orders for those NGOs and partners to work. Also, L10K, which was a project from JSI, and Save the Children, and IFHP, the Integrated Family Health Program, all had cooperative agreements with UNICEF and the government to implement quality iCCM in the country.
I have to step back and say that Ethiopia offered some opportunities for this evaluation. The first is that there was already a program on the ground, through the health extension workers, that was managing malaria and diarrhea. Then this new policy to allow these community workers to also treat pneumonia was enacted, around 2009. The decision to really revamp iCCM and roll it out nationwide rapidly had been made, and there was also interest in learning quickly what the early results of this scale-up were, in order to support the rapid expansion of the program across the country. In some ways, all the stakeholders were behind this evaluation. I would just like to add to what [inaudible] just said, to give some background on how the policy change happened. The Health Extension Program started in 2003-2004 as the government's flagship program to implement primary health care in rural Ethiopia. Its primary focus was health promotion and the empowerment of communities. The Prime Minister wanted it to be a drug-free program. Exactly. The health extension workers are modestly trained, two per health post for about 5,000 to 7,000 people. We thought this was a missed opportunity for Ethiopia: those workers not treating pneumonia, the single most important killer of under-five children. At the National Technical Working Group level, all the partners Dr. [inaudible] mentioned were really advocating. We were presenting local evidence and global evidence, and taking partners on a learning visit to Nepal. A lot of advocacy was going on, and then suddenly, at the end of 2009, the policy was changed, just like that. The next day the ministry called UNICEF and said, "I want immediate implementation. Immediate." At national scale. At national scale. That is the background. Well, listen, in the course we always talk about how important it is to have an impact model developed.
You've got this long history of things they were already doing, and then a sudden shift; so how did you handle the development of an impact model for your evaluation? The iCCM evaluation, as Luwei mentioned, was a global undertaking. Under the Catalytic Initiative a number of countries were implementing the program, and there was interest in really evaluating the programs in those countries. The main question we set out to answer was: can the country scale up the iCCM program rapidly to reduce child mortality and accelerate progress toward MDG 4? That was the main question. Now, the premise of the iCCM program as planned was really to train lay workers, deploy them into communities, and have them identify illnesses and treat them. That was the premise for this evaluation, and we set up our impact model based on it. A national plan had been developed. The program had prepared a detailed national plan that included how they were going to train the health extension workers, how they were going to procure the drugs and deploy them, how they were going to supervise and monitor them, and so on. We based our impact model first on a detailed review of that plan. To understand the potential impact, we also went back and ran the Lives Saved Tool (LiST), which you've heard about, to estimate the number of lives that would be saved based on the coverage levels being targeted. As I mentioned, the program in Ethiopia was special in the sense that management of malaria and diarrhea was already ongoing and pneumonia was being added. Unlike other countries, like Malawi, where they really didn't have any of these services in the community before, Ethiopia had background activity that needed to be taken into account in our impact model. We went through that process to understand what was on the ground and what the implementation plan was, and then came up with our impact model from there.
It's really nice to have the evaluation designed at the beginning of the program as an integral part, instead of as an afterthought. We learned a lot from the evaluation. I remember, in the beginning, when Professor Bob Black and you came to Ethiopia, and [inaudible], we were very busy with the implementation: preparation, training, supplies, and so on. We thought, look, there's international evidence, there's also the evidence SNL generated on pneumonia treatment in different countries, and Ethiopia also had some local evidence. Why do you need another evaluation? Let's just do a good job. Right? Right. But then the many interesting findings of the evaluation really informed how we would improve the program. Right. Yeah. Also, if I may add to that, for program implementers the value of evaluation is not apparent initially. I was one of the skeptics myself about this evaluation; we were in such an urgent mode to accelerate implementation. Also, we knew that if a child has pneumonia, the child needs an antibiotic; untreated pneumonia kills children. It's a proven intervention. What are we evaluating? I even expressed my skepticism early on, but in hindsight now, and even while we were implementing the evaluation, it became really clear that it informed us in many ways and helped us address major gaps in the implementation. In your study, would you tell me a little bit about what you tried to do in terms of measuring quality of the implementation, or health systems, or quality of care? How did you handle that? I can start, and [inaudible] or [inaudible] can jump in. One principle we had in setting up this evaluation was not to design it in a way where we come and collect data, analyze them, provide results at the end, and say, "Okay, this worked; this didn't work." We wanted to make sure that the evaluation was also supportive of the program, because ultimately it is a successful program that we all want; we want to save lives.
It was important in our design to make sure that we had some midterm results of the program in order to provide feedback to implementers on the roll-out of the program and on the quality of care, so that they had the opportunity to strengthen the program. That was part of the design itself. That's how we came to design the quality of care and implementation strength components of the evaluation. Let me follow up on that a little bit. You're saying that one of the purposes was not just, at the end, to know whether it worked or not, but also to get information that would help improve the program. Do you want to talk a little bit more about how you tried to collect data on that, and what some of the key indicators were? Which one of you? I can start. We measured many aspects of the program implementation. One is implementation strength: whether the health posts and the health centers would be ready to deliver treatment of common childhood illnesses in terms of trained manpower, supplies, and supervision. All three of these aspects needed different indicators and different tools. For example, to assess the readiness of the health posts and health centers, we did a rapid health facility assessment. We modified the WHO health facility assessment tool to look at these different aspects, including direct case observation. To assess quality, in addition to direct case observation, we also needed to look at how health workers perform when they are not observed directly. What does their usual care look like? For that, we had a registration book where they record their cases, a kind of patient case record. The supervisor, and also the evaluation team, could go and see whether it was complete, whether it was clear, and whether the documented entries were consistent.
For example, if they recorded a respiratory rate of 30 in a 12-month-old child and classified it as fast breathing and pneumonia, that is inconsistent; it is incorrect. If they classified pneumonia and did not give cotrimoxazole, there is an inconsistency. This was a proxy assessment of quality. To capture the overall community perception and demand, and how the community perceived the care and its quality, we also had qualitative data collection. I would say we used different sources and different indicators. From the register you just mentioned, we were able to figure out the rate of correct assessment, correct diagnosis, and correct treatment. But we were not able to answer the question on mortality rate reduction. Yeah, absolutely. That gave us two options for the evaluation: a randomized controlled trial, as well as a LiST modeling exercise. Let me ask you, that leads me into this whole question of, in the class, defining priority questions and setting the designs. At least for me, I think of the fact that you ended up using an RCT in this study, which, let's be honest, is fairly rare for an evaluation. If you're trying a new drug or something like that, we have to do it for an efficacy trial, but for a program evaluation it's unusual. Do you want to talk a little bit about what the priority questions were, and how you chose to do an RCT? The priority question was what I mentioned before: are we able to train lay health workers and roll them out to the community to manage pneumonia, malaria, diarrhea, nutrition, and so on? Can we do that to reduce mortality? That was the prime priority question. Now, as we mentioned, there was real eagerness in the country to roll out quickly, but also to understand the early results, not only in terms of program implementation, but in terms of mortality reduction. There was also interest in having a very rigorous way of demonstrating that.
Ethiopia presented a unique opportunity for implementing a randomized design, especially in the Oromia region, where we were able, through several discussions with the Regional Health Bureau, the Federal Ministry of Health, and the implementing partners, to identify two zones in the region with enough districts, called woredas, that could be randomly assigned to intervention and comparison areas. Also, to avoid sensitivity around this, the language we used was "phases": phase 1 and phase 2. Exactly. Because it was difficult to roll out at once across all 31 districts in these two zones, it was easy to convince people that, well, let's do this in phases. The first phase would be our intervention areas, and the second phase, which would be held back for about 18 months, would be the comparison areas. But we had only 18 months before roll-out in those woredas. Although we had the opportunity for a randomized controlled trial, we couldn't hold back the comparison areas for longer than 18 months; that was what was tolerable for the implementers. If you were running a research program, perhaps you would have designed the baseline and endline with three or five years in between, to have a complete cohort of children and allow your implementation strength to kick in. But in a real setting, initially the government said, "I'll give you six months." Then it was, "Okay, we'll give you 12 months." We said, "Look, at the end of 12 months we will have barely finished the baseline survey." We negotiated hard for 18 months, so we couldn't do more than that. That's why the effect on mortality reduction was also quite limited. It was quite limited during the 18-month period. We were hoping to have a third measurement point to document the more mature phase of the program in relation to child mortality reduction. That's right. How did we do that?
We had a randomized design, and the selection of the two zones was based on the presence of partners that were strong and able to roll out fast, because we had only 18 months. Once we did our baseline data collection- The clock started ticking. Yes, so they had to roll out quickly. They covered the entire set of intervention districts with iCCM, and they were able to do that successfully within three months. More than 80 percent of the woredas [inaudible] out there doing the work, so that we were able to measure mortality over the 18-month period. But the 18-month period was still quite a challenge. Yeah, because I think of so many programs that we work with; you're lucky if, 18 months after funding, you've actually started anything, much less being ready to measure impact. I guess you were in special circumstances. This was special. The partners were strong, and UNICEF was holding their feet to the fire as well, because they had to move quickly, and they had the money to make sure that they had the drugs. They had all the procurement processes in place to make sure that they were sending the drugs to the health posts and replenishing them. There were also quarterly supervision meetings. Yes, review meetings. Review meetings to make sure that the HEWs understood what they were doing and got refreshed on the program.