And now we will begin the final section of the design lecture, and this section is on adaptive designs. Up until this point in the lecture we've discussed what could be called fixed designs. For fixed designs, the design characteristics and features, such as the sample size, the hypotheses of interest, the outcomes, and the treatment groups, are all established before the trial starts. That's not to say that there are never changes to a trial with a fixed design once the trial starts. In fact, changes do have to be made because of information that you learn during the trial, either from data within the trial or from relevant data learned outside of the trial. However, some trials are designed to change depending on what's observed in the trial, and these are called adaptive designs. In this section we're going to cover some features of adaptive designs. There's been great interest recently, particularly in the pharmaceutical industry, in clinical trials that are designed with adaptive features, because by adapting to what's happening in a trial they have the potential to be more efficient and more likely to demonstrate an effect of a drug if there is one. In response to this interest, the FDA has begun drafting a guidance document for industry on the use of adaptive trials in the drug development process, so that adaptive trials can be used as part of a new drug application submitted to the FDA. In the draft of the guidance from February of 2010, the FDA defines an adaptive trial as a study that includes a prospectively planned opportunity for modification of one or more specified aspects of the study design and hypotheses based on the analysis of data, usually interim data, from subjects in the study.
It's important to note that, in the context of adaptive designs, we're talking about adaptations that are planned and detailed before data are examined. This is an important distinction since, as I mentioned before, changes in study procedures occur in more traditional, fixed designs as well. But those changes are made in response to what's seen after examining the data. When we speak of adaptive designs, we're talking about designs that have pre-specified design adaptations. There are many potential adaptations in adaptive designs. Investigators can change randomization probabilities. For instance, one might calculate the probability of success or improvement in the outcome for the different treatment groups continually as participants move through the trial. This information can then be used to adjust the probability of being assigned to the different treatment groups, so that the next participant has a higher probability of being assigned to the treatment group that's showing a higher probability of success. Another adaptation is a change in the sample size based on the accruing data. Group sequential methods, which are methods of stopping early due to benefit or harm of a treatment, and also methods for stopping early for futility, have been around for some time, and they are some of the most common and best understood design adaptations. Investigators might also be unsure about the best visit schedule for observing outcomes, so they could specify a change if it was discovered that the length of follow-up was unnecessarily long, or that it should be longer, and they could also increase or decrease the number of interim follow-up visits. You can also change the treatment groups during a trial. In order to do this, you need to specify rules under which a new treatment can be added, or rules for when a current treatment can be dropped.
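To make the randomization-probability adaptation concrete, here is a minimal sketch of response-adaptive randomization. All function names and the smoothing and floor values are my own illustrative choices, not part of any particular trial's algorithm: assignment probabilities are made proportional to each arm's observed (smoothed) success rate, with a floor so no arm is ever shut out entirely.

```python
import random

def assignment_probs(successes, trials, floor=0.1):
    """Assignment probabilities proportional to each arm's observed
    success rate. A +1/+2 smoothing handles arms with little or no
    data, and a floor keeps every arm's probability above zero."""
    rates = [(s + 1) / (n + 2) for s, n in zip(successes, trials)]
    total = sum(rates)
    probs = [r / total for r in rates]
    # Enforce a minimum assignment probability for each arm,
    # then renormalize so the probabilities sum to one.
    probs = [max(p, floor) for p in probs]
    total = sum(probs)
    return [p / total for p in probs]

def assign_next(successes, trials, rng=random.random):
    """Draw the next participant's arm from the adaptive probabilities."""
    probs = assignment_probs(successes, trials)
    u, cum = rng(), 0.0
    for arm, p in enumerate(probs):
        cum += p
        if u < cum:
            return arm
    return len(probs) - 1
```

With 8/10 successes in arm 0 versus 2/10 in arm 1, the next participant is three times as likely to be assigned to arm 0.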
You might want to change the dose or duration of one or more of the treatments, and you might decide to change the list of allowed or required concomitant medications. Investigators sometimes pre-specify that they may change from a non-inferiority to a superiority hypothesis, or vice versa, during the course of the trial. It's also possible to change the eligibility criteria. For instance, if one subgroup is responding to treatment and another subgroup is not, there are enrichment designs whereby you continue recruitment only in the subgroup that is performing well. Investigators also may choose to change their outcome measures or their methods of analyzing their outcomes. As I mentioned, calling a trial an adaptive trial does not mean that any change is allowed at any point in the conduct of the trial. There are certain principles that one should follow in the design of an adaptive trial. First, the adaptation trigger should be explicitly stated in the protocol. Second, the adaptation itself should be detailed. For instance, the investigators may want to change the sample size during the trial. When we do our original sample size calculations, we have to make a lot of assumptions based on preliminary data, data from the literature, or sometimes even just the opinion of the investigators. We have to make these assumptions in order to estimate the sample size that we need to detect a given difference with a certain power. An adaptation that might be pre-specified in a protocol would be that after, say, 50 patients have been enrolled, the investigators will re-evaluate the sample size calculations based on the data that have been seen so far, and they can adjust the sample sizes needed to maintain their original power. Or the investigators could enroll from two subgroups until a certain point in the trial, at which they could compare the treatment response rates in the two groups.
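The sample size re-estimation idea can be sketched with the standard two-sample formula for comparing means, n = 2(z_alpha + z_beta)^2 * sigma^2 / delta^2 per group. The numbers below (sigma, delta, and the interim standard deviation) are made-up planning values for illustration only:

```python
import math

# Standard normal quantiles for a two-sided alpha = 0.05 test at 80% power.
Z_ALPHA = 1.96   # z at 1 - 0.05/2
Z_BETA = 0.84    # z at 0.80

def per_arm_sample_size(sigma, delta):
    """Per-group sample size for a two-sample comparison of means:
    n = 2 * (z_alpha + z_beta)^2 * sigma^2 / delta^2, rounded up."""
    return math.ceil(2 * (Z_ALPHA + Z_BETA) ** 2 * sigma ** 2 / delta ** 2)

# Planning stage: assumed SD of 10 for a targeted difference of 5.
planned_n = per_arm_sample_size(sigma=10, delta=5)   # 63 per arm

# Interim re-estimation: after ~50 patients the observed SD is larger
# than assumed, so the target is revised upward to preserve the
# originally planned power.
revised_n = per_arm_sample_size(sigma=13, delta=5)   # 106 per arm
```

If the interim data suggest more variability than was assumed at the design stage, the re-estimated sample size grows; this is exactly the pre-specified adjustment described above.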
If they find that the response rate was, say, three times as large in one group as in the other, the investigators might choose to continue recruitment only in the subgroup that's responding better to treatment. Many of you have probably submitted a protocol to an institutional review board, or IRB, and so you're probably familiar with the IRB amendment process. In most trials with a fixed design, investigators have to submit amendments to the IRB in order to make design changes like the ones that we've talked about in the past two slides. However, in an adaptive design the IRB submission and approval includes the planned adaptations, so amendments are not required for adaptations that occur as planned. If other changes are needed that were not described in the original approval, those would have to go back through the IRB for review and approval. So the advantage of an adaptive design is that it's more flexible than a fixed design, insofar as the design allows for changes within the specified list of possible adaptations. Adaptive designs have the potential to be more efficient than a traditional fixed design. Note that I say they have the potential to be more efficient, but they aren't necessarily so. It's also true that some adaptations increase the likelihood of showing a treatment effect if there is one. There are also several limitations of adaptive designs. When investigators change design parameters partway through a trial, it can be difficult to interpret the overall treatment effect estimates. For instance, using an example from the last slide, if at the beginning of the trial we recruit from two sub-populations and then we adapt, after an interim analysis, to recruit only from the better-responding sub-population, what is our treatment effect estimate? Do we exclude participants in the poor-performing sub-population? How do we interpret our effect estimate?
Another disadvantage is that it has been shown many times that treatment effect estimates at interim analyses can be meaningfully different from the estimates that we get at the end of the trial. This lack of reliability of interim estimates can have considerable unfavorable consequences for adaptive procedures. An adaptive procedure that permits design changes, such as an increase in the targeted number of study events based on interim data about treatment efficacy, conveys at least indirect information about efficacy and safety results to the investigators, the sponsors, and other people outside of the data monitoring committee, even if the adaptive procedures are implemented by the data monitoring committee, because the investigators know that the change has occurred. Finally, from a practical standpoint, adaptive methods can be hard to implement if the methods require quick access to data to, say, change randomization probabilities. This can be a problem since there's typically at least some delay between the measurement of the data and their availability for analysis, due to lags in data entry or data editing. And these adaptive protocols require extensive documentation of potential triggers and adaptations. Our final example of this lecture is of an adaptive design. The I-SPY 2 study is a collaborative effort of academic, government, and pharmaceutical groups under the auspices of the Foundation for the National Institutes of Health Biomarkers Consortium. In I-SPY 2, the investigators are comparing the efficacy of novel drugs in combination with standard chemotherapy to standard therapy alone, for the treatment of locally advanced breast cancer. The objective of I-SPY 2 is to identify improved treatment regimens based on the biomarker signatures of disease.
In I-SPY 2 there are two arms of standard chemotherapy, plus five additional arms of a new experimental drug added to the standard therapy. Each experimental drug is tested in a minimum of 20 patients and a maximum of 120 patients. After 12 weeks, tumor tissue is collected surgically to assess whether or not the patient has a pathological complete response, which is the primary outcome measure. Regimens that have a high probability of being more effective than standard therapy within some biomarker signature graduate from the trial, and regimens that show a low probability of improved efficacy within any biomarker signature are dropped. New experimental drugs can then enter the trial as others graduate or are dropped. I-SPY 2 also uses adaptive randomization: drugs that do well within a specific biomarker signature are preferentially assigned within that signature. Candidate drugs for I-SPY 2 have been tested and found safe in phase one clinical studies, and they also have preliminary evidence of efficacy for breast cancer from preclinical or clinical studies. An independent group of experts determines the list of new drugs that are contenders for inclusion in the study. At the time of this recording, the I-SPY 2 trial was not finished yet, but there has been a publication of the design methods, and I've included a reference to the design methods paper on this slide if you're interested in more detail about the design. And this brings us to the end of the example of an adaptive design, and also to the end of the lecture on clinical trial design. In this lecture we've talked about various designs that can be used to evaluate an intervention's efficacy. You should now be familiar with the features of these designs, and hopefully you'll be able to pick them out when you read reports of trials in journal articles or when you hear about a trial in news reports. So that is the end of the design lecture. Thank you very much for listening.