[MUSIC] Adaptive designs have become increasingly popular in clinical trials. The Food and Drug Administration in the US defines an adaptive design as a clinical trial that allows for prospectively planned modifications to one or more aspects of the design, based on accumulating data from subjects in the trial. There are two key components here. First, the changes must be prospectively planned: you must pre-specify both when you want to make a change and what the change will be. This is not an excuse to make ad hoc changes based upon your sense or feel for how things are going, because that can often cause bias. Second, the modifications can take many forms, and we'll discuss a few examples of them as we continue with this segment.

Why should we consider adaptive designs? Why do we want to make changes? There are many different types of benefits that are possible. First, they can be more efficient. From the statistical perspective, they can increase power or the ability to target subgroups. From the administrative side, they may have a smaller sample size, require a shorter trial, and cost less. There are also ethical considerations: if there is harm, or no chance of obtaining an answer to your question (commonly referred to as safety and futility, respectively), then you should not ethically continue the trial, and an adaptive design lets you stop. Sponsors like adaptive trials because they like the flexibility, and it can also help with recruitment. For example, a participant may have a higher chance of getting the new treatment, or may know that it is a play-the-winner design, so they will be guided towards a treatment that is more likely to be effective.

However, there are also some disadvantages; adaptive trials are not always better. They are analytically complex, so simulation studies are key for evaluating these designs. There is also a risk of inflating the type I error, and you are relying on the results of an interim analysis, based on only a small fraction of your population, to make decisions about changes. If your population or the results change over time, this can be a real issue. It is also often difficult to explain what was done. You need to be able to access the data quickly so that you can determine whether or not a change should occur. Changes in the population over time may make inference difficult: how do you interpret the combination of two different groups of individuals recruited to your trial, one group early and one group late? And finally, there is the risk of introducing bias. For example, if information about how the trial is going leaks out through the changes that you make, it may be harder to convince people to enroll in the trial.

So what's the structure of an adaptive design? We start with our initial enrollment, just as we do with a non-adaptive trial. Then, at a pre-specified point, we perform an interim analysis. Based upon the results of that interim analysis, we have three choices: do we stop the trial, do we continue as is, or do we make some change to the trial design? After that choice is made, if we have not stopped, we continue to the end of the trial, or to an additional interim analysis where a similar set of choices would be offered.
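This interim-analysis structure is also where the type I error risk mentioned above comes from. As a minimal illustration (a simulation sketch I've added here, not an example from the lecture), the following Python code peeks at the data once, halfway through a two-arm trial in which the null hypothesis is true, and declares success if either the interim or the final test is significant at the nominal 0.05 level:

```python
# Minimal sketch: type I error inflation from one unadjusted interim look.
# Both arms are drawn from the same distribution, so every "significant"
# result is a false positive.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2024)
n_per_arm, n_trials, false_positives = 200, 10_000, 0

for _ in range(n_trials):
    control = rng.normal(0.0, 1.0, n_per_arm)  # null is true:
    treated = rng.normal(0.0, 1.0, n_per_arm)  # identical distributions
    # Naive interim look at the first half of each arm, no adjustment.
    _, p_interim = stats.ttest_ind(control[:n_per_arm // 2],
                                   treated[:n_per_arm // 2])
    _, p_final = stats.ttest_ind(control, treated)
    if p_interim < 0.05 or p_final < 0.05:
        false_positives += 1

print(f"Empirical type I error: {false_positives / n_trials:.3f}")
# Comes out around 0.08, well above the nominal 0.05.
```

Even a single unadjusted look pushes the error rate from 5% to roughly 8%, which is why adaptive designs need pre-specified rules and simulation-based evaluation rather than informal peeking.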
Let's look at some common adaptations. The first example is to look at pooled data from the trial: you put together all of the data from all of the treatment arms, not comparing them by treatment but just looking at them as a whole. This is often used for sample size re-estimation. For example, you might examine the assumptions you made in your sample size calculation, such as the event rate or the variability of your measurements. If those assumptions were incorrect, you could update them based upon the data you've collected to recalculate your sample size or change the length of follow-up. Looking at the pooled data can also be used to determine whether or not you need to make changes to your analysis plan. Again, you can examine the assumptions that you made, for example about the distribution of the data, and then if necessary change your modeling choices, such as switching from a parametric model to a non-parametric model.

Baseline characteristics can also play a role in determining whether or not adaptations should be made. Adaptive randomization based on baseline characteristics examines the balance of key baseline characteristics after each participant. If there is imbalance, we change the allocation ratio in order to try to regain that balance. For example, in a two-armed trial, if one arm had a higher proportion of men than the other arm, the next male participant would be more likely to be assigned to the arm with fewer men; a small sketch of this kind of rule is shown below.
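Here is one way such a covariate-adaptive rule could look in code. This is a hypothetical sketch: the biased-coin probability of 0.7 is an arbitrary illustrative choice, not something specified in the lecture.

```python
# Hypothetical sketch of covariate-adaptive (biased-coin) randomization
# balancing a single baseline characteristic, sex, across two arms.
import random

counts = {"A": {"M": 0, "F": 0}, "B": {"M": 0, "F": 0}}

def assign(sex: str) -> str:
    """Assign the next participant, favoring the arm with fewer of their sex."""
    if counts["A"][sex] == counts["B"][sex]:
        arm = random.choice(["A", "B"])  # balanced: plain coin flip
    else:
        lagging = "A" if counts["A"][sex] < counts["B"][sex] else "B"
        other = "B" if lagging == "A" else "A"
        # Biased coin: the lagging arm is favored but not guaranteed.
        # The 0.7 probability is an arbitrary choice for illustration.
        arm = random.choices([lagging, other], weights=[0.7, 0.3])[0]
    counts[arm][sex] += 1
    return arm

# Example: a stream of enrolling participants.
for sex in ["M", "M", "F", "M", "F", "F", "M"]:
    print(sex, "->", assign(sex))
print(counts)
```

Because the coin is biased rather than deterministic, the next allocation remains somewhat unpredictable, which helps protect against selection bias while still pulling the arms back toward balance.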
We can also examine our eligibility criteria and look to see if there are any barriers to recruitment. If we identify barriers whose removal would not cause problems for our study, such as bias, we might modify those inclusion and exclusion criteria.

We can also use outcome assessment and comparisons between the treatment arms to decide whether or not to change the trial. A prime example of this is a dose-finding study. If toxicity is an issue, we may escalate through a range of doses, showing each one is safe before continuing. We would look at the toxicity at that particular dose level and then decide whether or not to change the dose for the next participant, or perhaps to increase the number of individuals treated at that dose if the safety were not clear. Similarly, stopping rules are a commonly used form of adaptation. We have a pre-planned interim analysis, and based upon that analysis we decide whether or not to stop the trial. This is a change in sample size: we are recruiting fewer participants than originally planned.

Let's look at an example: the Multicenter Uveitis Steroid Treatment (MUST) trial performed a sample size adjustment. The original design was for a two-armed superiority trial comparing the implant to corticosteroids with or without immunosuppression. The primary outcome was change in visual acuity at 2 years. We made a number of assumptions when designing the trial. We assumed that we would need to detect a difference of greater than five letters (one line) to say the groups were different. The standard deviation of change in visual acuity was assumed to be 3.2 lines. Since participants were allowed to enroll both of their eyes in the study, we had to account for the between-eye correlation; in this case, we assumed it was 0.6. We used a two-sided type I error rate of 0.05 and a power of 80%. This led to the determination that we would need 400 participants, 200 in each arm.

However, we ended up making a sample size adjustment. The motivation was both that our recruitment was slow and that we had updated information from internal and external sources. Let's take a look at those assumptions that we made. Upon review, it was determined that a difference of 1.4 to 1.6 lines is required in order to change clinical behavior. This is larger than the difference of one line that we originally tried to detect. We also looked at the one-year data from the MUST trial itself as well as another eye study, the Longitudinal Study of the Ocular Complications of AIDS (LSOCA). Data from those two studies showed that the standard deviation for change in visual acuity was 3.6, only slightly larger than the 3.2 we had first assumed. However, the between-eye correlation was only 0.4, smaller than we had assumed, so each eye was providing more information than expected. In addition, the proportion with bilateral disease was 67%, which was larger than we initially thought. Our type I error rate was unchanged. When we modified the sample size calculation based on these updated assumptions, we found that only 250 participants, 125 in each arm, would give us 98% power, much higher than the original 80% that we aimed for. The reduction in sample size allowed us to successfully complete the trial and answer our questions.
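The lecture does not spell out the exact formula the MUST investigators used, so as a rough illustration of the mechanics, here is a sketch based on the standard two-sample normal approximation with a simple effective-sample-size adjustment for between-eye correlation. The adjustment method, the 1.5-line detectable difference (the midpoint of 1.4 to 1.6), and the placeholder bilateral proportion for the original design are my assumptions, so the outputs show the direction of the change rather than reproducing the trial's 400 and 250 figures.

```python
# Rough sketch of a sample size calculation for a two-arm trial where some
# participants contribute two correlated eyes. This is an illustration,
# not the MUST trial's actual method.
from scipy.stats import norm

def participants_per_arm(delta, sd, rho, prop_bilateral,
                         alpha=0.05, power=0.80):
    """Participants per arm to detect a mean difference `delta` (in lines)
    with outcome SD `sd`, between-eye correlation `rho`, and a given
    proportion of participants enrolling both eyes."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    eyes_needed = 2 * z**2 * (sd / delta) ** 2  # independent eyes per arm
    # A bilateral participant's two correlated eyes are worth 2/(1+rho)
    # independent eyes; a unilateral participant contributes one.
    eyes_per_participant = (1 - prop_bilateral) + prop_bilateral * 2 / (1 + rho)
    return eyes_needed / eyes_per_participant

# Original assumptions: 1-line difference, SD 3.2, correlation 0.6.
# (The original bilateral proportion was not stated; 0.5 is a placeholder.)
print(round(participants_per_arm(1.0, 3.2, 0.6, 0.5)))

# Updated assumptions: 1.5-line difference, SD 3.6, correlation 0.4,
# 67% bilateral. The required sample size drops substantially.
print(round(participants_per_arm(1.5, 3.6, 0.4, 0.67)))
```

The key point the sketch captures is how the re-estimation works: a larger clinically meaningful difference and a weaker between-eye correlation both reduce the number of participants needed, even though the outcome standard deviation turned out slightly larger than planned.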
There are a number of less well understood adaptations which are gaining popularity. They typically rely on interim analyses comparing outcomes between the treatment arms. Examples include response-based adaptive randomization, enrichment designs, sample size adjustments, and changes in endpoint selection. Other areas where further research is needed are what happens when multiple study design features are changed, or when a non-inferiority study is being used instead of a superiority study. It's important to fully evaluate the characteristics of all of these different types of designs before implementing them.

Response-based adaptive randomization assigns individuals to treatments depending upon the outcomes already observed. In a sense, we play the winner: we're more likely to assign a participant to a treatment arm that is doing well than to one that is not doing as well. However, there are problems with this type of design that must be considered. You have to be able to ascertain the outcomes quickly in order to apply them to the next patient. It may actually require a larger sample size than a standard parallel design. The analyses are also quite complex. In some cases, the community may have difficulty believing the results; for example, if the play-the-winner design directs almost all of the participants to one arm and not the other, they may feel there's not enough evidence to truly compare the two arms. There's also the potential for chronological bias, for example one treatment never getting used, so it may be helpful to have a run-in period to obtain a minimum number in each group before going to adaptive randomization.

Enrichment trials are also quite popular. We use them when we suspect that the treatment effect differs by subpopulation but have minimal information to support this theory. Possible subgroups could be based on disease severity, the genetic pathway, or the targeting of agents. Our goal is to restrict enrollment to those who would benefit. This allows us to observe a larger treatment effect and also to increase the sample size in the population where the treatment works. To implement this, we randomize our initial population. Then we perform a pre-planned interim analysis and make the decision on whether or not to modify the population. If overall there's a strong treatment effect, we continue with our initial population. If the treatment effect is stronger in one subgroup, we limit the recruitment for the rest of the trial to that subgroup.

Here are some things to consider about adaptive designs. First, most trials make some changes; we may have been doing adaptive designs all along. Second, it's important to remember that adaptive trials are not an excuse for sloppy design. You must pre-specify the adaptations, create clear protocols and analysis plans, and evaluate the adaptive designs using simulations. It's important to have fast access to the data that is used to make your decisions. What are the risks? Could there be bias, type I or type II error inflation, information leaks, or distrust of the results? All of those factors should be considered when determining whether or not to use an adaptive design. It's very important as well that all partners understand the implications of the design choice. This includes regulatory agencies, such as the FDA in the US; monitoring bodies, such as a data and safety monitoring committee; the sponsor of the trial; clinicians; and participants. If they don't understand what is happening in your trial, they probably won't trust the results. I hope this gives you an introduction to adaptive designs, although we have only touched a small portion of the designs and issues that are involved. [MUSIC]