So in this segment, we'll give you at least an outline of the procedure to create a validation plan. This is not meant to be exhaustive. Please do not take these instructions, go to a regulatory agency, and say "we followed this"; that is not good enough. The goal here is to cover the principles involved, and of course you have to study the particular documents that relate to your application as you go through designing a validation study. Again, the usual disclaimer: this is simply one possible manner of doing it, a simplified recipe tuned for the needs of an undergraduate class, meant to illustrate the principles, not to give you every possible detail that you need to know. So here are some key questions to ask as you start designing a validation plan. The first one is: is there a current solution? Are we trying to do something better, cheaper, something that requires less expertise? Or is the goal just to create something that is substantially equivalent? This is often what is required for the 510(k) process at the FDA; we talked about this in Week Two. We find a predicate device and demonstrate that our solution is about the same, equally good: substantial equivalence. Same performance but cheaper is the easiest case. We show that our performance is the same and our device costs less; that's a very easy test to do. How one measures better performance is a critical question. This is where we get into the hypothesis testing that we discussed in the previous weeks' lectures. Or perhaps we have a new solution that is simply good enough. How does one measure "good enough"? Often this involves a comparison to the current standard of care. So if there is no algorithm, no software for doing this specific task, we have to ask: how is this condition diagnosed or treated right now, given the lack of tools? What is the standard of care? Can we improve on the standard of care?
So we're not competing against an existing tool; we're competing against the clinical standard of care, as it's practiced right now. The job of validation begins back at the needfinding phase. You have to ask your users what it would take for them to believe that your software works. What type of evidence would they need to be convinced to buy your software? And you want to start converting these qualitative, subjective statements into justified, objective, quantitative criteria. So if your user tells you, "Well, I need high performance, I need the software to be accurate," you have to start thinking: okay, what does accurate mean? 80%, 90%, 95%? Those are questions you have to start thinking about. And then you design a set of experiments to acquire the necessary measurements, and this may involve all the power analysis, the randomization, and the statistical techniques we talked about in the previous section. In software validation, there are some additional constraints. Software validation uses the finished software package as its input. We're not looking at code anymore; we're using the black box, the finished final product. We're testing the software from the perspective of the user, and ideally the validation team, remember we talked about this separation, is a separate team that was not involved in the implementation of the software. These are people who, ideally, have nothing to gain from the outcome. And we want to test the software in an environment that matches the user's actual environment. Remember, here we're testing the solution, not the code. If you remember the clip from Dr. Patrick, you may need to test how the outcome turns out, how the doctor performs when they use the software; we're testing the whole process here.
We test the actual software in as realistic a setting and environment as possible, and see how it helps an actual doctor or clinician do their task, or any other user, depending on the situation we're dealing with. So let's give a couple of examples. In the first example, our goal is to enable the user to perform a standard task accurately and efficiently. So we have to define the task precisely, so that it becomes a repeatable procedure: we get an image, we look at it, we circle a region, we compute a volume, something like that. We define "accurately" quantitatively; we have to get a number here, say 80%. Now, for some things there are accepted benchmarks out there, previous devices, guidelines, but we need to get a number. There's no way around this. And we need to define "efficiently" quantitatively: less than two minutes? Less than a minute? What would be acceptable? Those are things we have to figure out. Let's continue. We define the user in more detail: it's a nurse with some computational experience using a standard PC, a very concrete example. We recruit a set of representative users. We ask the users to perform the task in a simulated but very realistic environment, as close to the real one as possible. We analyze the data, and we show that the software meets or exceeds the expectations or needs of our users. That is one example. In a second example, the goal is to perform a certain task better than a standard or existing method. Here we do a randomized study where subjects are divided into two groups: one group performs the task using the standard method, and the other group performs the task using our new method. Then we have some kind of hypothesis test that says our new method outperforms the standard method in accuracy by more than 20%. That's the effect we're looking for, and the improvement should be statistically significant at some level of significance. So these are the kinds of examples that are often useful as templates for people to think about.
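To make the second example concrete, here is a minimal sketch of that kind of two-group comparison in Python. The accuracy scores are made-up, purely illustrative numbers, and the one-sided two-sample t-test is just one reasonable choice of analysis, not the prescribed one:

```python
from statistics import mean
from scipy import stats

# Hypothetical accuracy scores (fraction correct) for two groups of users:
# one group using the standard method, one using the new software.
standard = [0.61, 0.58, 0.65, 0.60, 0.57, 0.63, 0.59, 0.62]
new      = [0.82, 0.79, 0.85, 0.80, 0.77, 0.84, 0.81, 0.78]

# Relative improvement in mean accuracy over the standard method.
improvement = (mean(new) - mean(standard)) / mean(standard)

# One-sided two-sample t-test: is the new method's mean accuracy higher?
t_stat, p_value = stats.ttest_ind(new, standard, alternative='greater')

print(f"improvement: {improvement:.1%}, p = {p_value:.2g}")
```

With these illustrative numbers, the improvement exceeds the 20% effect we set out to find and the difference is statistically significant; in a real study, of course, the groups, sample sizes, and test would all come from the written plan.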
These are very similar to how we design scientific studies: one example of something new, and one example of an improvement over, a comparison to, something that exists. So we're going to walk through a template here, and we'll go through the steps in more detail in the next few slides. Let's look at what the template looks like. What I'm going to present applies to each question we need to answer, each question we need to provide objective evidence for. So if we have five intended uses, you're going to repeat this five times: once for each combination of user and use case, for each usage scenario, however you break your studies up. The subsections will answer four questions. What: this is our goal, our hypothesis. Why: why are we doing this? This is called the rationale. How: the experimental procedure. And finally, how will we know: the statistical data analysis. So let's examine each of these four steps in turn. The goal or hypothesis, the "what", starts with a statement of a goal: to demonstrate that the new technique significantly outperforms an existing technique by improving accuracy by 30%. That is our goal; this is effectively our hypothesis. It must be specific and quantitative, which makes it testable. If you don't have this number, 30%, the rest of the vocabulary in that sentence is useless for your purpose. We need to come up with a number; we need to define what exactly we're trying to do. That is the goal, the hypothesis. The next step is the rationale, a statement of why. Why is this a good thing? Why is 30% the number we're going for? Maybe it's something like: improving accuracy by 30% allows us to perform certain procedures, allows us to make diagnoses more accurately. This is critical. It feels like an afterthought, but the study is not convincing unless you justify why you set the bar at that level.
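One way to picture this template is as a checklist you fill out once per question. Here is a hypothetical skeleton of a single plan entry; the field contents are placeholders I made up for illustration, not prescribed wording:

```python
# Hypothetical skeleton of ONE validation-plan entry, following the
# what / why / how / analysis template. Repeat one entry per
# combination of user and use case.
plan_entry = {
    "what":     "Demonstrate that the new technique outperforms the "
                "existing technique by improving accuracy by 30%.",
    "why":      "Rationale: a 30% gain lets clinicians choose between "
                "treatment options (justification or benchmark citation).",
    "how":      "Randomized two-group study; task definition, subject and "
                "user recruitment, and measurement procedure, step by step.",
    "analysis": "One-sided two-sample t-test on accuracy scores at "
                "a pre-specified significance level.",
}

for section, text in plan_entry.items():
    print(f"{section.upper()}: {text}")
```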
So think about this experiment as a high jump: you decide where to set the bar, you have to clear it, and the judge decides whether the bar is high enough and whether you've cleared it. The 30% is how high the bar is in your high-jump experiment, if you wish. So justify why 30% is the right number. You may have to study what the errors are in current procedures, and what the downstream effects of using this type of software are. For example: only if we get the accuracy to within 10% will we be able to tell which treatment method to use afterwards. So you need to do your homework here to figure out why 30%; or maybe that's an established benchmark in the field, and you can simply cite it and continue. The next step is the methods: how will this be done? What experiments will you perform? This is a detailed procedure, a recipe for the experimental procedure that leads to a set of quantitative measurements. You may use a flowchart to illustrate the procedure, and it will include, of course, the subject and user recruitment details. It should be a full enough specification that if you hand it to somebody, they can follow the instructions and generate a set of measurements for you. This is the recipe; this is how you cook the food. This is how the procedure will happen. And finally, the data analysis, the statistical data analysis. How will we know that we have succeeded? How will we analyze our measurements? What are the procedures to be followed? This should also be boring and recipe-like: you take the input, you perform these steps. There should be enough instruction that somebody could follow it and come up with the result. So let's talk a little bit about statistical techniques. For demonstrating differences, you'll see things like t-tests, paired t-tests, and ANOVA. For demonstrating similarity or agreement, you'll see things like correlations, Cohen's kappa, and Bland-Altman plots. I'm just listing names here for you to look up; I'm not pretending that we're going to cover them.
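As a small taste of one of those agreement techniques, here is a sketch of the numbers behind a Bland-Altman analysis: the bias and 95% limits of agreement between paired measurements. The volumes below are invented, purely illustrative data:

```python
import numpy as np

# Hypothetical paired measurements: a volume (mL) computed by our software
# vs. the reference (manual) method, on the same ten scans.
software  = np.array([10.2, 15.1, 8.9, 20.5, 12.0, 17.8, 9.5, 14.2, 11.1, 16.0])
reference = np.array([10.0, 15.4, 9.2, 20.1, 11.8, 18.0, 9.3, 14.5, 11.0, 16.3])

diff = software - reference
bias = diff.mean()              # systematic offset between the two methods
loa  = 1.96 * diff.std(ddof=1)  # half-width of the 95% limits of agreement

print(f"bias = {bias:+.2f} mL, "
      f"limits of agreement = [{bias - loa:.2f}, {bias + loa:.2f}] mL")
```

The actual Bland-Altman plot shows each pair's difference against its mean, with these three horizontal lines drawn in; whether the limits are narrow enough is, again, a question your rationale has to answer.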
Here is the most important statement, if you remember nothing else from this segment: talk to your favorite statistician ahead of time. For the statistical analysis, remember the power analysis from Week Nine. The number of subjects needs to be specified beforehand, as part of the procedure, because if you don't have enough subjects to demonstrate statistical significance, you can spend a lot of money and time and end up with something that you cannot use to prove that your device works. I joke sometimes with students: find a friendly statistician ahead of time, and they'll help you, guide you through the process. If you don't talk to them ahead of time and just hand them results at the end, saying "please analyze this," the statistician will not be so friendly. They may not be able to help you, because it's too late: you didn't acquire the data that you need for the statistical analysis. So this concludes our discussion of validation. In the remaining two segments of this week's lectures, we'll look at the back end of the software life cycle. We'll look at delivery and installation of the software, the process of moving the software from the lab to the user's site, and then the final two steps, maintenance and retirement. Thank you.
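As a closing aside, to make the power-analysis point concrete: here is a small sketch of solving for the sample size before the study begins, using statsmodels. The effect size, significance level, and target power below are illustrative assumptions; in practice they come from your pilot data, the literature, and your statistician:

```python
import math
from statsmodels.stats.power import TTestIndPower

# Illustrative assumptions (not prescribed values):
effect_size = 0.8   # expected standardized difference between groups (Cohen's d)
alpha = 0.05        # significance level
power = 0.80        # desired probability of detecting a real effect

# Solve for the number of subjects per group BEFORE acquiring any data.
n_per_group = TTestIndPower().solve_power(effect_size=effect_size,
                                          alpha=alpha, power=power)

print(f"subjects needed per group: {math.ceil(n_per_group)}")
```

If the computed sample size is larger than you can afford to recruit, that is something you want to discover at the planning stage, not after the data are in.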