So sadly, our journey has now come to an end. You can always rewatch this course at any time, and we can do this all over again, but in general, we hope that this has been an educational and hopefully also an enjoyable ride. Before we sign off, we'll leave you with a few last tips, and we'll discuss a few things for the future.

As we wrap up our Introductory Clinical Machine Learning journey, let's reflect a bit on what we've covered and add a few more tips that will hopefully tie things together for you and help you in the future. We started this course by discussing what exactly machine learning is and why it is useful in health care. Then we took a deep dive into the fundamental concepts and principles of machine learning, from how to define machine learning problems, to how to train machine learning models, to the different types of machine learning models, ranging from the simplest regression approaches all the way to complex, modern deep learning models. Perhaps most importantly, we spent a lot of time on metrics, evaluation, and best practices in clinical machine learning. It's sometimes hard to realize how far you've come, but if you've managed to stay with us the whole time, you should be comfortable with the fundamentals and confident in your ability to participate in designing machine learning applications in many areas of health care. We continued by discussing important challenges and potential pitfalls to be aware of, and suggested strategies for handling them. Finally, we concluded by describing key components of project development and diverse multidisciplinary teams for machine learning in health care, as well as the human factors and societal impacts of a technology like machine learning, which has the potential to disrupt entire industries, even health care.

In general, we've talked about some important best practices, like beginning with a question and knowing your output-action pairing. We also covered the idea that, after finding the right problem to solve, the next step is to look for the simplest approach that solves it, even if it's not a machine learning solution. Machine learning is really best for situations where other approaches don't get you the performance you need, and it is useful in many areas of health care. But don't fall into the trap of treating every health care problem like a nail just because you have this new machine learning hammer. We've also covered examples where things can go off track when using some of the more complex machine learning approaches to solve a problem, and this is an excellent argument for starting simple. Ideally, your model is only as complex as needed, with the fewest parameters possible. Simplicity has benefits, and if you go with the simplest approach that meets your needs, you'll likely be happier with the final implementation. It's also important to keep in mind that trying multiple algorithms is a great way to understand the data and the problem, and thankfully, most machine learning toolkits support multiple types of algorithms, so once your data is clean and organized, it's easy to test out a few.
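To make that concrete, here is a minimal sketch of what that kind of quick comparison might look like, assuming Python and scikit-learn; the synthetic dataset and the particular candidate models are placeholders for illustration, not a recommendation for any specific clinical task.

```python
# A minimal sketch of trying a few algorithm families on one cleaned,
# labeled dataset (synthetic here, standing in for real clinical data).
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier

# Stand-in for a cleaned, organized feature table with binary labels.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)

# Start with the simplest model and only add complexity if metrics demand it.
candidates = {
    "logistic regression": make_pipeline(StandardScaler(),
                                         LogisticRegression(max_iter=1000)),
    "decision tree": DecisionTreeClassifier(max_depth=5, random_state=0),
    "gradient boosting": GradientBoostingClassifier(random_state=0),
}

for name, model in candidates.items():
    # Cross-validated AUROC gives a quick, comparable first read on each model.
    scores = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
    print(f"{name}: AUROC = {scores.mean():.3f} (+/- {scores.std():.3f})")
```

If the simplest model in a comparison like this already meets your performance needs, that's usually the one to carry forward.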
Trying multiple algorithms is also related to the initial data approach we recommended, which mirrors common biostatistical project designs. In contrast to a brute-force data approach, which might lead you to first attempt to build a huge data empire, we found it best to start with a minimal set of data and to let the metrics drive the decision about how much data you ultimately need.

We also mentioned the ways your data can lead to bad models, bad interpretations, bad correlations, or can just be plain terrible. One of the ways to stay aware of these issues along the way is to always be skeptical of your data, your model, and any metric numbers, no matter how bad they are, but especially if they turn out well. Dig into the data with your team, take a close look at the false positives and false negatives in detail, and use error analyses to look for patterns, examine for systematic biases, and really try to explain why the model fails in certain circumstances. This is ultimately one of the most important attributes of a successful project: a robust multidisciplinary team and a dedicated effort toward uncovering biases and flaws in the model. Another approach we recommended is to work toward acquiring external data sets, if it makes sense for the prediction-output pairing and you can line up the labels and data types, because it's an incredibly useful way to get a true sense of how your model's performance holds up on the ultimate holdout test set: one from another institution.

We also covered a lot in prior modules about important tasks and metrics for validation, and taking the time to ensure that the test set is as free from noise as possible, is independent, and truly provides all the information needed to make decisions on performance. This is an absolute must. But one more important point to consider about model evaluation and the labels in the test set is the idea of important but rare, hidden, or stratified label subclasses, which, although rare in health care, are often also catastrophic and potentially even deadly to miss. For example, suppose a model trained to detect bleeding in the brain on head CT scans is evaluated on a test set labeled only as bleed or no bleed, and not by each individual type of bleed. The model may achieve really good metric performance. However, if there are rare subtypes of brain bleeds, such as subarachnoid hemorrhage, that were not included among the bleed-positive cases in your test set, which is likely to happen because these subtypes are rare and the test set is typically randomly sampled, then the model may show excellent performance on the overall task of detecting brain bleeding, but you would not know how well it performs on the rare, but arguably even more important and dangerous, brain bleed subtypes. Chances are, because a subtype like this looks different from the more common types of brain bleeds, the model would miss it entirely. There is a long tail of potential rare subclasses of diseases throughout medicine that may not, and some that will never, make it into the test set due to random chance. As a result, no matter what the output-action pair of the model is, assumptions can be made about performance that neglect important but rare labels. It's important to remember that there's no such thing as a random set of data, only a random process to generate the holdout data. After all, if you randomly flip seven coins, they can all come up heads, an outcome with probability (1/2)^7 = 1/128, or about 0.8%, and a sample like that won't really work as a viable validation for your model.
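To make the subtype point concrete, here is a minimal sketch of a per-subtype check, assuming Python with NumPy; the toy predictions and subtype annotations are hypothetical, and in practice you would need subtype labels for your test set from your clinical experts.

```python
# A minimal sketch: overall sensitivity can look excellent while a rare,
# hidden subtype is missed entirely. Toy arrays stand in for real model
# output; 1 = bleed, 0 = no bleed.
import numpy as np

y_true = np.array([1, 1, 1, 1, 1, 1, 0, 0, 0, 0])
y_pred = np.array([1, 1, 1, 1, 1, 0, 0, 0, 1, 0])
# Hypothetical subtype annotations for each test case.
subtype = np.array(["epidural", "subdural", "subdural", "intraparenchymal",
                    "subdural", "subarachnoid", "none", "none", "none", "none"])

# Overall sensitivity on the bleed/no-bleed task looks reassuring...
positives = y_true == 1
print(f"overall sensitivity: {(y_pred[positives] == 1).mean():.2f}")

# ...but stratifying by subtype reveals the rare class the model misses.
for sub in np.unique(subtype[positives]):
    mask = positives & (subtype == sub)
    sens = (y_pred[mask] == 1).mean()
    print(f"  {sub}: sensitivity = {sens:.2f} (n = {mask.sum()})")
```

Here the overall sensitivity is 0.83, yet the single subarachnoid case is missed, and a randomly sampled test set might not have contained it at all.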
So making sure that the test set is statistically sampled and free of biases is important, but so is checking that it contains the important labels for your use case, even when your model appears to be doing very well on the task. We also discussed all kinds of other errors, and how incorporating multidisciplinary teams with feedback loops to catch and improve on those errors will help you. This is especially true when designing ways to gather feedback and bring it back into model training.

So we'd like to leave you with these lasting tips as we wrap up this introductory course for Machine Learning in Health Care. We tried to touch on a broad array of the most important concepts and principles, and hopefully this has helped you level up your understanding and confidence, and maybe even piqued your interest and inspired you to explore any number of these topics in further depth. This course is really a labor of love on our part and our attempt to share with you an overview of all of the things we would have wanted to know before we started our own journeys into clinical machine learning research and applications. Our sincere hope in making this available to you is that more and more stakeholders all over the world, like you, will take what we have shared in this course, go on to advance the field in your own area and your own domain, and really inspire others in the process. In that way, we hope that we can impact and hopefully improve health care for everyone.