Up to now, we've only periodically mentioned the importance of asking the right question up front, along with a general framework for planning out the main task. Now we're going to round this out a bit and talk about clinical machine learning project development. The name of the game here is finding problems worth solving. This is important because while there may be plenty of potential to solve a given healthcare-related problem, that doesn't always mean it's a worthwhile problem to solve. In other words, just because you can doesn't necessarily mean that you should. After all, we are talking about medical problems here, not math problems.

One way to understand the value of a potential solution is to consider all, or at least as many as you can think of, the actions and repercussions that would follow from that solution. This is where having input from multiple stakeholders and domain experts is really important. The question driving your thinking should be: what action will or will not be taken as a result of a successful model output? The answer will depend on the type of model output, the clinical context, the available data, and the final analysis.

On the screen is a three-by-three table that breaks down common application settings and types of model output into three broad categories each. Using this table will help you consider the purpose of your model and its context a little more systematically. For the potential applications of a model, we consider a few things. Is it a scientific exploration or discovery? Is it a clinical care or decision support system? Or is it about care delivery or managing a medical practice? For the three categories of model output, we think about classification, prediction, and recommendation. Some, or even many, applications may fall into more than one of these categories. But we like to at least go through the effort of thinking about this table and these categories to guide our thinking, see where our ideas fit in, and formulate our question in more specific, deliberate terms.

Let's dive a little deeper into the three-by-three table by reviewing some possibilities. Will your model classify something? For example, a scientific discovery question might be classifying patients into subtypes of heart failure with preserved ejection fraction. A classification question in clinical practice might be finding out which patients are at high risk for thromboembolism. Classification in healthcare delivery could be identifying which patients are dissatisfied with care or which practitioners are burnt out. Will your model predict something? An example in the scientific discovery category might be estimating future disease risk based on genomic profiling; a prediction objective in the practice category might be predicting which patients are at risk of dying in the next three months; and a prediction output for the delivery of care would be determining who will be a no-show in a clinic. Or will your model recommend a specific action? A scientific exploration here might be recommending a chemotherapy combination for certain tumors based on biomarker and patient data. An action recommendation model in the practice category might suggest an antibiotic regimen based on the patterns of pneumonia on chest x-rays. A recommendation model in the healthcare delivery category would be one that adjusts ICU nurse staffing, for example.
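To make the table a bit more concrete, here is a minimal sketch in Python of the nine cells, filled in with the examples just discussed. The dictionary layout and the locate helper are purely illustrative assumptions, not part of any tooling from the course; the point is simply that every project idea should land in one cell of the grid.

```python
# A minimal sketch of the three-by-three framework, populated with the
# examples discussed above. The structure and names here are hypothetical.

FRAMEWORK = {
    ("scientific discovery", "classification"): "subtypes of heart failure with preserved ejection fraction",
    ("scientific discovery", "prediction"):     "future disease risk from genomic profiling",
    ("scientific discovery", "recommendation"): "chemotherapy combination from biomarker and patient data",
    ("clinical practice",    "classification"): "patients at high risk for thromboembolism",
    ("clinical practice",    "prediction"):     "patients at risk of dying in the next three months",
    ("clinical practice",    "recommendation"): "antibiotic regimen from pneumonia patterns on chest x-rays",
    ("care delivery",        "classification"): "patients dissatisfied with care / burnt-out practitioners",
    ("care delivery",        "prediction"):     "clinic no-shows",
    ("care delivery",        "recommendation"): "ICU nurse staffing adjustments",
}

def locate(application: str, output_type: str) -> str:
    """Return the example project that sits in a given cell of the table."""
    return FRAMEWORK[(application, output_type)]

# Which cell does a no-show predictor belong to?
print(locate("care delivery", "prediction"))  # -> "clinic no-shows"
```

Forcing a project idea through a lookup like this is just a way of making sure you have named both the application setting and the output type before going any further.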
So once you have a sense of which general categories your main objective fits into (classification, prediction, or recommendation; scientific discovery, clinical practice, or care delivery), you can use that information to start thinking about whether the problem is worth solving by analyzing the action that results from the output, something we will refer to as the output-action pairing, or, as the cool people call it, the OAP. Those of you who are experienced with project management may be very familiar with various forms of this tactic. It comes essentially from that classic, universal piece of management advice (and maybe life advice too), which is to always begin with the end in mind. That concept is so common it's practically a cliche, but don't let its simplicity fool you into ignoring it as you work through your own clinical machine learning project.

We like to use the OAP to gain a clear understanding of the output of the model we're building, and then think carefully about whether the problem is worth solving based on the actions available to us as a result of that output. This means considering not only a correct decision by the model but also an incorrect one. What if it's a false positive or a false negative? What effect would that output have on the action? This exercise should be done for all possible model outputs, correct and incorrect, and referring to the confusion matrix can help here; the sketch at the end of this section walks through an example. This analysis should lead to a clear understanding of the utility of the model in the clinical setting. It assumes that you've succeeded in building the model you wanted and are now asking, "So then what?" If you cannot clearly articulate the OAP, the approach you need is probably not machine learning; instead, you should first do more exploration of your data with exploratory analytics and data mining. We would argue that doing this effectively matters far more to the final outcome of the application than recruiting or becoming the best computer scientist, having the most advanced algorithms at your disposal, having a perfect data set, or really anything else. Of course, it tends to be easier to articulate the OAP if your primary output label is something you already know the answer to, or already need in your current practice, so that you have a reference and some intuition for comparing your results and judging how they might be used in practice.

Hopefully we've made it clear how important it is to spend deliberate time and effort considering the final implementation of the model before you start. You can use the OAP and the other categorization frameworks to help you judge whether the problem a given machine learning solution addresses is really worth solving. Again, whether or not you choose this table or any of the processes we've suggested, what matters most is that you plan to consider these elements before you begin, whether you're building your own model or even adopting a commercial solution.
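To make the OAP exercise concrete, here is a minimal sketch in Python that walks each cell of a binary model's confusion matrix and pairs it with an action and a repercussion. The scenario (a thromboembolism risk flag) and the specific actions and repercussions are illustrative assumptions, not recommendations from the lecture; the value is in filling out a table like this with your own stakeholders.

```python
# A minimal sketch of the output-action pairing (OAP) exercise for a binary
# classifier. The clinical scenario and consequences below are hypothetical
# placeholders used only to show the shape of the exercise.

from dataclasses import dataclass

@dataclass
class OutputActionPair:
    model_output: str   # which confusion-matrix cell we are considering
    action_taken: str   # what the care team would do with that output
    repercussion: str   # the downstream consequence worth weighing

# Hypothetical scenario: model flags patients at high risk for thromboembolism.
oap_table = [
    OutputActionPair("true positive",  "start prophylactic anticoagulation",
                     "clot likely prevented; small bleeding risk accepted"),
    OutputActionPair("false positive", "start prophylactic anticoagulation",
                     "unnecessary bleeding risk and cost"),
    OutputActionPair("true negative",  "no change to usual care",
                     "resources stay focused on higher-risk patients"),
    OutputActionPair("false negative", "no change to usual care",
                     "missed clot; potentially the most harmful cell"),
]

for pair in oap_table:
    print(f"{pair.model_output:>14}: {pair.action_taken} -> {pair.repercussion}")
```

If you cannot fill in the action and repercussion columns for every row, that is a strong signal the problem is not yet ready for a machine learning solution.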