To understand the potential for bias in AI solutions, we will walk through a few real-world examples that demonstrate the problem. Let's start with an example of heart failure and gender bias. Randomized clinical trials of treatments for chronic heart failure are the basis of clinical assertions, guidelines, and practice patterns. However, not long ago, we didn't know that the symptoms of a heart attack look different in women compared to men, which led to differences in cardiovascular mortality rates between women and men. We can develop a model based on these known symptoms of heart attack. Such a model might be used to triage patients presenting with heart attack symptoms in the emergency department. However, the problem here is that the model was developed based on male symptoms. The model might be very accurate at identifying males with heart attack symptoms, but it might not work well for females, who present with different symptoms. Our model is essentially making men's experience of heart attack the standard for all heart attacks. We haven't studied women's heart attack symptoms enough at a clinical level, and the labels we're training the model on in many cases aren't accurate with respect to the true phenomenon we're intending to model: heart attack in both men and women. The predictive accuracy, when analyzed against the true clinical outcomes, will decline for women but not for men. This bias was identified in the well-known Framingham study of cardiovascular disease.

As another example, we can look at the genomic databases used across the research community, where we find major racial bias in the genomic samples collected. These databases are at the center of precision medicine, where our ability to identify whether a genetic variant is responsible for a given disease or phenotypic trait depends in part on the confidence in labeling a variant as pathogenic.
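Before we go deeper into genomics, note that the accuracy gap described for the heart-attack model is easy to miss if you only look at a single overall number. A minimal sketch of stratified evaluation, using entirely hypothetical prediction/outcome pairs rather than any real clinical data:

```python
# Sketch: a reasonable-looking overall accuracy can hide a large
# subgroup gap. All numbers below are hypothetical, for illustration.

def accuracy(pairs):
    """Fraction of (prediction, outcome) pairs that agree."""
    return sum(pred == outcome for pred, outcome in pairs) / len(pairs)

# Hypothetical triage results: (model prediction, true outcome),
# where 1 = heart attack. The model misses far more female cases.
male_cases   = [(1, 1)] * 45 + [(0, 0)] * 45 + [(0, 1)] * 5  + [(1, 0)] * 5
female_cases = [(1, 1)] * 25 + [(0, 0)] * 40 + [(0, 1)] * 30 + [(1, 0)] * 5

overall = accuracy(male_cases + female_cases)
by_sex = {
    "male": accuracy(male_cases),      # 0.90
    "female": accuracy(female_cases),  # 0.65
}

print(f"overall: {overall:.2f}")       # one seemingly acceptable number
for group, acc in by_sex.items():
    print(f"{group}: {acc:.2f}")       # the stratified view exposes the gap
```

The point of the sketch is simply that evaluation must be broken down by the groups we care about; an aggregate metric computed mostly over the majority group tells us little about minority-group performance.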
Now, our ability to identify and label gene variants relies heavily on these publicly available genomic databases. However, research suggests that these databases heavily reflect European ancestry and are missing major population-specific pathogenicity data, particularly for African populations. In fact, a meta-analysis of over 2,500 genome-mapping studies from around the world found that 81 percent of participants were of European descent. This has severe real-world impacts. For example, researchers who use these publicly available data to study disease are far more likely to use the genomic data of people of European descent than those of African, Asian, Hispanic, or Middle Eastern descent. Therefore, genetic test results for persons of non-European ancestry could be less accurate, more challenging to interpret, or simply unattainable. Developing AI models based on these European-centric, publicly available genomic data sets could produce biased solutions that are more beneficial to persons of European ancestry, which will make the results from these databases challenging to implement and interpret across broad populations.

In a more recent example, emerging evidence suggests that AI-driven dermatology could save thousands of people from skin cancer each year. In general, patients with darker skin present with more advanced skin disease and have lower survival rates than fair-skinned patients. While there is enthusiasm about the expectation that AI could improve early detection rates for all, as it stands, it is possible that only fair-skinned populations may benefit, because darker-skinned patients are not well represented in model training and development. Not only is this a problem for pigmented lesions; there are also potential shortcomings in the features used to develop and train the AI algorithms, related to lesion location, patient age, and degree of sun damage.
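One simple first defense against this kind of representation bias, in both the genomic and the dermatology examples, is to audit the training data's subgroup composition before any modeling begins. A minimal sketch with a hypothetical dermatology data set; the skin-type buckets and the 10 percent threshold are assumptions for illustration, not a real dataset schema or an accepted standard:

```python
from collections import Counter

# Hypothetical dermatology training set; skin-type groups loosely follow
# Fitzpatrick-scale buckets purely for illustration.
records = (
    [{"skin_type": "I-II"}] * 820
    + [{"skin_type": "III-IV"}] * 150
    + [{"skin_type": "V-VI"}] * 30
)

counts = Counter(r["skin_type"] for r in records)
total = sum(counts.values())

for group in ("I-II", "III-IV", "V-VI"):
    share = counts[group] / total
    # Arbitrary illustrative threshold for flagging underrepresentation.
    flag = "  <- underrepresented" if share < 0.10 else ""
    print(f"{group}: {counts[group]:4d} records ({share:.1%}){flag}")
```

An audit like this does not fix the bias, but it surfaces the skew early, before an "accurate" model quietly inherits it.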
While it has been suggested that AI solutions in dermatology will detect potentially cancerous skin lesions better than dermatologists do, the problem is again that the data behind the models come primarily from fair-skinned populations. If the algorithm is basing most of its knowledge on how skin lesions appear on fair skin, then, theoretically, lesions on patients of color are less likely to be diagnosed, and those patients are less likely to benefit from the AI solution. These examples give you an idea of how widespread the challenges related to fairness and bias in AI solutions for healthcare are. Next, we will discuss specific types of bias we encounter with AI solutions.