Any discussion of the role of artificial intelligence in healthcare has to carefully distinguish the development of models, algorithms, predictive tools, and so on, from routine uses of those tools in medical practice, whether as a decision aid or as a screening tool. Development of AI is a research activity to identify, develop, and refine tools that may later be useful in providing healthcare. Once particular applications of AI have been validated, or found to be acceptable to healthcare providers, payers, or systems, the uses of those applications are part of clinical practice rather than research. So that's the important distinction to know: the distinction between research and practice.

Research is an activity that is usually structured and has as its primary goal the production of generalizable knowledge. It's often designed to generate or test a hypothesis, and generally to produce knowledge or develop new interventions. The individuals who are the subjects of research are referred to as research participants, while the individuals who conduct the research are researchers. In contrast, clinical practice is an activity that aims to benefit individual patients. It's supposed to make people get better, whether by helping to make a diagnosis, prevent disease, or provide treatment. All of these activities are meant to benefit patients, and there should be a reasonable expectation of success in the clinical practice being undertaken.

We'll eventually see ways in which this distinction can be challenging. But a great deal of the modern history of research ethics, as well as most regulatory systems, draws a fairly sharp line between research on the one hand and clinical practice on the other. This boundary was drawn in response to a series of scandals and revelations in the 1960s and 70s that raised concerns about the ability of clinical researchers to self-regulate.
The Tuskegee syphilis study, a natural history study in which African-American men were denied antibiotics and were misled into thinking that lumbar punctures and other data collection measures were treatments for "bad blood," was the most infamous of several problematic studies. In 1966, Harvard anesthesiologist and bioethics pioneer Henry Beecher published a seminal paper in the New England Journal of Medicine documenting numerous published clinical trials in which no attempt had been made to obtain informed consent from research participants. Many trials at the time were conducted by clinicians who studied their own patients without consent, with the roles of researcher and clinician blurred.

As a result of these scandals, new guidance for the conduct of research on human subjects was developed. The Nuremberg Code, written in 1947 in the aftermath of revelations of unethical Nazi experimentation, had clearly not been sufficient; given the findings of Beecher and others, more needed to be done. The Declaration of Helsinki, first published in 1964 and revised several times since, became the first major update and included a requirement for independent review of formal research protocols. Many countries have adopted Helsinki as their regulations for research on human subjects.

In the United States, however, we did something a little different. In 1974, the National Research Act created the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research. The commission's final report, the Belmont Report, was issued in 1978 and published in the Federal Register in 1979. While there was significant overlap with Helsinki, particularly regarding the concrete recommendations about how to regulate human subjects research, the Belmont Report included a very influential ethical framework that provided a foundation for those recommendations.
In 1981, the US adopted a regulatory framework built largely on those recommendations, and the Belmont Report continues to be the basis of the research regulations that apply today. The Belmont Report articulates three ethical principles that should govern the protection of human subjects in research. Others have since attempted to show that these same principles play an important role in the ethical conduct of clinical practice as well.

The first ethical principle articulated in the Belmont Report is respect for persons. This principle, or duty, can be broken into two components: the first is to respect the autonomy of agents who have the capacity to make their own decisions, and the second is to protect individuals with diminished capacity.

The second principle of the Belmont Report is beneficence. This obligation can also be broken down into two components. The first is a fundamental principle of medical ethics that has been widely seen as applicable to biomedical research: primum non nocere, or "do no harm." The second, more particular component is that the possible benefits of research should be maximized and the possible harms minimized.

The third principle is justice. This requires a fair distribution of the benefits and burdens of research: you don't want one group of individuals being burdened with research for the benefit of others in an unfair fashion. Aristotle's formal principle of justice requires that similar cases be treated similarly, or put differently, that if two cases or individuals are to be treated differently, there must be a relevant difference between them.

Together, these principles and their application are the keys to understanding the requirements for ethical development of new AI tools.