You would find epidemiological knowledge essential when designing and conducting research studies. However, we review and critique studies designed and executed by others much more frequently, either for research purposes or in order to make clinical and policy decisions. When critiquing epidemiological studies, you will often hear or read about concepts such as validity and bias, which determine whether the results of a study are relevant and should be trusted. In this lecture, I will introduce these concepts, which we'll discuss in more detail later.

When critiquing a particular study, there are some key questions to consider. One of these is whether any inferences arising from it are valid for the source population of the study. For example, a study may report an association between a new drug and improved survival among male cancer patients in a university hospital. There are many reasons why this might not reflect the truth, such as flaws in the design or the execution of the study. But if we believe that this association truly exists among this group of patients, then we say that the study has internal validity.

Another equally important question is whether these inferences are applicable to individuals outside the source population. Internal validity is a prerequisite for this: if we don't think the results reflect the truth in the source population, discussing whether they can be generalised to other groups of people is pointless. But let's assume that taking this new drug is in fact associated with improved survival among male cancer patients in the university hospital where the study was conducted, and that the researchers have done an excellent job showing this. We would say that the study has external validity if we believe that this finding applies to other groups of cancer patients, such as female patients in the same hospital or patients treated in different settings and countries.
External validity is sometimes referred to as generalisability, and it largely determines the real-life impact of a finding beyond the specific setting where the research was conducted.

Closely linked to validity is the concept of bias. Simply put, an inference is valid when there is no bias. According to one popular definition, bias is any trend in the collection, analysis, interpretation, publication, or review of data that can lead to conclusions that are systematically different from the truth. The key word here is systematically: a systematic error in the design or conduct of a study can result in bias, which means that the observed results may differ from the truth.

All measures, even simple ones such as weight or height, are imprecise to a certain extent; therefore, some error is always expected. But it is crucial to make the distinction between random and systematic error. Let's assume that this is the distribution of height in the population. If, due to chance, we sometimes overestimate and sometimes underestimate height when we measure it, we have random error, and the measurements might end up looking like this. As you may infer, this is a problem because it leads to imprecision, but with a large enough sample we can be quite confident that the errors cancel each other out. In contrast, if we systematically overestimate height, the distribution of our measurements will look like this and will lead to inaccurate results regardless of the sample size.

In conclusion, systematic error can introduce bias into a study, which in turn undermines its validity. Bias can take many forms, and scientists have identified many types of bias and their variations over the years. To make things more difficult, there are myriad classifications and names for the biases observed in epidemiological studies. For the purposes of this course, however, we will consider three broad categories of bias: selection bias, information bias, and confounding.
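The contrast between random and systematic measurement error can be made concrete with a small simulation. This is a hypothetical sketch, not part of the study discussed in the lecture: the population mean height, the error standard deviation, and the 2 cm systematic offset are all invented numbers chosen purely for illustration.

```python
import random

random.seed(42)

# Hypothetical "true" heights (cm) for a large population.
true_heights = [random.gauss(170, 10) for _ in range(100_000)]
true_mean = sum(true_heights) / len(true_heights)

# Random error: each measurement is off by a zero-mean amount (chance).
measured_random = [h + random.gauss(0, 3) for h in true_heights]

# Systematic error: every measurement overestimates by 2 cm.
measured_systematic = [h + 2 for h in true_heights]

mean_random = sum(measured_random) / len(measured_random)
mean_systematic = sum(measured_systematic) / len(measured_systematic)

# With a large sample, the random errors cancel each other out,
# so the sample mean lands very close to the truth.
print(abs(mean_random - true_mean))      # close to 0

# The systematic error does not cancel: the mean stays about 2 cm off,
# no matter how large the sample is.
print(abs(mean_systematic - true_mean))  # close to 2
```

Running this shows the asymmetry described above: random error costs precision but not accuracy, whereas systematic error shifts the whole distribution and cannot be fixed by collecting more data.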